Oct 02 18:13:50 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 18:13:50 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 18:13:50 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:13:50 localhost kernel: BIOS-provided physical RAM map:
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 18:13:50 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
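A quick cross-check of the map above: summing the three usable e820 ranges (end addresses are inclusive) gives roughly the 8 GiB this guest was sized with, consistent with the later "Memory: 7765552K/8388068K available" line. A minimal sketch, with the ranges copied verbatim from the map:

```python
# Sum the "usable" BIOS-e820 ranges logged above; end addresses are inclusive.
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x00000000bffdafff),
    (0x0000000100000000, 0x000000023fffffff),
]
total = sum(end - start + 1 for start, end in usable)
print(f"{total} bytes = {total / 2**30:.2f} GiB")  # 8589061120 bytes = 8.00 GiB
```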
Oct 02 18:13:50 localhost kernel: NX (Execute Disable) protection: active
Oct 02 18:13:50 localhost kernel: APIC: Static calls initialized
Oct 02 18:13:50 localhost kernel: SMBIOS 2.8 present.
Oct 02 18:13:50 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 18:13:50 localhost kernel: Hypervisor detected: KVM
Oct 02 18:13:50 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 18:13:50 localhost kernel: kvm-clock: using sched offset of 4042672881 cycles
Oct 02 18:13:50 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 18:13:50 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 02 18:13:50 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 18:13:50 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 18:13:50 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 18:13:50 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 18:13:50 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 18:13:50 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 18:13:50 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 18:13:50 localhost kernel: Using GB pages for direct mapping
Oct 02 18:13:50 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 18:13:50 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 18:13:50 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 18:13:50 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:13:50 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:13:50 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:13:50 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 18:13:50 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:13:50 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:13:50 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 18:13:50 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 18:13:50 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 18:13:50 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 18:13:50 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 18:13:50 localhost kernel: No NUMA configuration found
Oct 02 18:13:50 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 18:13:50 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 02 18:13:50 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
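The 256 MB figure follows from the crashkernel= parameter on the command line above: with roughly 8 GiB of RAM, the 2G-64G:256M range applies. A simplified sketch of that range matching (the authoritative semantics live in the kernel's kdump documentation; this helper is illustrative only):

```python
# Parse crashkernel=<range>:<size>,... and pick the reservation for a given
# RAM size. For this 8 GiB guest the 2G-64G:256M entry matches, which is the
# 256 MB reservation logged above.
UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def to_bytes(s):
    return int(s[:-1]) * UNITS[s[-1]] if s[-1] in UNITS else int(s)

def crashkernel_size(spec, ram_bytes):
    for entry in spec.split(","):
        rng, size = entry.split(":")
        start, _, end = rng.partition("-")  # "64G-" means "64G and above"
        if to_bytes(start) <= ram_bytes and (not end or ram_bytes < to_bytes(end)):
            return to_bytes(size)
    return 0

spec = "1G-2G:192M,2G-64G:256M,64G-:512M"
print(crashkernel_size(spec, 8 * 2**30) // 2**20, "MB")  # 256 MB
```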
Oct 02 18:13:50 localhost kernel: Zone ranges:
Oct 02 18:13:50 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 18:13:50 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 18:13:50 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 18:13:50 localhost kernel:   Device   empty
Oct 02 18:13:50 localhost kernel: Movable zone start for each node
Oct 02 18:13:50 localhost kernel: Early memory node ranges
Oct 02 18:13:50 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 18:13:50 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 18:13:50 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 18:13:50 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 18:13:50 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 18:13:50 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 18:13:50 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 18:13:50 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 18:13:50 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 18:13:50 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 18:13:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 18:13:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 18:13:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 18:13:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 18:13:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 18:13:50 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 18:13:50 localhost kernel: TSC deadline timer available
Oct 02 18:13:50 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 18:13:50 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 18:13:50 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 18:13:50 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 18:13:50 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 18:13:50 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 18:13:50 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
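The topology summary is internally consistent: QEMU presents each vCPU here as its own single-core, single-thread package, so 8 packages yield 8 logical CPUs, and the total BogoMIPS reported later once all CPUs are up is simply 8 × 5600. A small sanity check:

```python
# Cross-check the CPU topo block: 8 packages x 1 core/package x 1 thread/core.
packages, cores_per_pkg, threads_per_core = 8, 1, 1
logical_cpus = packages * cores_per_pkg * threads_per_core
assert logical_cpus == 8                    # "Allowing 8 present CPUs"
assert logical_cpus * 5600.00 == 44800.00   # per-CPU vs. total BogoMIPS
```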
Oct 02 18:13:50 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 18:13:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 18:13:50 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 18:13:50 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 18:13:50 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 18:13:50 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 18:13:50 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 18:13:50 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 18:13:50 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 18:13:50 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 18:13:50 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:13:50 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
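BOOT_IMAGE= is added by GRUB rather than being a real kernel parameter, which is why the kernel flags it as unknown and forwards it to user space. The same command line stays readable at runtime; an illustrative sketch (Linux-only, reads /proc/cmdline):

```python
# The kernel re-exports its command line at /proc/cmdline; parameters it does
# not recognize (like GRUB's BOOT_IMAGE=...) are passed through to init.
params = open("/proc/cmdline").read().split()
cmdline = dict(p.split("=", 1) if "=" in p else (p, True) for p in params)
print(cmdline.get("root"), cmdline.get("crashkernel"))
```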
Oct 02 18:13:50 localhost kernel: random: crng init done
Oct 02 18:13:50 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 18:13:50 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 18:13:50 localhost kernel: Fallback order for Node 0: 0 
Oct 02 18:13:50 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 18:13:50 localhost kernel: Policy zone: Normal
Oct 02 18:13:50 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 18:13:50 localhost kernel: software IO TLB: area num 8.
Oct 02 18:13:50 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 18:13:50 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 18:13:50 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 18:13:50 localhost kernel: Dynamic Preempt: voluntary
Oct 02 18:13:50 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 18:13:50 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 18:13:50 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 18:13:50 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 18:13:50 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 18:13:50 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 18:13:50 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 18:13:50 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 18:13:50 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:13:50 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:13:50 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:13:50 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 18:13:50 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 18:13:50 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 18:13:50 localhost kernel: Console: colour VGA+ 80x25
Oct 02 18:13:50 localhost kernel: printk: console [ttyS0] enabled
Oct 02 18:13:50 localhost kernel: ACPI: Core revision 20230331
Oct 02 18:13:50 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 18:13:50 localhost kernel: x2apic enabled
Oct 02 18:13:50 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 18:13:50 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 18:13:50 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 02 18:13:50 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 18:13:50 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 18:13:50 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 18:13:50 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 18:13:50 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 18:13:50 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 18:13:50 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 18:13:50 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 18:13:50 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 18:13:50 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 18:13:50 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 18:13:50 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 18:13:50 localhost kernel: x86/bugs: return thunk changed
Oct 02 18:13:50 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 18:13:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 18:13:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 18:13:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 18:13:50 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 18:13:50 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 18:13:50 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 18:13:50 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 18:13:50 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 18:13:50 localhost kernel: landlock: Up and running.
Oct 02 18:13:50 localhost kernel: Yama: becoming mindful.
Oct 02 18:13:50 localhost kernel: SELinux:  Initializing.
Oct 02 18:13:50 localhost kernel: LSM support for eBPF active
Oct 02 18:13:50 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 18:13:50 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 18:13:50 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 18:13:50 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 18:13:50 localhost kernel: ... version:                0
Oct 02 18:13:50 localhost kernel: ... bit width:              48
Oct 02 18:13:50 localhost kernel: ... generic registers:      6
Oct 02 18:13:50 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 18:13:50 localhost kernel: ... max period:             00007fffffffffff
Oct 02 18:13:50 localhost kernel: ... fixed-purpose events:   0
Oct 02 18:13:50 localhost kernel: ... event mask:             000000000000003f
Oct 02 18:13:50 localhost kernel: signal: max sigframe size: 1776
Oct 02 18:13:50 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 18:13:50 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 18:13:50 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 18:13:50 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 18:13:50 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 18:13:50 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 18:13:50 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 02 18:13:50 localhost kernel: node 0 deferred pages initialised in 23ms
Oct 02 18:13:50 localhost kernel: Memory: 7765552K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616508K reserved, 0K cma-reserved)
Oct 02 18:13:50 localhost kernel: devtmpfs: initialized
Oct 02 18:13:50 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 18:13:50 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 18:13:50 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 18:13:50 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 18:13:50 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 18:13:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 18:13:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 18:13:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 18:13:50 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 18:13:50 localhost kernel: audit: type=2000 audit(1759428828.369:1): state=initialized audit_enabled=0 res=1
Oct 02 18:13:50 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 18:13:50 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 18:13:50 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 18:13:50 localhost kernel: cpuidle: using governor menu
Oct 02 18:13:50 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 18:13:50 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 18:13:50 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 18:13:50 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 18:13:50 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 18:13:50 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 18:13:50 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 18:13:50 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 18:13:50 localhost kernel: Demotion targets for Node 0: null
Oct 02 18:13:50 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 18:13:50 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 18:13:50 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 18:13:50 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 18:13:50 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 18:13:50 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 18:13:50 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 18:13:50 localhost kernel: ACPI: Interpreter enabled
Oct 02 18:13:50 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 18:13:50 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 18:13:50 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 18:13:50 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 18:13:50 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 18:13:50 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 18:13:50 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [3] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [4] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [5] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [6] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [7] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [8] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [9] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [10] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [11] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [12] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [13] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [14] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [15] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [16] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [17] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [18] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [19] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [20] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [21] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [22] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [23] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [24] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [25] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [26] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [27] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [28] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [29] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [30] registered
Oct 02 18:13:50 localhost kernel: acpiphp: Slot [31] registered
Oct 02 18:13:50 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 18:13:50 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 18:13:50 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 18:13:50 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 18:13:50 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 18:13:50 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
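Vendor ID 1af4 is Red Hat, Inc., i.e. virtio. Mapping the device IDs enumerated above to their conventional names (names inferred from the virtio PCI ID convention; the class codes logged for each function corroborate them, as do the later virtio_blk/vda messages):

```python
# virtio PCI IDs seen above (vendor 0x1af4 = Red Hat, Inc.); class codes in
# the log corroborate: 0x030000 display, 0x020000 network, 0x010000 storage.
VIRTIO = {
    0x1050: "virtio-gpu     (00:02.0, class 0x030000)",
    0x1000: "virtio-net     (00:03.0, class 0x020000)",
    0x1001: "virtio-blk     (00:04.0, class 0x010000)",
    0x1002: "virtio-balloon (00:05.0)",
    0x1005: "virtio-rng     (00:06.0)",
}
for dev, name in sorted(VIRTIO.items()):
    print(f"1af4:{dev:04x}  {name}")
```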
Oct 02 18:13:50 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 18:13:50 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 18:13:50 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 18:13:50 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 18:13:50 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 18:13:50 localhost kernel: iommu: Default domain type: Translated
Oct 02 18:13:50 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 18:13:50 localhost kernel: SCSI subsystem initialized
Oct 02 18:13:50 localhost kernel: ACPI: bus type USB registered
Oct 02 18:13:50 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 18:13:50 localhost kernel: usbcore: registered new interface driver hub
Oct 02 18:13:50 localhost kernel: usbcore: registered new device driver usb
Oct 02 18:13:50 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 18:13:50 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 18:13:50 localhost kernel: PTP clock support registered
Oct 02 18:13:50 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 18:13:50 localhost kernel: NetLabel: Initializing
Oct 02 18:13:50 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 18:13:50 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 18:13:50 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 18:13:50 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 18:13:50 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 18:13:50 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 18:13:50 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 18:13:50 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 18:13:50 localhost kernel: vgaarb: loaded
Oct 02 18:13:50 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 18:13:50 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 18:13:50 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 18:13:50 localhost kernel: pnp: PnP ACPI init
Oct 02 18:13:50 localhost kernel: pnp 00:03: [dma 2]
Oct 02 18:13:50 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 18:13:50 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 18:13:50 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 18:13:50 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 18:13:50 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 18:13:50 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 18:13:50 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 18:13:50 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 18:13:50 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 18:13:50 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 18:13:50 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 18:13:50 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
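Each hash-table line reports an entry count plus an (order, bytes) pair; with 4 KiB pages the byte size is always PAGE_SIZE << order, while the per-bucket size varies by table. A check against the values logged above:

```python
# Verify the (order, bytes) pairs logged above against 4 KiB pages.
PAGE = 4096
tables = {  # name: (entries, order, bytes) copied from the log
    "TCP established": (65536, 7, 524288),
    "TCP bind":        (65536, 8, 1048576),
    "UDP":             (4096,  5, 131072),
}
for name, (entries, order, size) in tables.items():
    assert size == PAGE << order
    print(f"{name}: {size // entries} bytes/bucket")  # 8, 16, 32
```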
Oct 02 18:13:50 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 18:13:50 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 18:13:50 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 18:13:50 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 18:13:50 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 18:13:50 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 74001 usecs
Oct 02 18:13:50 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 18:13:50 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 18:13:50 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 18:13:50 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 18:13:50 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 18:13:50 localhost kernel: Initialise system trusted keyrings
Oct 02 18:13:50 localhost kernel: Key type blacklist registered
Oct 02 18:13:50 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 18:13:50 localhost kernel: zbud: loaded
Oct 02 18:13:50 localhost kernel: integrity: Platform Keyring initialized
Oct 02 18:13:50 localhost kernel: integrity: Machine keyring initialized
Oct 02 18:13:50 localhost kernel: Freeing initrd memory: 86104K
Oct 02 18:13:50 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 18:13:50 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 18:13:50 localhost kernel: Key type asymmetric registered
Oct 02 18:13:50 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 18:13:50 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 18:13:50 localhost kernel: io scheduler mq-deadline registered
Oct 02 18:13:50 localhost kernel: io scheduler kyber registered
Oct 02 18:13:50 localhost kernel: io scheduler bfq registered
Oct 02 18:13:50 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 18:13:50 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 18:13:50 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 18:13:50 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 18:13:50 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 18:13:50 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 18:13:50 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 18:13:50 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 18:13:50 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 18:13:50 localhost kernel: Non-volatile memory driver v1.3
Oct 02 18:13:50 localhost kernel: rdac: device handler registered
Oct 02 18:13:50 localhost kernel: hp_sw: device handler registered
Oct 02 18:13:50 localhost kernel: emc: device handler registered
Oct 02 18:13:50 localhost kernel: alua: device handler registered
Oct 02 18:13:50 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 18:13:50 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 18:13:50 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 18:13:50 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 18:13:50 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 18:13:50 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 18:13:50 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 18:13:50 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 18:13:50 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 18:13:50 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 18:13:50 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 18:13:50 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 18:13:50 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 18:13:50 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 18:13:50 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 18:13:50 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 18:13:50 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 18:13:50 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 18:13:50 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 18:13:50 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 18:13:50 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T18:13:49 UTC (1759428829)
Oct 02 18:13:50 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
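The RTC handoff pins down the wall-clock epoch: 1759428829 seconds corresponds to the 2025-10-02T18:13:49 UTC shown, and the slightly earlier audit timestamp (1759428828.369) is the same clock just before the final set. For reference:

```python
# Convert the epoch value from the rtc_cmos line back to the UTC time shown.
from datetime import datetime, timezone
print(datetime.fromtimestamp(1759428829, tz=timezone.utc).isoformat())
# -> 2025-10-02T18:13:49+00:00
```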
Oct 02 18:13:50 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 18:13:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 18:13:50 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 18:13:50 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 18:13:50 localhost kernel: usbhid: USB HID core driver
Oct 02 18:13:50 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 18:13:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 18:13:50 localhost kernel: Initializing XFRM netlink socket
Oct 02 18:13:50 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 18:13:50 localhost kernel: Segment Routing with IPv6
Oct 02 18:13:50 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 18:13:50 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 18:13:50 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 18:13:50 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 18:13:50 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 18:13:50 localhost kernel: sched_clock: Marking stable (1173005970, 153542940)->(1438516240, -111967330)
Oct 02 18:13:50 localhost kernel: registered taskstats version 1
Oct 02 18:13:50 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 18:13:50 localhost kernel: Demotion targets for Node 0: null
Oct 02 18:13:50 localhost kernel: page_owner is disabled
Oct 02 18:13:50 localhost kernel: Key type .fscrypt registered
Oct 02 18:13:50 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 18:13:50 localhost kernel: Key type big_key registered
Oct 02 18:13:50 localhost kernel: Key type encrypted registered
Oct 02 18:13:50 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 18:13:50 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 18:13:50 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 18:13:50 localhost kernel: ima: No architecture policies found
Oct 02 18:13:50 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 18:13:50 localhost kernel: evm: security.selinux
Oct 02 18:13:50 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 18:13:50 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 18:13:50 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 18:13:50 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 18:13:50 localhost kernel: evm: security.apparmor (disabled)
Oct 02 18:13:50 localhost kernel: evm: security.ima
Oct 02 18:13:50 localhost kernel: evm: security.capability
Oct 02 18:13:50 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 18:13:50 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 18:13:50 localhost kernel: Running certificate verification RSA selftest
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 18:13:50 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 18:13:50 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 18:13:50 localhost kernel: clk: Disabling unused clocks
Oct 02 18:13:50 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 18:13:50 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 18:13:50 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 18:13:50 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 18:13:50 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 18:13:50 localhost kernel: Run /init as init process
Oct 02 18:13:50 localhost kernel:   with arguments:
Oct 02 18:13:50 localhost kernel:     /init
Oct 02 18:13:50 localhost kernel:   with environment:
Oct 02 18:13:50 localhost kernel:     HOME=/
Oct 02 18:13:50 localhost kernel:     TERM=linux
Oct 02 18:13:50 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 18:13:50 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 18:13:50 localhost systemd[1]: Detected virtualization kvm.
Oct 02 18:13:50 localhost systemd[1]: Detected architecture x86-64.
Oct 02 18:13:50 localhost systemd[1]: Running in initrd.
Oct 02 18:13:50 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 18:13:50 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 18:13:50 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 18:13:50 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 18:13:50 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 18:13:50 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 18:13:50 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 18:13:50 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 18:13:50 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 18:13:50 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 18:13:50 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 18:13:50 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 18:13:50 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 18:13:50 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 18:13:50 localhost systemd[1]: Reached target Local File Systems.
Oct 02 18:13:50 localhost systemd[1]: Reached target Path Units.
Oct 02 18:13:50 localhost systemd[1]: Reached target Slice Units.
Oct 02 18:13:50 localhost systemd[1]: Reached target Swaps.
Oct 02 18:13:50 localhost systemd[1]: Reached target Timer Units.
Oct 02 18:13:50 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 18:13:50 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 18:13:50 localhost systemd[1]: Listening on Journal Socket.
Oct 02 18:13:50 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 18:13:50 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 18:13:50 localhost systemd[1]: Reached target Socket Units.
Oct 02 18:13:50 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 18:13:50 localhost systemd[1]: Starting Journal Service...
Oct 02 18:13:50 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 18:13:50 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:13:50 localhost systemd[1]: Starting Create System Users...
Oct 02 18:13:50 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 18:13:50 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 18:13:50 localhost systemd-journald[312]: Journal started
Oct 02 18:13:50 localhost systemd-journald[312]: Runtime Journal (/run/log/journal/b440ca7fed294df19220db3fd23c361a) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:13:50 localhost systemd[1]: Started Journal Service.
Oct 02 18:13:50 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:13:50 localhost systemd-sysusers[317]: Creating group 'users' with GID 100.
Oct 02 18:13:50 localhost systemd-sysusers[317]: Creating group 'dbus' with GID 81.
Oct 02 18:13:50 localhost systemd-sysusers[317]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 18:13:50 localhost systemd[1]: Finished Create System Users.
Oct 02 18:13:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 18:13:51 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 18:13:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 18:13:51 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 18:13:51 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 18:13:51 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 18:13:51 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 18:13:51 localhost dracut-cmdline[332]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 18:13:51 localhost dracut-cmdline[332]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:13:51 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 18:13:51 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 18:13:51 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 18:13:51 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 18:13:51 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 18:13:51 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 18:13:51 localhost kernel: RPC: Registered udp transport module.
Oct 02 18:13:51 localhost kernel: RPC: Registered tcp transport module.
Oct 02 18:13:51 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 18:13:51 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 18:13:51 localhost rpc.statd[449]: Version 2.5.4 starting
Oct 02 18:13:51 localhost rpc.statd[449]: Initializing NSM state
Oct 02 18:13:51 localhost rpc.idmapd[454]: Setting log level to 0
Oct 02 18:13:51 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 18:13:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 18:13:51 localhost systemd-udevd[467]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 18:13:51 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 18:13:51 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 18:13:51 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 18:13:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 18:13:51 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 18:13:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:13:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 18:13:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:13:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:13:51 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 18:13:51 localhost systemd[1]: Reached target Network.
Oct 02 18:13:51 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 18:13:51 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 18:13:51 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 18:13:51 localhost kernel: libata version 3.00 loaded.
Oct 02 18:13:51 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 18:13:51 localhost kernel: scsi host0: ata_piix
Oct 02 18:13:51 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 18:13:51 localhost kernel: scsi host1: ata_piix
Oct 02 18:13:51 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 18:13:51 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 18:13:51 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 18:13:51 localhost kernel:  vda: vda1
Oct 02 18:13:51 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 18:13:51 localhost systemd[1]: Reached target System Initialization.
Oct 02 18:13:51 localhost systemd[1]: Reached target Basic System.
Oct 02 18:13:52 localhost kernel: ata1: found unknown device (class 0)
Oct 02 18:13:52 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 18:13:52 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 18:13:52 localhost systemd-udevd[504]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:13:52 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 18:13:52 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 18:13:52 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 18:13:52 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 18:13:52 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 18:13:52 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 18:13:52 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 18:13:52 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 18:13:52 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 18:13:52 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 18:13:52 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 18:13:52 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 18:13:52 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 18:13:52 localhost systemd-fsck[560]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 18:13:52 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 18:13:52 localhost systemd[1]: Mounting /sysroot...
Oct 02 18:13:52 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 18:13:52 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 18:13:52 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 18:13:52 localhost systemd[1]: Mounted /sysroot.
Oct 02 18:13:52 localhost systemd[1]: Reached target Initrd Root File System.
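root=UUID=... is resolved through the udev-created /dev/disk/by-uuid symlink, which on this guest points at vda1, the partition the XFS mount messages name. An illustrative resolution (the expected output is what this log implies, not verified here):

```python
# Resolve the by-uuid symlink dracut used to find the root device.
import os
uuid = "1631a6ad-43b8-436d-ae76-16fa14b94458"
print(os.path.realpath(f"/dev/disk/by-uuid/{uuid}"))  # e.g. /dev/vda1
```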
Oct 02 18:13:52 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 18:13:52 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 18:13:52 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 18:13:52 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 18:13:52 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 18:13:52 localhost systemd[1]: Starting dracut mount hook...
Oct 02 18:13:53 localhost systemd[1]: Finished dracut mount hook.
Oct 02 18:13:53 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 18:13:53 localhost rpc.idmapd[454]: exiting on signal 15
Oct 02 18:13:53 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 18:13:53 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 18:13:53 localhost systemd[1]: Stopped target Network.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Timer Units.
Oct 02 18:13:53 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 18:13:53 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Basic System.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Path Units.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Slice Units.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Socket Units.
Oct 02 18:13:53 localhost systemd[1]: Stopped target System Initialization.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Swaps.
Oct 02 18:13:53 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 18:13:53 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 18:13:53 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 18:13:53 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 18:13:53 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 18:13:53 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 18:13:53 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 18:13:53 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 18:13:53 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 18:13:53 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 18:13:53 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 18:13:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 18:13:53 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Closed udev Control Socket.
Oct 02 18:13:53 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 18:13:53 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 18:13:53 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 18:13:53 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 18:13:53 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 18:13:53 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 18:13:53 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Stopped Create System Users.
Oct 02 18:13:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 18:13:53 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 18:13:53 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 18:13:53 localhost systemd[1]: Reached target Switch Root.
Oct 02 18:13:53 localhost systemd[1]: Starting Switch Root...
Oct 02 18:13:53 localhost systemd[1]: Switching root.
Oct 02 18:13:53 localhost systemd-journald[312]: Journal stopped
Oct 02 18:13:54 localhost systemd-journald[312]: Received SIGTERM from PID 1 (systemd).
Oct 02 18:13:54 localhost kernel: audit: type=1404 audit(1759428833.651:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability open_perms=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:13:54 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:13:54 localhost kernel: audit: type=1403 audit(1759428833.828:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 02 18:13:54 localhost systemd[1]: Successfully loaded SELinux policy in 181.612ms.
Oct 02 18:13:54 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.113ms.
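Note: the two audit records above are SELinux state transitions: type=1404 (MAC_STATUS) marks the switch to enforcing, and type=1403 (MAC_POLICY_LOAD) is the policy load that systemd then reports. A minimal way to confirm the resulting state from a shell, using only the standard SELinux tooling:

    # current mode: Enforcing / Permissive / Disabled
    getenforce
    # mode, loaded policy name and version
    sestatus
    # replay the MAC_STATUS / MAC_POLICY_LOAD events recorded above
    ausearch -m MAC_STATUS,MAC_POLICY_LOAD -ts boot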
Oct 02 18:13:54 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 18:13:54 localhost systemd[1]: Detected virtualization kvm.
Oct 02 18:13:54 localhost systemd[1]: Detected architecture x86-64.
Oct 02 18:13:54 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
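Note: systemd-rc-local-generator pulls rc-local.service into the boot only when /etc/rc.d/rc.local exists and carries the execute bit, so the line above is informational on a stock image. If the compatibility hook is actually wanted, a sketch:

    # opt in to the rc.local compatibility unit on the next boot
    chmod +x /etc/rc.d/rc.local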
Oct 02 18:13:54 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Stopped Switch Root.
Oct 02 18:13:54 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 02 18:13:54 localhost systemd[1]: Created slice Slice /system/getty.
Oct 02 18:13:54 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 02 18:13:54 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 02 18:13:54 localhost systemd[1]: Created slice User and Session Slice.
Oct 02 18:13:54 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 18:13:54 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 02 18:13:54 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 02 18:13:54 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 18:13:54 localhost systemd[1]: Stopped target Switch Root.
Oct 02 18:13:54 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 02 18:13:54 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 02 18:13:54 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 02 18:13:54 localhost systemd[1]: Reached target Path Units.
Oct 02 18:13:54 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 02 18:13:54 localhost systemd[1]: Reached target Slice Units.
Oct 02 18:13:54 localhost systemd[1]: Reached target Swaps.
Oct 02 18:13:54 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 02 18:13:54 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 02 18:13:54 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 02 18:13:54 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 02 18:13:54 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 02 18:13:54 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 18:13:54 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 18:13:54 localhost systemd[1]: Mounting Huge Pages File System...
Oct 02 18:13:54 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 02 18:13:54 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 02 18:13:54 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 02 18:13:54 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 18:13:54 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 18:13:54 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:13:54 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 02 18:13:54 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 02 18:13:54 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 02 18:13:54 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 02 18:13:54 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 02 18:13:54 localhost systemd[1]: Stopped Journal Service.
Oct 02 18:13:54 localhost systemd[1]: Starting Journal Service...
Oct 02 18:13:54 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 18:13:54 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 02 18:13:54 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:13:54 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 02 18:13:54 localhost systemd-journald[681]: Journal started
Oct 02 18:13:54 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:13:54 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 02 18:13:54 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 02 18:13:54 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:13:54 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
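Note: the XFS remount notice means the root filesystem was created without the bigtime feature, so inode timestamps top out at 2038-01-19 (0x7fffffff seconds). On recent xfsprogs the feature flag is visible directly; a sketch, assuming the RHEL 9 tooling prints it:

    # bigtime=1 would lift the 2038 limit; this root should report bigtime=0
    xfs_info / | grep -o 'bigtime=[01]'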
Oct 02 18:13:54 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 18:13:54 localhost systemd[1]: Started Journal Service.
Oct 02 18:13:54 localhost systemd[1]: Mounted Huge Pages File System.
Oct 02 18:13:54 localhost kernel: ACPI: bus type drm_connector registered
Oct 02 18:13:54 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 02 18:13:54 localhost kernel: fuse: init (API version 7.37)
Oct 02 18:13:54 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 02 18:13:54 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 02 18:13:54 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 18:13:54 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:13:54 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 02 18:13:54 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 02 18:13:54 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 02 18:13:54 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 02 18:13:54 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 02 18:13:54 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 02 18:13:54 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 02 18:13:54 localhost systemd[1]: Mounting FUSE Control File System...
Oct 02 18:13:54 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 18:13:54 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 02 18:13:54 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 02 18:13:54 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 02 18:13:54 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 02 18:13:54 localhost systemd[1]: Starting Create System Users...
Oct 02 18:13:54 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:13:54 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:13:54 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 02 18:13:54 localhost systemd[1]: Mounted FUSE Control File System.
Oct 02 18:13:54 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 02 18:13:54 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 02 18:13:54 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 18:13:54 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 18:13:54 localhost systemd[1]: Finished Create System Users.
Oct 02 18:13:54 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 18:13:54 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 18:13:54 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 02 18:13:54 localhost systemd[1]: Reached target Local File Systems.
Oct 02 18:13:54 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 02 18:13:54 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 02 18:13:54 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 02 18:13:54 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 02 18:13:54 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 02 18:13:54 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 02 18:13:54 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 18:13:54 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Oct 02 18:13:54 localhost systemd[1]: Finished Automatic Boot Loader Update.
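Note: bootctl finding no EFI system partition is expected on a BIOS-booted guest such as this one, and the unit still finishes successfully. A quick firmware-type check, as a sketch:

    # the directory exists only when the kernel was booted via UEFI
    test -d /sys/firmware/efi && echo UEFI || echo BIOS
    # bootctl reports the same determination in more detail
    bootctl status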
Oct 02 18:13:54 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 18:13:54 localhost systemd[1]: Starting Security Auditing Service...
Oct 02 18:13:54 localhost systemd[1]: Starting RPC Bind...
Oct 02 18:13:54 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 02 18:13:54 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 02 18:13:54 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 02 18:13:54 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 02 18:13:54 localhost systemd[1]: Started RPC Bind.
Oct 02 18:13:54 localhost augenrules[709]: /sbin/augenrules: No change
Oct 02 18:13:54 localhost augenrules[724]: No rules
Oct 02 18:13:54 localhost augenrules[724]: enabled 1
Oct 02 18:13:54 localhost augenrules[724]: failure 1
Oct 02 18:13:54 localhost augenrules[724]: pid 704
Oct 02 18:13:54 localhost augenrules[724]: rate_limit 0
Oct 02 18:13:54 localhost augenrules[724]: backlog_limit 8192
Oct 02 18:13:54 localhost augenrules[724]: lost 0
Oct 02 18:13:54 localhost augenrules[724]: backlog 4
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time 60000
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 02 18:13:54 localhost augenrules[724]: enabled 1
Oct 02 18:13:54 localhost augenrules[724]: failure 1
Oct 02 18:13:54 localhost augenrules[724]: pid 704
Oct 02 18:13:54 localhost augenrules[724]: rate_limit 0
Oct 02 18:13:54 localhost augenrules[724]: backlog_limit 8192
Oct 02 18:13:54 localhost augenrules[724]: lost 0
Oct 02 18:13:54 localhost augenrules[724]: backlog 4
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time 60000
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 02 18:13:54 localhost augenrules[724]: enabled 1
Oct 02 18:13:54 localhost augenrules[724]: failure 1
Oct 02 18:13:54 localhost augenrules[724]: pid 704
Oct 02 18:13:54 localhost augenrules[724]: rate_limit 0
Oct 02 18:13:54 localhost augenrules[724]: backlog_limit 8192
Oct 02 18:13:54 localhost augenrules[724]: lost 0
Oct 02 18:13:54 localhost augenrules[724]: backlog 4
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time 60000
Oct 02 18:13:54 localhost augenrules[724]: backlog_wait_time_actual 0
Oct 02 18:13:54 localhost systemd[1]: Started Security Auditing Service.
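Note: the repeated enabled/failure/pid/backlog block is augenrules echoing the kernel audit status once per pass while it loads rules, and "No rules" means /etc/audit/rules.d compiled to an empty set. The same fields can be queried at any time with the stock audit tooling:

    # live kernel audit status (enabled, backlog_limit, lost, ...)
    auditctl -s
    # currently loaded rules; prints "No rules" on this host
    auditctl -l
    # recompile /etc/audit/rules.d/*.rules and load the result
    augenrules --load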
Oct 02 18:13:55 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 02 18:13:55 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 02 18:13:55 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 02 18:13:55 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 18:13:55 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 18:13:55 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 02 18:13:55 localhost systemd[1]: Starting Update is Completed...
Oct 02 18:13:55 localhost systemd[1]: Finished Update is Completed.
Oct 02 18:13:55 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 18:13:55 localhost systemd[1]: Reached target System Initialization.
Oct 02 18:13:55 localhost systemd[1]: Started dnf makecache --timer.
Oct 02 18:13:55 localhost systemd[1]: Started Daily rotation of log files.
Oct 02 18:13:55 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 02 18:13:55 localhost systemd[1]: Reached target Timer Units.
Oct 02 18:13:55 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 18:13:55 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 02 18:13:55 localhost systemd[1]: Reached target Socket Units.
Oct 02 18:13:55 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 02 18:13:55 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:13:55 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 02 18:13:55 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:13:55 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:13:55 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:13:55 localhost systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:13:55 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 02 18:13:55 localhost systemd[1]: Reached target Basic System.
Oct 02 18:13:55 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 02 18:13:55 localhost dbus-broker-lau[761]: Ready
Oct 02 18:13:55 localhost systemd[1]: Starting NTP client/server...
Oct 02 18:13:55 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 02 18:13:55 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 02 18:13:55 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 02 18:13:55 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 02 18:13:55 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 02 18:13:55 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 02 18:13:55 localhost systemd[1]: Started irqbalance daemon.
Oct 02 18:13:55 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 02 18:13:55 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:13:55 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:13:55 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:13:55 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 02 18:13:55 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 02 18:13:55 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 02 18:13:55 localhost systemd[1]: Starting User Login Management...
Oct 02 18:13:55 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 02 18:13:55 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 02 18:13:55 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 02 18:13:55 localhost kernel: Console: switching to colour dummy device 80x25
Oct 02 18:13:55 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 02 18:13:55 localhost kernel: [drm] features: -context_init
Oct 02 18:13:55 localhost kernel: [drm] number of scanouts: 1
Oct 02 18:13:55 localhost kernel: [drm] number of cap sets: 0
Oct 02 18:13:55 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 02 18:13:55 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 02 18:13:55 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 02 18:13:55 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 02 18:13:55 localhost chronyd[804]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 18:13:55 localhost chronyd[804]: Loaded 0 symmetric keys
Oct 02 18:13:55 localhost chronyd[804]: Using right/UTC timezone to obtain leap second data
Oct 02 18:13:55 localhost chronyd[804]: Loaded seccomp filter (level 2)
Oct 02 18:13:55 localhost systemd[1]: Started NTP client/server.
Oct 02 18:13:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 02 18:13:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 02 18:13:55 localhost systemd-logind[793]: New seat seat0.
Oct 02 18:13:55 localhost systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 18:13:55 localhost systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 18:13:55 localhost systemd[1]: Started User Login Management.
Oct 02 18:13:55 localhost kernel: kvm_amd: TSC scaling supported
Oct 02 18:13:55 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 02 18:13:55 localhost kernel: kvm_amd: Nested Paging enabled
Oct 02 18:13:55 localhost kernel: kvm_amd: LBR virtualization supported
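Note: the kvm_amd lines show the guest kernel loading KVM with nested virtualization and nested paging (NPT) enabled, so this VM could itself host VMs. The module parameter can be read back at runtime, a sketch:

    # 1 (or Y) when nested virtualization is enabled
    cat /sys/module/kvm_amd/parameters/nested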
Oct 02 18:13:55 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Oct 02 18:13:55 localhost systemd[1]: Finished IPv4 firewall with iptables.
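Note: the two "Deprecated Driver" warnings a few lines up fire because the iptables service on RHEL 9 is the nf_tables-backed iptables; legacy rules are translated through the nft_compat module that the kernel flags as deprecated. Ways to see which backend is in use and what was loaded, as a sketch:

    # prints the backend in parentheses, e.g. "(nf_tables)"
    iptables -V
    # the translated rules as nftables sees them
    nft list ruleset
    # the familiar iptables view of the same rule set
    iptables -S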
Oct 02 18:13:56 localhost cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 18:13:56 +0000. Up 7.81 seconds.
Oct 02 18:13:56 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 02 18:13:56 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 02 18:13:56 localhost systemd[1]: run-cloud\x2dinit-tmp-tmppr4b0oph.mount: Deactivated successfully.
Oct 02 18:13:56 localhost systemd[1]: Starting Hostname Service...
Oct 02 18:13:56 localhost systemd[1]: Started Hostname Service.
Oct 02 18:13:56 np0005467081.novalocal systemd-hostnamed[855]: Hostname set to <np0005467081.novalocal> (static)
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Reached target Preparation for Network.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Starting Network Manager...
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.8776] NetworkManager (version 1.54.1-1.el9) is starting... (boot:335cdeae-868d-405b-8b1c-eba838d0b699)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.8782] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.8934] manager[0x559fedf84080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.8996] hostname: hostname: using hostnamed
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.8998] hostname: static hostname changed from (none) to "np0005467081.novalocal"
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9002] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9129] manager[0x559fedf84080]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9129] manager[0x559fedf84080]: rfkill: WWAN hardware radio set enabled
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9207] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9208] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9209] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9209] manager: Networking is enabled by state file
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9212] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9247] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9274] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9299] dhcp: init: Using DHCP client 'internal'
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9302] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9318] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9332] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9339] device (lo): Activation: starting connection 'lo' (ad43cc60-0861-4113-b2c8-fcae658eed34)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9348] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9351] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9388] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9392] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9395] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9396] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9399] device (eth0): carrier: link connected
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9403] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9410] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9418] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9422] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9423] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9425] manager: NetworkManager state is now CONNECTING
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9426] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Started Network Manager.
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9432] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9435] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Reached target Network.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9609] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9612] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:13:56 np0005467081.novalocal NetworkManager[859]: <info>  [1759428836.9621] device (lo): Activation: successful, device activated.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Reached target NFS client services.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: Reached target Remote File Systems.
Oct 02 18:13:56 np0005467081.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7299] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7312] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7337] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7379] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7383] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7388] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7394] device (eth0): Activation: successful, device activated.
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7401] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:13:58 np0005467081.novalocal NetworkManager[859]: <info>  [1759428838.7405] manager: startup complete
Oct 02 18:13:58 np0005467081.novalocal systemd[1]: Finished Network Manager Wait Online.
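Note: with eth0 activated and NetworkManager reporting startup complete, the DHCP lease behind the state changes above can be inspected through nmcli, a sketch:

    # connection state, addresses and raw DHCP4 options for eth0
    nmcli -f GENERAL,IP4,DHCP4 device show eth0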
Oct 02 18:13:58 np0005467081.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 18:13:59 +0000. Up 10.67 seconds.
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.148         | 255.255.255.0 | global | fa:16:3e:8a:51:d2 |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe8a:51d2/64 |       .       |  link  | fa:16:3e:8a:51:d2 |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Oct 02 18:13:59 np0005467081.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
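Note: in the IPv4 route table above, the /32 host route to 169.254.169.254 via 38.102.83.126 is the cloud metadata route, typically injected through DHCP classless static routes on OpenStack. A hedged check from the guest; the HTTP query only answers where a network metadata service is deployed, and this boot ultimately used a config drive as its datasource, per the cloud-init summary further down:

    # the metadata host route installed above
    ip route get 169.254.169.254
    # OpenStack metadata endpoint, when served over the network
    curl -s http://169.254.169.254/openstack/latest/meta_data.json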
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: new group: name=cloud-user, GID=1001
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: add 'cloud-user' to group 'adm'
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: add 'cloud-user' to group 'systemd-journal'
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: add 'cloud-user' to shadow group 'adm'
Oct 02 18:14:00 np0005467081.novalocal useradd[992]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Generating public/private rsa key pair.
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: SHA256:tobgMkHN0bYlf5y7M3rTuf+r3WCPXAocu2uzYM+XU7A root@np0005467081.novalocal
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +---[RSA 3072]----+
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |    ..           |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |   o .+ .        |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |  . o. = . .     |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: | .    . . +   .  |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |  . .   S. ..  o |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |   o . o ... oE .|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |  o . . o oo+.oo.|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |   o   . .*+===*.|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        .o =BO*==|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: SHA256:UQG5+d+7V77zf1WV0lAGsuvpwFa2dHb059c4S/sYpsQ root@np0005467081.novalocal
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +---[ECDSA 256]---+
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        .ooo o=o.|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        ..  o..o.|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        .o .  ...|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        o.  . . o|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        S. = o .+|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        . *.= .o=|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |         + =E.*.*|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |        . o..+.B=|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |           .. **X|
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: SHA256:EEs3l/tahhtQgrWOoWlMouChQrQhBk0UXJ5LFDjlAQI root@np0005467081.novalocal
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +--[ED25519 256]--+
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |EO=**.oo+ ..     |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |+.B+ +.+.+o      |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |.+..* + .o .     |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |=..= + =. .      |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |+.  * . S. o     |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |.  .      o +    |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |           *     |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |          o      |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: |                 |
Oct 02 18:14:00 np0005467081.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Reached target Network is Online.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting System Logging Service...
Oct 02 18:14:00 np0005467081.novalocal sm-notify[1007]: Version 2.5.4 starting
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting Permit User Sessions...
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 02 18:14:00 np0005467081.novalocal sshd[1009]: Server listening on 0.0.0.0 port 22.
Oct 02 18:14:00 np0005467081.novalocal sshd[1009]: Server listening on :: port 22.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Finished Permit User Sessions.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started Command Scheduler.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started Getty on tty1.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Reached target Login Prompts.
Oct 02 18:14:00 np0005467081.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Oct 02 18:14:00 np0005467081.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 02 18:14:00 np0005467081.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 95% if used.)
Oct 02 18:14:00 np0005467081.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Oct 02 18:14:00 np0005467081.novalocal rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Started System Logging Service.
Oct 02 18:14:00 np0005467081.novalocal rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Reached target Multi-User System.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 02 18:14:00 np0005467081.novalocal rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:14:00 np0005467081.novalocal cloud-init[1020]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 18:14:00 +0000. Up 12.48 seconds.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 02 18:14:00 np0005467081.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1024]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 18:14:01 +0000. Up 12.87 seconds.
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1026]: #############################################################
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1027]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1029]: 256 SHA256:UQG5+d+7V77zf1WV0lAGsuvpwFa2dHb059c4S/sYpsQ root@np0005467081.novalocal (ECDSA)
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1031]: 256 SHA256:EEs3l/tahhtQgrWOoWlMouChQrQhBk0UXJ5LFDjlAQI root@np0005467081.novalocal (ED25519)
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1033]: 3072 SHA256:tobgMkHN0bYlf5y7M3rTuf+r3WCPXAocu2uzYM+XU7A root@np0005467081.novalocal (RSA)
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1034]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1035]: #############################################################
Oct 02 18:14:01 np0005467081.novalocal cloud-init[1024]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 18:14:01 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.09 seconds
Oct 02 18:14:01 np0005467081.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 02 18:14:01 np0005467081.novalocal systemd[1]: Reached target Cloud-init target.
Oct 02 18:14:01 np0005467081.novalocal systemd[1]: Startup finished in 1.623s (kernel) + 3.633s (initrd) + 7.928s (userspace) = 13.185s.
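Note: the startup summary comes from systemd; the same 13.185s can be broken down per unit and per stage after boot, and cloud-init keeps its own stage timings (the "Up N seconds" stamps above). A sketch with standard tooling:

    # the same kernel/initrd/userspace summary
    systemd-analyze
    # units ordered by time spent activating
    systemd-analyze blame
    # the dependency chain that gated the default target
    systemd-analyze critical-chain
    # cloud-init's per-stage and per-module timing
    cloud-init analyze blame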
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1039]: Connection closed by 38.102.83.114 port 60900 [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1041]: Unable to negotiate with 38.102.83.114 port 60916: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1045]: Unable to negotiate with 38.102.83.114 port 60938: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1047]: Unable to negotiate with 38.102.83.114 port 60940: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1051]: Connection reset by 38.102.83.114 port 60950 [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1053]: Unable to negotiate with 38.102.83.114 port 60960: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1055]: Unable to negotiate with 38.102.83.114 port 60964: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1043]: Connection closed by 38.102.83.114 port 60930 [preauth]
Oct 02 18:14:02 np0005467081.novalocal sshd-session[1049]: Connection closed by 38.102.83.114 port 60948 [preauth]
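Note: this burst from 38.102.83.114 (the same address whose key is accepted at 18:14:19 below) is consistent with an automated host-key scan: each connection offers exactly one host key family and disconnects before authentication, so sshd logs one "Unable to negotiate" or preauth close per probe. A single probe of this shape can be reproduced with a stock OpenSSH client; example-host below is a hypothetical target:

    # offer only ssh-dss; a server without a DSA host key logs
    # "Unable to negotiate ... Their offer: ssh-dss ..."
    ssh -o HostKeyAlgorithms=ssh-dss,ssh-dss-cert-v01@openssh.com example-host true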
Oct 02 18:14:03 np0005467081.novalocal chronyd[804]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct 02 18:14:03 np0005467081.novalocal chronyd[804]: System clock TAI offset set to 37 seconds
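Note: chronyd has picked a pool server and set the TAI offset (37 seconds, the current TAI-UTC leap-second count). The live sync state is available through chronyc, a sketch:

    # sources; the selected one is marked with '*'
    chronyc sources -v
    # offset, frequency error and leap status
    chronyc tracking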
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 35 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 35 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 25 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 33 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 33 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 28 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 26 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 34 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 34 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 32 affinity is now unmanaged
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 02 18:14:05 np0005467081.novalocal irqbalance[784]: IRQ 30 affinity is now unmanaged
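Note: EPERM on IRQ affinity writes is common in KVM guests: some virtio/MSI vectors have hypervisor-managed affinity, the write to /proc/irq/N/smp_affinity fails, and irqbalance simply stops managing that IRQ, which is harmless. If the noise is unwanted, the IRQs can be banned explicitly; a sketch assuming the RHEL sysconfig hook for irqbalance:

    # /etc/sysconfig/irqbalance
    IRQBALANCE_ARGS="--banirq=25 --banirq=26 --banirq=28 --banirq=30 --banirq=32 --banirq=33 --banirq=34 --banirq=35"

    # pick up the new arguments
    systemctl restart irqbalance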
Oct 02 18:14:08 np0005467081.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:14:19 np0005467081.novalocal sshd-session[1057]: Accepted publickey for zuul from 38.102.83.114 port 38412 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 02 18:14:19 np0005467081.novalocal systemd-logind[793]: New session 1 of user zuul.
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Queued start job for default target Main User Target.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Created slice User Application Slice.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Reached target Paths.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Reached target Timers.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Starting D-Bus User Message Bus Socket...
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Starting Create User's Volatile Files and Directories...
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Finished Create User's Volatile Files and Directories.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Listening on D-Bus User Message Bus Socket.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Reached target Sockets.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Reached target Basic System.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Reached target Main User Target.
Oct 02 18:14:19 np0005467081.novalocal systemd[1061]: Startup finished in 138ms.
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 02 18:14:19 np0005467081.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 02 18:14:19 np0005467081.novalocal sshd-session[1057]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:14:20 np0005467081.novalocal python3[1143]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:14:22 np0005467081.novalocal python3[1171]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:14:26 np0005467081.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:14:28 np0005467081.novalocal python3[1231]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:14:29 np0005467081.novalocal python3[1271]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 02 18:14:31 np0005467081.novalocal python3[1297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMKCfnDY7jGOkd97+rqP2R+0CaIV7rcbwNt9g2fa9x4m3xQb96nSa+zhbQNJhmYqC/qRKeFaea8XKaqP7ciMXOmKfwHFvjpS0wA7+duelBzUHq/zfEBMTt+lHD3m38RL+9hCwun19nirt0tl7VPtPvHsFpaX/tG3/Qyp+l5eqSO9BttylivDDQZQccpohtgr6605XEy8nbcgeB/E+wqsqkasIusx8X4oQwNx3pgs/QG6D3f2qRWLoGxB9rT84ZGDNAat1iip108dePhnjdctiSl374toTcPj8SQudkDo3FNMs0zRciBtlrHAAptvq+o9y0/BrbFWh7VvCbS8lfTkEgR6dqA4FQ+gQxzsbb8Y7P9EElvX8PJyoMzkSARR0dKkr9IraqVB71gcq9NH2hk9NP/uZnnsdUC/aJ88/0N6cMzV+IiXtRgVP5zzt4s8mq7H756FeKcX5ixXte30H8zlSxhar4oAZbSxZWAPdkMOxZ3GUi0qt/vNWH+dNMNeCVmEc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:31 np0005467081.novalocal python3[1321]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:32 np0005467081.novalocal python3[1420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:32 np0005467081.novalocal python3[1491]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759428871.897619-207-217145881034947/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=245717e9768948129d16aead23bf0891_id_rsa follow=False checksum=f514e89dcf3773370b8843d3c114ec7edcd5b6c5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:33 np0005467081.novalocal python3[1614]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:33 np0005467081.novalocal python3[1685]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759428872.832411-240-69841130507647/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=245717e9768948129d16aead23bf0891_id_rsa.pub follow=False checksum=55d4286af334b3ab4c100896bf8e0689bee0ddb2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:34 np0005467081.novalocal python3[1733]: ansible-ping Invoked with data=pong
Oct 02 18:14:35 np0005467081.novalocal python3[1757]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:14:38 np0005467081.novalocal python3[1815]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 02 18:14:39 np0005467081.novalocal python3[1847]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:39 np0005467081.novalocal python3[1871]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:39 np0005467081.novalocal python3[1895]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:40 np0005467081.novalocal python3[1919]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:40 np0005467081.novalocal python3[1943]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:40 np0005467081.novalocal python3[1967]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
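Note: the python3[...] ansible-* records in this stretch are Ansible's module-side syslog logging on the managed node: every module invocation is recorded with its arguments, with no_log-protected values masked as NOT_LOGGING_PARAMETER (visible in the copy tasks above). If that logging is unwanted on managed hosts it can be switched off; a sketch using the documented knob:

    # disable module-argument syslog on the target
    # (equivalently ansible.cfg: [defaults] no_target_syslog = True)
    export ANSIBLE_NO_TARGET_SYSLOG=true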
Oct 02 18:14:42 np0005467081.novalocal sudo[1991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liskmwwzhqkbksbmreqgevcbvtpbancy ; /usr/bin/python3'
Oct 02 18:14:42 np0005467081.novalocal sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:42 np0005467081.novalocal python3[1993]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:42 np0005467081.novalocal sudo[1991]: pam_unix(sudo:session): session closed for user root
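[annotation] The sudo lines bracketing each privileged task are Ansible's become wrapper: it runs the command `sudo /bin/sh -c 'echo BECOME-SUCCESS-<random>; /usr/bin/python3'` and scans the output stream for the random marker to confirm privilege escalation succeeded before handing the module payload to the interpreter. A hand-run approximation (marker string and payload path are illustrative):

    # What each "BECOME-SUCCESS" journal entry corresponds to
    sudo /bin/sh -c 'echo BECOME-SUCCESS-example; /usr/bin/python3 /tmp/module_payload.py'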
Oct 02 18:14:42 np0005467081.novalocal sudo[2069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqzdafeftnsthrggzmydqgrpodbjkkct ; /usr/bin/python3'
Oct 02 18:14:42 np0005467081.novalocal sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:42 np0005467081.novalocal python3[2071]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:42 np0005467081.novalocal sudo[2069]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:43 np0005467081.novalocal sudo[2142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdckhbohcywwfvbmlxyfecarjcqlyyhq ; /usr/bin/python3'
Oct 02 18:14:43 np0005467081.novalocal sudo[2142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:43 np0005467081.novalocal python3[2144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759428882.4634922-21-134507018632368/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:43 np0005467081.novalocal sudo[2142]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:44 np0005467081.novalocal python3[2192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:44 np0005467081.novalocal python3[2216]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:44 np0005467081.novalocal python3[2240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:45 np0005467081.novalocal python3[2264]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:45 np0005467081.novalocal python3[2288]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:45 np0005467081.novalocal python3[2312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:45 np0005467081.novalocal python3[2336]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:46 np0005467081.novalocal python3[2360]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:46 np0005467081.novalocal python3[2384]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:46 np0005467081.novalocal python3[2408]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:47 np0005467081.novalocal python3[2432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:47 np0005467081.novalocal python3[2456]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:47 np0005467081.novalocal python3[2480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:48 np0005467081.novalocal python3[2504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:48 np0005467081.novalocal python3[2528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:48 np0005467081.novalocal python3[2552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:48 np0005467081.novalocal python3[2576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:49 np0005467081.novalocal python3[2600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:49 np0005467081.novalocal python3[2624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:49 np0005467081.novalocal python3[2648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:50 np0005467081.novalocal python3[2672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:50 np0005467081.novalocal python3[2696]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:50 np0005467081.novalocal python3[2720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:51 np0005467081.novalocal python3[2744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:51 np0005467081.novalocal python3[2768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:14:51 np0005467081.novalocal python3[2792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
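[annotation] The long run of authorized_key tasks above seeds ~zuul/.ssh/authorized_keys with each team member's public key, one task per key. In current Ansible releases this module lives in the ansible.posix collection; an ad-hoc sketch for a single key (key material below is truncated and illustrative):

    ansible -i np0005467081.novalocal, all -m ansible.posix.authorized_key \
      -a "user=zuul state=present key='ssh-ed25519 AAAA... user@example.com'"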
Oct 02 18:14:53 np0005467081.novalocal sudo[2816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juscpbwzpiqnrephpnuryjmzyiqguddy ; /usr/bin/python3'
Oct 02 18:14:53 np0005467081.novalocal sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:53 np0005467081.novalocal python3[2818]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 18:14:53 np0005467081.novalocal systemd[1]: Starting Time & Date Service...
Oct 02 18:14:54 np0005467081.novalocal systemd[1]: Started Time & Date Service.
Oct 02 18:14:54 np0005467081.novalocal systemd-timedated[2820]: Changed time zone to 'UTC' (UTC).
Oct 02 18:14:54 np0005467081.novalocal sudo[2816]: pam_unix(sudo:session): session closed for user root
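[annotation] community.general.timezone talks to systemd-timedated over D-Bus, which is why systemd starts the Time & Date Service on demand just before the "Changed time zone" entry. The direct shell equivalent:

    timedatectl set-timezone UTC
    timedatectl show -p Timezone    # verify; prints "Timezone=UTC"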
Oct 02 18:14:54 np0005467081.novalocal sudo[2847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvtyruogkpfdiuibltexeqsmzbyzdrpj ; /usr/bin/python3'
Oct 02 18:14:54 np0005467081.novalocal sudo[2847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:54 np0005467081.novalocal python3[2849]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:54 np0005467081.novalocal sudo[2847]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:54 np0005467081.novalocal python3[2925]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:55 np0005467081.novalocal python3[2996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759428894.6395166-153-269340632835775/source _original_basename=tmpfjccas29 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:55 np0005467081.novalocal python3[3096]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:56 np0005467081.novalocal python3[3167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759428895.5522683-183-97807295689166/source _original_basename=tmp259p92dq follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:56 np0005467081.novalocal sudo[3267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amfmqbshxwpucewhuldtlzpwrykttmwp ; /usr/bin/python3'
Oct 02 18:14:56 np0005467081.novalocal sudo[3267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:57 np0005467081.novalocal python3[3269]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:57 np0005467081.novalocal sudo[3267]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:57 np0005467081.novalocal sudo[3340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nongcgnaysobmgallsfdzovmjpciwdvm ; /usr/bin/python3'
Oct 02 18:14:57 np0005467081.novalocal sudo[3340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:57 np0005467081.novalocal python3[3342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759428896.6805992-231-165939802342253/source _original_basename=tmpxvubqqpi follow=False checksum=a6c024a6649a87ca7709e2430139c248a6eabb0e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:57 np0005467081.novalocal sudo[3340]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:58 np0005467081.novalocal python3[3390]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:14:58 np0005467081.novalocal python3[3416]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
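[annotation] mode=511 on /etc/nodepool decodes to 0777, and the two cp tasks publish the zuul user's SSH keypair into it, evidently so other nodes in a multi-node job can reach this one. Equivalent shell, assuming the same paths:

    install -d -m 0777 /etc/nodepool
    cp /home/zuul/.ssh/id_rsa /etc/nodepool/id_rsa
    cp /home/zuul/.ssh/id_rsa.pub /etc/nodepool/id_rsa.pub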
Oct 02 18:14:58 np0005467081.novalocal sudo[3494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdzhtxqtsimvieimsamlfxnqkxykhei ; /usr/bin/python3'
Oct 02 18:14:58 np0005467081.novalocal sudo[3494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:58 np0005467081.novalocal python3[3496]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:14:58 np0005467081.novalocal sudo[3494]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:59 np0005467081.novalocal sudo[3567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qldsxkpsteoxvhcikirtuoixxtsrrjtj ; /usr/bin/python3'
Oct 02 18:14:59 np0005467081.novalocal sudo[3567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:59 np0005467081.novalocal python3[3569]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759428898.501156-273-74548222759598/source _original_basename=tmp4lr5nsav follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:14:59 np0005467081.novalocal sudo[3567]: pam_unix(sudo:session): session closed for user root
Oct 02 18:14:59 np0005467081.novalocal sudo[3618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnpkvoxpjsrpbcmfymatwhbpodpegfej ; /usr/bin/python3'
Oct 02 18:14:59 np0005467081.novalocal sudo[3618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:14:59 np0005467081.novalocal python3[3620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-0042-4e95-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:14:59 np0005467081.novalocal sudo[3618]: pam_unix(sudo:session): session closed for user root
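[annotation] The sudoers drop-in is written with mode=288, i.e. 0440, the conventional permission for sudoers files, and then syntax-checked with visudo -c, which parses /etc/sudoers together with /etc/sudoers.d. A slightly safer pattern validates the candidate file before installing it:

    # visudo -cf checks a single candidate file without touching /etc/sudoers.d
    visudo -cf /tmp/zuul-sudo-grep \
      && install -m 0440 /tmp/zuul-sudo-grep /etc/sudoers.d/zuul-sudo-grep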
Oct 02 18:15:00 np0005467081.novalocal python3[3648]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-0042-4e95-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 02 18:15:01 np0005467081.novalocal python3[3676]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:15:09 np0005467081.novalocal chronyd[804]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Oct 02 18:15:17 np0005467081.novalocal sudo[3700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpsmycezquveysvgmdokzlzqxgvbsmod ; /usr/bin/python3'
Oct 02 18:15:17 np0005467081.novalocal sudo[3700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:15:18 np0005467081.novalocal python3[3702]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:15:18 np0005467081.novalocal sudo[3700]: pam_unix(sudo:session): session closed for user root
Oct 02 18:15:24 np0005467081.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 02 18:15:51 np0005467081.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 02 18:15:51 np0005467081.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3312] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:15:51 np0005467081.novalocal systemd-udevd[3706]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3528] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3560] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3564] device (eth1): carrier: link connected
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3566] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3572] policy: auto-activating connection 'Wired connection 1' (ea663df2-53d3-37b6-9ced-9449cef09dde)
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3578] device (eth1): Activation: starting connection 'Wired connection 1' (ea663df2-53d3-37b6-9ced-9449cef09dde)
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3579] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3581] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3586] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:15:51 np0005467081.novalocal NetworkManager[859]: <info>  [1759428951.3592] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:15:52 np0005467081.novalocal python3[3732]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-4655-e477-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:16:02 np0005467081.novalocal sudo[3810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtggylgnakdvklzwcocsadepcmrzsgw ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:16:02 np0005467081.novalocal sudo[3810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:16:02 np0005467081.novalocal python3[3812]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:16:02 np0005467081.novalocal sudo[3810]: pam_unix(sudo:session): session closed for user root
Oct 02 18:16:02 np0005467081.novalocal sudo[3883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgtmrscnuscomrllcodclcuripbsavqd ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:16:02 np0005467081.novalocal sudo[3883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:16:02 np0005467081.novalocal python3[3885]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759428961.886834-102-102854882127908/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=66dc3c7784f73361251732df4e9929a68dee904b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:16:02 np0005467081.novalocal sudo[3883]: pam_unix(sudo:session): session closed for user root
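[annotation] The job drops a keyfile-format connection profile into /etc/NetworkManager/system-connections; NetworkManager requires such files to be root-owned with mode 0600, which is exactly what the copy above sets. The full service restart that follows is one way to load it; a reload would also work:

    nmcli connection reload
    nmcli connection up ci-private-network   # profile name taken from the log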
Oct 02 18:16:03 np0005467081.novalocal sudo[3933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujriptlezecximnepzqeqsfxwbopgee ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:16:03 np0005467081.novalocal sudo[3933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:16:03 np0005467081.novalocal python3[3935]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Stopping Network Manager...
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3869] caught SIGTERM, shutting down normally.
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3876] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3876] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3876] dhcp4 (eth0): state changed no lease
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3878] manager: NetworkManager state is now CONNECTING
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3934] dhcp4 (eth1): canceled DHCP transaction
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3934] dhcp4 (eth1): state changed no lease
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[859]: <info>  [1759428963.3981] exiting (success)
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Stopped Network Manager.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Starting Network Manager...
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.4383] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:335cdeae-868d-405b-8b1c-eba838d0b699)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.4387] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.4439] manager[0x55a971c10070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Starting Hostname Service...
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Started Hostname Service.
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5694] hostname: hostname: using hostnamed
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5695] hostname: static hostname changed from (none) to "np0005467081.novalocal"
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5705] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5712] manager[0x55a971c10070]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5713] manager[0x55a971c10070]: rfkill: WWAN hardware radio set enabled
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5764] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5764] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5765] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5766] manager: Networking is enabled by state file
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5770] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5777] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5821] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5844] dhcp: init: Using DHCP client 'internal'
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5848] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5856] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5865] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5876] device (lo): Activation: starting connection 'lo' (ad43cc60-0861-4113-b2c8-fcae658eed34)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5886] device (eth0): carrier: link connected
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5893] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5900] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5900] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5910] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5920] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5930] device (eth1): carrier: link connected
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5937] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5944] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ea663df2-53d3-37b6-9ced-9449cef09dde) (indicated)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5944] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5952] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5963] device (eth1): Activation: starting connection 'Wired connection 1' (ea663df2-53d3-37b6-9ced-9449cef09dde)
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Started Network Manager.
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5970] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5976] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5979] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5981] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5985] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5991] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5995] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.5999] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6005] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6019] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6026] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6039] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6045] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6071] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6079] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:16:03 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428963.6088] device (lo): Activation: successful, device activated.
Oct 02 18:16:03 np0005467081.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:16:03 np0005467081.novalocal sudo[3933]: pam_unix(sudo:session): session closed for user root
Oct 02 18:16:03 np0005467081.novalocal python3[4000]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-4655-e477-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2113] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2124] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2205] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2249] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2251] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2255] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2260] device (eth0): Activation: successful, device activated.
Oct 02 18:16:04 np0005467081.novalocal NetworkManager[3939]: <info>  [1759428964.2266] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:16:14 np0005467081.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:16:26 np0005467081.novalocal systemd[1061]: Starting Mark boot as successful...
Oct 02 18:16:26 np0005467081.novalocal systemd[1061]: Finished Mark boot as successful.
Oct 02 18:16:33 np0005467081.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.3737] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:16:48 np0005467081.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:16:48 np0005467081.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4160] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4165] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4178] device (eth1): Activation: successful, device activated.
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4188] manager: startup complete
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4193] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <warn>  [1759429008.4201] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4211] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4344] dhcp4 (eth1): canceled DHCP transaction
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4345] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4345] dhcp4 (eth1): state changed no lease
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4370] policy: auto-activating connection 'ci-private-network' (881fd00f-2862-5b6b-b9e7-98cd71b77b44)
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4378] device (eth1): Activation: starting connection 'ci-private-network' (881fd00f-2862-5b6b-b9e7-98cd71b77b44)
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4380] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4384] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4394] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4409] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4451] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4454] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:16:48 np0005467081.novalocal NetworkManager[3939]: <info>  [1759429008.4462] device (eth1): Activation: successful, device activated.
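[annotation] Sequence worth noting above: eth1's auto-created "Wired connection 1" fails with ip-config-unavailable once its 45-second DHCP transaction expires, and NetworkManager then auto-activates the freshly installed ci-private-network profile, which evidently carries static addressing (activation completes with no DHCP transaction). To inspect the resulting state:

    nmcli -f GENERAL.STATE,GENERAL.CONNECTION,IP4 device show eth1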
Oct 02 18:16:58 np0005467081.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:17:00 np0005467081.novalocal sudo[4123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nadfwfhlzzteuldjlbdgoivevxndsddy ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:17:00 np0005467081.novalocal sudo[4123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:00 np0005467081.novalocal python3[4125]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:00 np0005467081.novalocal sudo[4123]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:01 np0005467081.novalocal sudo[4196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgciwjgthpowmvazkgfqxrmsrzijxoui ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:17:01 np0005467081.novalocal sudo[4196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:01 np0005467081.novalocal python3[4198]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429020.4159992-267-132593142171568/source _original_basename=tmpb3brergx follow=False checksum=24e38e932bb22dcc27113fb3653b20e64a1f1578 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:01 np0005467081.novalocal sudo[4196]: pam_unix(sudo:session): session closed for user root
Oct 02 18:18:01 np0005467081.novalocal sshd-session[1070]: Received disconnect from 38.102.83.114 port 38412:11: disconnected by user
Oct 02 18:18:01 np0005467081.novalocal sshd-session[1070]: Disconnected from user zuul 38.102.83.114 port 38412
Oct 02 18:18:01 np0005467081.novalocal sshd-session[1057]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:18:01 np0005467081.novalocal systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Oct 02 18:19:26 np0005467081.novalocal systemd[1061]: Created slice User Background Tasks Slice.
Oct 02 18:19:26 np0005467081.novalocal systemd[1061]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 18:19:26 np0005467081.novalocal systemd[1061]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 18:22:20 np0005467081.novalocal sshd-session[4227]: Accepted publickey for zuul from 38.102.83.114 port 44050 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 18:22:20 np0005467081.novalocal systemd-logind[793]: New session 3 of user zuul.
Oct 02 18:22:20 np0005467081.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 02 18:22:20 np0005467081.novalocal sshd-session[4227]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:22:20 np0005467081.novalocal sudo[4254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnrobtotvdiyymsdbanetexwdnycspx ; /usr/bin/python3'
Oct 02 18:22:20 np0005467081.novalocal sudo[4254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:20 np0005467081.novalocal python3[4256]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-2bb4-c5d2-000000001ce6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:22:20 np0005467081.novalocal sudo[4254]: pam_unix(sudo:session): session closed for user root
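[annotation] The lsblk call fetches the MAJ:MIN device number of /dev/vda; 252:0 is the first virtio-blk disk on this node, and that pair becomes the key in the cgroup io.max writes further down:

    lsblk -nd -o MAJ:MIN /dev/vda    # prints e.g. "252:0"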
Oct 02 18:22:20 np0005467081.novalocal sudo[4282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxwtxtsftfxmwqqhvqjktszphxdmcqf ; /usr/bin/python3'
Oct 02 18:22:20 np0005467081.novalocal sudo[4282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:21 np0005467081.novalocal python3[4284]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:22:21 np0005467081.novalocal sudo[4282]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:21 np0005467081.novalocal sudo[4308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfhvzvomxdfiecezgyksdtgnzpcnmni ; /usr/bin/python3'
Oct 02 18:22:21 np0005467081.novalocal sudo[4308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:21 np0005467081.novalocal python3[4311]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:22:21 np0005467081.novalocal sudo[4308]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:21 np0005467081.novalocal sudo[4335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxuyzwbowwhookinifpwqxtikkxibzgm ; /usr/bin/python3'
Oct 02 18:22:21 np0005467081.novalocal sudo[4335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:21 np0005467081.novalocal python3[4337]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:22:21 np0005467081.novalocal sudo[4335]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:21 np0005467081.novalocal sudo[4361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybjrsybuetvsoyytkplhgpxxojcoasum ; /usr/bin/python3'
Oct 02 18:22:21 np0005467081.novalocal sudo[4361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:21 np0005467081.novalocal python3[4363]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:22:21 np0005467081.novalocal sudo[4361]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:22 np0005467081.novalocal sudo[4387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybspaocyiwirfiulqabuixifpodktmkx ; /usr/bin/python3'
Oct 02 18:22:22 np0005467081.novalocal sudo[4387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:22 np0005467081.novalocal python3[4389]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:22:22 np0005467081.novalocal python3[4389]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 02 18:22:22 np0005467081.novalocal sudo[4387]: pam_unix(sudo:session): session closed for user root
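[annotation] The lineinfile edit flips DefaultIOAccounting from its commented-out default to yes in /etc/systemd/system.conf. With IO accounting on, systemd enables the cgroup v2 io controller on its slices, so the io.max files the later tasks wait for actually exist. A shell equivalent of the edit plus the reload that the next task performs:

    sed -i 's/^#DefaultIOAccounting=no/DefaultIOAccounting=yes/' /etc/systemd/system.conf
    systemctl daemon-reload    # appears in the journal as "Reloading."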
Oct 02 18:22:23 np0005467081.novalocal sudo[4413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfrpirkkvyveqegwrcglncfdohivramo ; /usr/bin/python3'
Oct 02 18:22:23 np0005467081.novalocal sudo[4413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:23 np0005467081.novalocal python3[4415]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 18:22:23 np0005467081.novalocal systemd[1]: Reloading.
Oct 02 18:22:23 np0005467081.novalocal systemd-rc-local-generator[4438]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:22:23 np0005467081.novalocal sudo[4413]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:24 np0005467081.novalocal sudo[4470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjdqybomxbqgbggkpvsoocfcxwytuowr ; /usr/bin/python3'
Oct 02 18:22:24 np0005467081.novalocal sudo[4470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:25 np0005467081.novalocal python3[4472]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 02 18:22:25 np0005467081.novalocal sudo[4470]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:25 np0005467081.novalocal sudo[4496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqxvkbfosechgjkxdazpbsglksaodeh ; /usr/bin/python3'
Oct 02 18:22:25 np0005467081.novalocal sudo[4496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:25 np0005467081.novalocal python3[4498]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:22:25 np0005467081.novalocal sudo[4496]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:25 np0005467081.novalocal sudo[4524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywavibgapoogtseitobqvhahaqlyibn ; /usr/bin/python3'
Oct 02 18:22:25 np0005467081.novalocal sudo[4524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:25 np0005467081.novalocal python3[4526]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:22:25 np0005467081.novalocal sudo[4524]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:25 np0005467081.novalocal sudo[4552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgzokbpqxvxgcdlwyynxvcaajhwtnlro ; /usr/bin/python3'
Oct 02 18:22:25 np0005467081.novalocal sudo[4552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:26 np0005467081.novalocal python3[4554]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:22:26 np0005467081.novalocal sudo[4552]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:26 np0005467081.novalocal sudo[4580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkfyecppizdkzohzohbmyvyctayhmbvo ; /usr/bin/python3'
Oct 02 18:22:26 np0005467081.novalocal sudo[4580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:26 np0005467081.novalocal python3[4582]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:22:26 np0005467081.novalocal sudo[4580]: pam_unix(sudo:session): session closed for user root
Oct 02 18:22:26 np0005467081.novalocal python3[4609]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-2bb4-c5d2-000000001cec-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
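# With IO accounting on, the tasks above throttle each top-level cgroup: the
# earlier wait_for blocks until /sys/fs/cgroup/system.slice/io.max exists,
# then every slice is capped at 18000 read/write IOPS and 262144000 B/s
# (250 MiB/s) for device 252:0 (assumed to be the guest's virtio root disk).
# The same thing as a loop:
#
#     for cg in init.scope machine.slice system.slice user.slice; do
#         echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
#             > /sys/fs/cgroup/$cg/io.max
#     done
#     cat /sys/fs/cgroup/user.slice/io.max    # verify, as the final task does for all four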
Oct 02 18:22:27 np0005467081.novalocal python3[4639]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:22:29 np0005467081.novalocal sshd-session[4230]: Connection closed by 38.102.83.114 port 44050
Oct 02 18:22:29 np0005467081.novalocal sshd-session[4227]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:22:29 np0005467081.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 02 18:22:29 np0005467081.novalocal systemd[1]: session-3.scope: Consumed 3.891s CPU time.
Oct 02 18:22:29 np0005467081.novalocal systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Oct 02 18:22:29 np0005467081.novalocal systemd-logind[793]: Removed session 3.
Oct 02 18:22:30 np0005467081.novalocal sshd-session[4645]: Accepted publickey for zuul from 38.102.83.114 port 56376 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 18:22:30 np0005467081.novalocal systemd-logind[793]: New session 4 of user zuul.
Oct 02 18:22:30 np0005467081.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 02 18:22:30 np0005467081.novalocal sshd-session[4645]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:22:31 np0005467081.novalocal sudo[4672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpdgziczvhtveilmjpzilbgasnprgqw ; /usr/bin/python3'
Oct 02 18:22:31 np0005467081.novalocal sudo[4672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:22:31 np0005467081.novalocal python3[4674]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:22:48 np0005467081.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:22:57 np0005467081.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:23:06 np0005467081.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:23:08 np0005467081.novalocal sshd-session[4722]: Connection closed by 223.93.8.66 port 51328 [preauth]
Oct 02 18:23:08 np0005467081.novalocal setsebool[4738]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 02 18:23:08 np0005467081.novalocal setsebool[4738]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
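# The repeated "SELinux: Converting ... SID table entries" blocks are kernel
# policy reloads fired while dnf installs podman/buildah; the two boolean
# flips most likely come from container-selinux package scriptlets. By hand
# (the -P persistence flag is an assumption; the journal does not record it):
#
#     setsebool -P virt_use_nfs 1
#     setsebool -P virt_sandbox_use_all_caps 1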
Oct 02 18:23:15 np0005467081.novalocal sshd-session[4748]: banner exchange: Connection from 194.165.16.161 port 65326: invalid format
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:23:20 np0005467081.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:23:38 np0005467081.novalocal dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 18:23:38 np0005467081.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:23:38 np0005467081.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:23:38 np0005467081.novalocal systemd[1]: Reloading.
Oct 02 18:23:38 np0005467081.novalocal systemd-rc-local-generator[5491]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:23:38 np0005467081.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:23:39 np0005467081.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 02 18:23:39 np0005467081.novalocal PackageKit[6224]: daemon start
Oct 02 18:23:39 np0005467081.novalocal systemd[1]: Starting Authorization Manager...
Oct 02 18:23:39 np0005467081.novalocal polkitd[6325]: Started polkitd version 0.117
Oct 02 18:23:39 np0005467081.novalocal polkitd[6325]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 18:23:39 np0005467081.novalocal polkitd[6325]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 18:23:39 np0005467081.novalocal polkitd[6325]: Finished loading, compiling and executing 3 rules
Oct 02 18:23:39 np0005467081.novalocal systemd[1]: Started Authorization Manager.
Oct 02 18:23:39 np0005467081.novalocal polkitd[6325]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 02 18:23:39 np0005467081.novalocal systemd[1]: Started PackageKit Daemon.
Oct 02 18:23:40 np0005467081.novalocal sudo[4672]: pam_unix(sudo:session): session closed for user root
Oct 02 18:23:56 np0005467081.novalocal python3[13494]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-27f6-3bc8-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:23:56 np0005467081.novalocal kernel: evm: overlay not supported
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: Starting D-Bus User Message Bus...
Oct 02 18:23:57 np0005467081.novalocal dbus-broker-launch[13873]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 02 18:23:57 np0005467081.novalocal dbus-broker-launch[13873]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: Started D-Bus User Message Bus.
Oct 02 18:23:57 np0005467081.novalocal dbus-broker-lau[13873]: Ready
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: Created slice Slice /user.
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: podman-13797.scope: unit configures an IP firewall, but not running as root.
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: (This warning is only shown for the first unit using IP firewalling.)
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: Started podman-13797.scope.
Oct 02 18:23:57 np0005467081.novalocal systemd[1061]: Started podman-pause-9e63d2c9.scope.
Oct 02 18:23:58 np0005467081.novalocal sudo[14288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkxmpirbddmfsicfxbfwjwkooguxnvqf ; /usr/bin/python3'
Oct 02 18:23:58 np0005467081.novalocal sudo[14288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:23:58 np0005467081.novalocal python3[14298]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.39:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.39:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
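# Rendered with its "# {mark} ANSIBLE MANAGED BLOCK" markers, the blockinfile
# task above appends this TOML to /etc/containers/registries.conf so podman
# and buildah can pull from the CI registry without TLS:
#
#     # BEGIN ANSIBLE MANAGED BLOCK
#     [[registry]]
#     location = "38.102.83.39:5001"
#     insecure = true
#     # END ANSIBLE MANAGED BLOCK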
Oct 02 18:23:58 np0005467081.novalocal sudo[14288]: pam_unix(sudo:session): session closed for user root
Oct 02 18:23:58 np0005467081.novalocal sshd-session[4648]: Connection closed by 38.102.83.114 port 56376
Oct 02 18:23:58 np0005467081.novalocal sshd-session[4645]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:23:58 np0005467081.novalocal systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Oct 02 18:23:58 np0005467081.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 02 18:23:58 np0005467081.novalocal systemd[1]: session-4.scope: Consumed 1min 4.130s CPU time.
Oct 02 18:23:58 np0005467081.novalocal systemd-logind[793]: Removed session 4.
Oct 02 18:24:22 np0005467081.novalocal sshd-session[20570]: Connection closed by 38.102.83.68 port 39778 [preauth]
Oct 02 18:24:22 np0005467081.novalocal sshd-session[20574]: Connection closed by 38.102.83.68 port 39788 [preauth]
Oct 02 18:24:22 np0005467081.novalocal sshd-session[20572]: Unable to negotiate with 38.102.83.68 port 39802: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 18:24:22 np0005467081.novalocal sshd-session[20571]: Unable to negotiate with 38.102.83.68 port 39808: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 18:24:22 np0005467081.novalocal sshd-session[20568]: Unable to negotiate with 38.102.83.68 port 39814: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 18:24:27 np0005467081.novalocal sshd-session[21710]: Accepted publickey for zuul from 38.102.83.114 port 34886 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 18:24:27 np0005467081.novalocal systemd-logind[793]: New session 5 of user zuul.
Oct 02 18:24:27 np0005467081.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 02 18:24:27 np0005467081.novalocal sshd-session[21710]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:24:27 np0005467081.novalocal python3[21804]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpLym30bT+vKauNaVvxvgYrYblPoCfp0cUnGRzgLCBC3gioYbXoF1eSVQbBGWdv0Z5BDqALZsSaljjgpMIl/Hs= zuul@np0005467080.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:24:28 np0005467081.novalocal sudo[21915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktqzrvbxqxnwxeivbfdaykwhojhydsgn ; /usr/bin/python3'
Oct 02 18:24:28 np0005467081.novalocal sudo[21915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:28 np0005467081.novalocal python3[21926]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpLym30bT+vKauNaVvxvgYrYblPoCfp0cUnGRzgLCBC3gioYbXoF1eSVQbBGWdv0Z5BDqALZsSaljjgpMIl/Hs= zuul@np0005467080.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:24:28 np0005467081.novalocal sudo[21915]: pam_unix(sudo:session): session closed for user root
Oct 02 18:24:29 np0005467081.novalocal sudo[22147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoupqdmzgnckfagxzzdkkkdxsrrplizt ; /usr/bin/python3'
Oct 02 18:24:29 np0005467081.novalocal sudo[22147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:29 np0005467081.novalocal python3[22156]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005467081.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 02 18:24:29 np0005467081.novalocal useradd[22211]: new group: name=cloud-admin, GID=1002
Oct 02 18:24:29 np0005467081.novalocal useradd[22211]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 02 18:24:29 np0005467081.novalocal sudo[22147]: pam_unix(sudo:session): session closed for user root
Oct 02 18:24:29 np0005467081.novalocal sudo[22314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljogbkpclzdjenkbhcdqsqpgmodyvoqj ; /usr/bin/python3'
Oct 02 18:24:29 np0005467081.novalocal sudo[22314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:29 np0005467081.novalocal python3[22322]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHpLym30bT+vKauNaVvxvgYrYblPoCfp0cUnGRzgLCBC3gioYbXoF1eSVQbBGWdv0Z5BDqALZsSaljjgpMIl/Hs= zuul@np0005467080.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:24:29 np0005467081.novalocal sudo[22314]: pam_unix(sudo:session): session closed for user root
Oct 02 18:24:30 np0005467081.novalocal sudo[22553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylpqqqtbldyeocuoxixnnljfumiowka ; /usr/bin/python3'
Oct 02 18:24:30 np0005467081.novalocal sudo[22553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:30 np0005467081.novalocal python3[22563]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:24:30 np0005467081.novalocal sudo[22553]: pam_unix(sudo:session): session closed for user root
Oct 02 18:24:30 np0005467081.novalocal sudo[22780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyogbjkhsdppxhscjakmmuhjmlrjnwkw ; /usr/bin/python3'
Oct 02 18:24:30 np0005467081.novalocal sudo[22780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:30 np0005467081.novalocal python3[22790]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429469.9722548-135-52010532636543/source _original_basename=tmp4nbkn0k7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:24:30 np0005467081.novalocal sudo[22780]: pam_unix(sudo:session): session closed for user root
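# The content of the copied /etc/sudoers.d/cloud-admin (mode 0640) is hidden
# (content=NOT_LOGGING_PARAMETER); a passwordless rule like the following is
# the usual shape for such a CI service account, shown purely as an assumed
# example:
#
#     cloud-admin ALL=(ALL) NOPASSWD:ALL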
Oct 02 18:24:31 np0005467081.novalocal sudo[23022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setdcdpblhquslxgskwyrwassjvetels ; /usr/bin/python3'
Oct 02 18:24:31 np0005467081.novalocal sudo[23022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:24:31 np0005467081.novalocal python3[23029]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 02 18:24:31 np0005467081.novalocal systemd[1]: Starting Hostname Service...
Oct 02 18:24:31 np0005467081.novalocal systemd[1]: Started Hostname Service.
Oct 02 18:24:32 np0005467081.novalocal systemd-hostnamed[23104]: Changed pretty hostname to 'compute-0'
Oct 02 18:24:32 compute-0 systemd-hostnamed[23104]: Hostname set to <compute-0> (static)
Oct 02 18:24:32 compute-0 NetworkManager[3939]: <info>  [1759429472.0216] hostname: static hostname changed from "np0005467081.novalocal" to "compute-0"
Oct 02 18:24:32 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:24:32 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:24:32 compute-0 sudo[23022]: pam_unix(sudo:session): session closed for user root
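# ansible.builtin.hostname with use=systemd talks to systemd-hostnamed over
# D-Bus, which is why hostnamed starts, sets both the pretty and static
# names, and NetworkManager picks up the change. The CLI equivalent:
#
#     hostnamectl set-hostname compute-0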
Oct 02 18:24:32 compute-0 sshd-session[21753]: Connection closed by 38.102.83.114 port 34886
Oct 02 18:24:32 compute-0 sshd-session[21710]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:24:32 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Oct 02 18:24:32 compute-0 systemd[1]: session-5.scope: Consumed 3.021s CPU time.
Oct 02 18:24:32 compute-0 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Oct 02 18:24:32 compute-0 systemd-logind[793]: Removed session 5.
Oct 02 18:24:42 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:24:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:24:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:24:45 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 20.806s CPU time.
Oct 02 18:24:45 compute-0 systemd[1]: run-r37c0f4e9f4bc4457970cec77f5e0a38f.service: Deactivated successfully.
Oct 02 18:25:02 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:28:22 compute-0 sshd-session[26552]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 18:28:22 compute-0 sshd-session[26552]: Connection reset by 45.140.17.97 port 22672
Oct 02 18:28:45 compute-0 PackageKit[6224]: daemon quit
Oct 02 18:28:45 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 18:29:26 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 02 18:29:26 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 02 18:29:26 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 02 18:29:26 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 02 18:29:51 compute-0 sshd-session[26555]: Accepted publickey for zuul from 38.102.83.68 port 37792 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 18:29:51 compute-0 systemd-logind[793]: New session 6 of user zuul.
Oct 02 18:29:51 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 02 18:29:51 compute-0 sshd-session[26555]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:29:52 compute-0 python3[26631]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:29:53 compute-0 sudo[26745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwtjaidbgcbedghgrkyuvctqkircvtq ; /usr/bin/python3'
Oct 02 18:29:53 compute-0 sudo[26745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:53 compute-0 python3[26747]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:53 compute-0 sudo[26745]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:54 compute-0 sudo[26818]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfneuevgjaikdalnawjymdhwhwxrukyp ; /usr/bin/python3'
Oct 02 18:29:54 compute-0 sudo[26818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:54 compute-0 python3[26820]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:54 compute-0 sudo[26818]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:54 compute-0 sudo[26844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzewlpthfzohjlfcwtwgjysplajqdvo ; /usr/bin/python3'
Oct 02 18:29:54 compute-0 sudo[26844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:54 compute-0 python3[26846]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:54 compute-0 sudo[26844]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:54 compute-0 sudo[26917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfbjqnzwyskxisbkulnpetynelfoqyhy ; /usr/bin/python3'
Oct 02 18:29:54 compute-0 sudo[26917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:54 compute-0 python3[26919]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:54 compute-0 sudo[26917]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:55 compute-0 sudo[26943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkaawocusumtszmstohtagfphrnlcyhh ; /usr/bin/python3'
Oct 02 18:29:55 compute-0 sudo[26943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:55 compute-0 python3[26945]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:55 compute-0 sudo[26943]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:55 compute-0 sudo[27016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqnlojstwiltjkewsemislzssldydhhk ; /usr/bin/python3'
Oct 02 18:29:55 compute-0 sudo[27016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:55 compute-0 python3[27018]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:55 compute-0 sudo[27016]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:55 compute-0 sudo[27042]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxromtkbpglgkhozmlgszjusshkzkjsq ; /usr/bin/python3'
Oct 02 18:29:55 compute-0 sudo[27042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:55 compute-0 python3[27044]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:55 compute-0 sudo[27042]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:56 compute-0 sudo[27115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggpkxarawbocshaiwrjcyvnilwoyxmv ; /usr/bin/python3'
Oct 02 18:29:56 compute-0 sudo[27115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:56 compute-0 python3[27117]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:56 compute-0 sudo[27115]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:56 compute-0 sudo[27141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashcxvsbuzisabtsasfmrwwkvluckckz ; /usr/bin/python3'
Oct 02 18:29:56 compute-0 sudo[27141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:56 compute-0 python3[27143]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:56 compute-0 sudo[27141]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:56 compute-0 sudo[27214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txreksqyzgsqtfoxleiujsupdfabspvy ; /usr/bin/python3'
Oct 02 18:29:56 compute-0 sudo[27214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:57 compute-0 python3[27216]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:57 compute-0 sudo[27214]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:57 compute-0 sudo[27240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrworgskkmwjwxoagryamlubwqyfvmki ; /usr/bin/python3'
Oct 02 18:29:57 compute-0 sudo[27240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:57 compute-0 python3[27242]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:57 compute-0 sudo[27240]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:57 compute-0 sudo[27313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgosyniovliyafduohojtxfglugdljyj ; /usr/bin/python3'
Oct 02 18:29:57 compute-0 sudo[27313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:57 compute-0 python3[27315]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:57 compute-0 sudo[27313]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:57 compute-0 sudo[27339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninlkzustrcmssblcwbmoxmssomgvubw ; /usr/bin/python3'
Oct 02 18:29:57 compute-0 sudo[27339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:58 compute-0 python3[27341]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:29:58 compute-0 sudo[27339]: pam_unix(sudo:session): session closed for user root
Oct 02 18:29:58 compute-0 sudo[27412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inaaofwuelbvkyqxjwvurflapdkxlqtg ; /usr/bin/python3'
Oct 02 18:29:58 compute-0 sudo[27412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:29:58 compute-0 python3[27414]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429793.3591137-30251-137357856449159/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:29:58 compute-0 sudo[27412]: pam_unix(sudo:session): session closed for user root
Oct 02 18:30:00 compute-0 sshd-session[27439]: Unable to negotiate with 192.168.122.11 port 44214: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 18:30:00 compute-0 sshd-session[27440]: Connection closed by 192.168.122.11 port 44202 [preauth]
Oct 02 18:30:00 compute-0 sshd-session[27441]: Connection closed by 192.168.122.11 port 44190 [preauth]
Oct 02 18:30:00 compute-0 sshd-session[27443]: Unable to negotiate with 192.168.122.11 port 44224: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 18:30:00 compute-0 sshd-session[27442]: Unable to negotiate with 192.168.122.11 port 44232: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 18:30:26 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 18:30:26 compute-0 dnf[27449]: Failed determining last makecache time.
Oct 02 18:30:26 compute-0 dnf[27449]: delorean-openstack-barbican-42b4c41831408a8e323 323 kB/s |  13 kB     00:00
Oct 02 18:30:26 compute-0 dnf[27449]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.4 MB/s |  65 kB     00:00
Oct 02 18:30:26 compute-0 dnf[27449]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.1 MB/s |  32 kB     00:00
Oct 02 18:30:26 compute-0 dnf[27449]: delorean-python-stevedore-c4acc5639fd2329372142 4.9 MB/s | 131 kB     00:00
Oct 02 18:30:26 compute-0 dnf[27449]: delorean-python-cloudkitty-tests-tempest-3961dc 959 kB/s |  25 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-os-net-config-28598c2978b9e2207dd19fc4 7.9 MB/s | 356 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 687 kB/s |  42 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-python-designate-tests-tempest-347fdbc 462 kB/s |  18 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-glance-1fd12c29b339f30fe823e 398 kB/s |  18 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 598 kB/s |  29 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-manila-3c01b7181572c95dac462 234 kB/s |  25 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-python-whitebox-neutron-tests-tempest- 5.1 MB/s | 154 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-octavia-ba397f07a7331190208c 1.0 MB/s |  26 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-watcher-c014f81a8647287f6dcc 701 kB/s |  16 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-edpm-image-builder-55ba53cf215b14ed95b 299 kB/s | 7.4 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 4.4 MB/s | 144 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-swift-dc98a8463506ac520c469a 575 kB/s |  14 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.2 MB/s |  96 kB     00:00
Oct 02 18:30:27 compute-0 dnf[27449]: CentOS Stream 9 - BaseOS                         68 kB/s | 6.7 kB     00:00
Oct 02 18:30:28 compute-0 dnf[27449]: CentOS Stream 9 - AppStream                      26 kB/s | 6.8 kB     00:00
Oct 02 18:30:28 compute-0 dnf[27449]: CentOS Stream 9 - CRB                            44 kB/s | 6.6 kB     00:00
Oct 02 18:30:28 compute-0 dnf[27449]: CentOS Stream 9 - Extras packages                50 kB/s | 8.0 kB     00:00
Oct 02 18:30:28 compute-0 dnf[27449]: dlrn-antelope-testing                            28 MB/s | 1.1 MB     00:00
Oct 02 18:30:29 compute-0 dnf[27449]: dlrn-antelope-build-deps                         16 MB/s | 461 kB     00:00
Oct 02 18:30:29 compute-0 dnf[27449]: centos9-rabbitmq                                7.2 MB/s | 123 kB     00:00
Oct 02 18:30:29 compute-0 dnf[27449]: centos9-storage                                  25 MB/s | 415 kB     00:00
Oct 02 18:30:29 compute-0 dnf[27449]: centos9-opstools                                4.3 MB/s |  51 kB     00:00
Oct 02 18:30:29 compute-0 dnf[27449]: NFV SIG OpenvSwitch                              17 MB/s | 447 kB     00:00
Oct 02 18:30:30 compute-0 dnf[27449]: repo-setup-centos-appstream                      96 MB/s |  25 MB     00:00
Oct 02 18:30:35 compute-0 dnf[27449]: repo-setup-centos-baseos                         80 MB/s | 8.8 MB     00:00
Oct 02 18:30:37 compute-0 dnf[27449]: repo-setup-centos-highavailability               29 MB/s | 744 kB     00:00
Oct 02 18:30:37 compute-0 dnf[27449]: repo-setup-centos-powertools                     73 MB/s | 7.1 MB     00:00
Oct 02 18:30:40 compute-0 dnf[27449]: Extra Packages for Enterprise Linux 9 - x86_64   19 MB/s |  20 MB     00:01
Oct 02 18:30:53 compute-0 dnf[27449]: Metadata cache created.
Oct 02 18:30:53 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 18:30:53 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 18:30:53 compute-0 systemd[1]: dnf-makecache.service: Consumed 24.222s CPU time.
Oct 02 18:32:41 compute-0 python3[27573]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:37:40 compute-0 sshd-session[26558]: Received disconnect from 38.102.83.68 port 37792:11: disconnected by user
Oct 02 18:37:40 compute-0 sshd-session[26558]: Disconnected from user zuul 38.102.83.68 port 37792
Oct 02 18:37:40 compute-0 sshd-session[26555]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:37:40 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 02 18:37:40 compute-0 systemd[1]: session-6.scope: Consumed 5.888s CPU time.
Oct 02 18:37:40 compute-0 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Oct 02 18:37:40 compute-0 systemd-logind[793]: Removed session 6.
Oct 02 18:38:47 compute-0 sshd-session[27579]: Connection closed by 49.234.53.181 port 54366 [preauth]
Oct 02 18:43:33 compute-0 sshd-session[27582]: Connection closed by 162.142.125.204 port 56678 [preauth]
Oct 02 18:44:54 compute-0 sshd-session[27585]: Accepted publickey for zuul from 192.168.122.30 port 60960 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:44:54 compute-0 systemd-logind[793]: New session 7 of user zuul.
Oct 02 18:44:54 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 02 18:44:54 compute-0 sshd-session[27585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:44:56 compute-0 python3.9[27738]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:44:57 compute-0 sudo[27917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yerpwvqancwhhnqxdreyxkgbeutdgvoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430696.9300737-32-78943839116339/AnsiballZ_command.py'
Oct 02 18:44:57 compute-0 sudo[27917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:44:57 compute-0 python3.9[27919]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:45:04 compute-0 sshd-session[27943]: Invalid user admin from 139.19.117.129 port 55950
Oct 02 18:45:04 compute-0 sshd-session[27943]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Oct 02 18:45:04 compute-0 sudo[27917]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:05 compute-0 sshd-session[27588]: Connection closed by 192.168.122.30 port 60960
Oct 02 18:45:05 compute-0 sshd-session[27585]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:45:05 compute-0 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Oct 02 18:45:05 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 02 18:45:05 compute-0 systemd[1]: session-7.scope: Consumed 8.366s CPU time.
Oct 02 18:45:05 compute-0 systemd-logind[793]: Removed session 7.
Oct 02 18:45:13 compute-0 sshd-session[27943]: Connection closed by invalid user admin 139.19.117.129 port 55950 [preauth]
Oct 02 18:45:20 compute-0 sshd-session[27978]: Accepted publickey for zuul from 192.168.122.30 port 52838 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:45:20 compute-0 systemd-logind[793]: New session 8 of user zuul.
Oct 02 18:45:20 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 02 18:45:20 compute-0 sshd-session[27978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:45:21 compute-0 python3.9[28131]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 18:45:22 compute-0 python3.9[28305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:45:23 compute-0 sudo[28455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgamyvpnixddymztwzccfrbamdrtlmdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430723.1980143-45-181744902757500/AnsiballZ_command.py'
Oct 02 18:45:23 compute-0 sudo[28455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:23 compute-0 python3.9[28457]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:45:23 compute-0 sudo[28455]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:24 compute-0 sudo[28608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttzxdpobbbxjdafccqoazkziblpwijmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430724.2997355-57-276906777448876/AnsiballZ_stat.py'
Oct 02 18:45:24 compute-0 sudo[28608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:25 compute-0 python3.9[28610]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:45:25 compute-0 sudo[28608]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:25 compute-0 sudo[28760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzbdqdmdnakgneyljsfuoijkfwzprnpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430725.2264636-65-34438500633320/AnsiballZ_file.py'
Oct 02 18:45:25 compute-0 sudo[28760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:25 compute-0 python3.9[28762]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:45:25 compute-0 sudo[28760]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:26 compute-0 sudo[28913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqyqwornrpunthlpgmflxtxoysppylii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430726.1668203-73-236709990743659/AnsiballZ_stat.py'
Oct 02 18:45:26 compute-0 sudo[28913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:26 compute-0 python3.9[28915]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:45:26 compute-0 sudo[28913]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:27 compute-0 sudo[29036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabvqgjwhmvvbdavxciyweljnxhlzaqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430726.1668203-73-236709990743659/AnsiballZ_copy.py'
Oct 02 18:45:27 compute-0 sudo[29036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:27 compute-0 python3.9[29038]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759430726.1668203-73-236709990743659/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:45:27 compute-0 sudo[29036]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:28 compute-0 sudo[29188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcxzhmzjkizhzzmaozlwqeazxhvzbtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430727.7834153-88-117407670938359/AnsiballZ_setup.py'
Oct 02 18:45:28 compute-0 sudo[29188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:28 compute-0 python3.9[29190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:45:28 compute-0 sudo[29188]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:29 compute-0 sudo[29344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuesemmcuiqcctdmuhvgvylmepxjfpzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430728.8507044-96-131200694079072/AnsiballZ_file.py'
Oct 02 18:45:29 compute-0 sudo[29344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:29 compute-0 python3.9[29346]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:45:29 compute-0 sudo[29344]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:30 compute-0 python3.9[29496]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:45:34 compute-0 python3.9[29751]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:45:35 compute-0 python3.9[29901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:45:36 compute-0 python3.9[30055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:45:37 compute-0 sudo[30211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpigsavmpyrplgcujvcpvettnsofqycx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430737.1850178-144-40661685520730/AnsiballZ_setup.py'
Oct 02 18:45:37 compute-0 sudo[30211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:37 compute-0 python3.9[30213]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:45:38 compute-0 sudo[30211]: pam_unix(sudo:session): session closed for user root
Oct 02 18:45:38 compute-0 sudo[30295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blqjfqkdgkrclllqbgooynnnlqwcwiyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430737.1850178-144-40661685520730/AnsiballZ_dnf.py'
Oct 02 18:45:38 compute-0 sudo[30295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:45:38 compute-0 python3.9[30297]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:46:23 compute-0 systemd[1]: Reloading.
Oct 02 18:46:23 compute-0 systemd-rc-local-generator[30494]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:46:23 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 02 18:46:23 compute-0 systemd[1]: Reloading.
Oct 02 18:46:23 compute-0 systemd-rc-local-generator[30537]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:46:24 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 02 18:46:24 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 02 18:46:24 compute-0 systemd[1]: Reloading.
Oct 02 18:46:24 compute-0 systemd-rc-local-generator[30575]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:46:24 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 02 18:46:24 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:46:24 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:46:24 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:47:32 compute-0 kernel: SELinux:  Converting 2715 SID table entries...
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:47:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:47:33 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 02 18:47:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:47:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:47:33 compute-0 systemd[1]: Reloading.
Oct 02 18:47:33 compute-0 systemd-rc-local-generator[30897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:47:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:47:33 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 18:47:33 compute-0 PackageKit[31071]: daemon start
Oct 02 18:47:33 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 18:47:34 compute-0 sudo[30295]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:34 compute-0 sudo[31814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkqvgjavkkekzgwrgugpkdrpllwvifjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430854.4561577-156-192655092174065/AnsiballZ_command.py'
Oct 02 18:47:34 compute-0 sudo[31814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:47:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:47:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.801s CPU time.
Oct 02 18:47:34 compute-0 systemd[1]: run-rb6ef5ab1b1d8442eb2af65c42baaab0f.service: Deactivated successfully.
Oct 02 18:47:35 compute-0 python3.9[31816]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:47:36 compute-0 sudo[31814]: pam_unix(sudo:session): session closed for user root
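
The rpm -V sweep at 18:47:35 re-verifies every package the dnf task just installed: it prints one line per file whose size, mode, digest, or ownership no longer matches the RPM database and exits non-zero if anything fails verification, so a silent run here means a clean install. A sketch of the same check, assuming root and a shortened package list:

    import subprocess

    PACKAGES = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager"]

    # rpm -V is silent and exits 0 when every file still matches the RPM
    # database; output lines flag drift (S=size, M=mode, 5=digest, U=user, ...).
    result = subprocess.run(["rpm", "-V", *PACKAGES], capture_output=True, text=True)
    if result.returncode != 0:
        print("verification failures:\n" + result.stdout)
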
Oct 02 18:47:37 compute-0 sudo[32096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkuszetbczwcpdvohbbcstqzznonyxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430856.464523-164-252536426315283/AnsiballZ_selinux.py'
Oct 02 18:47:37 compute-0 sudo[32096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:37 compute-0 python3.9[32098]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 18:47:37 compute-0 sudo[32096]: pam_unix(sudo:session): session closed for user root
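
ansible.posix.selinux pins the targeted policy in enforcing mode both at runtime and in /etc/selinux/config, so the setting survives a reboot. A sketch of the runtime half, assuming the libselinux bindings from the python3-libselinux package installed earlier:

    import selinux

    # Runtime switch, equivalent to `setenforce 1`.
    if selinux.is_selinux_enabled() and selinux.security_getenforce() != 1:
        selinux.security_setenforce(1)
    # The module also rewrites /etc/selinux/config (SELINUX=enforcing,
    # SELINUXTYPE=targeted) so the mode persists across reboots.
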
Oct 02 18:47:38 compute-0 sudo[32248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdhguyqxbjlmiiswnnocsusqqsaguux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430857.841103-175-9910434800889/AnsiballZ_command.py'
Oct 02 18:47:38 compute-0 sudo[32248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:38 compute-0 python3.9[32250]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 18:47:38 compute-0 sshd-session[32251]: Invalid user admin from 78.128.112.74 port 45242
Oct 02 18:47:38 compute-0 sshd-session[32251]: Connection closed by invalid user admin 78.128.112.74 port 45242 [preauth]
Oct 02 18:47:39 compute-0 sudo[32248]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:40 compute-0 sudo[32403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcojtqdgbumuuccwrsqmfpjsdylskxyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430859.7833347-183-120868768994094/AnsiballZ_file.py'
Oct 02 18:47:40 compute-0 sudo[32403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:40 compute-0 python3.9[32405]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:47:40 compute-0 sudo[32403]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:41 compute-0 sudo[32555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkewjmvtldbvdrklydefkwcatyaodsib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430860.7719495-191-208964224406490/AnsiballZ_mount.py'
Oct 02 18:47:41 compute-0 sudo[32555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:41 compute-0 python3.9[32557]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 18:47:41 compute-0 sudo[32555]: pam_unix(sudo:session): session closed for user root
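
Note that ansible.posix.mount with state=present only records the entry in /etc/fstab; the swap itself is not formatted or enabled until the mkswap/swapon commands at 18:48:17-18. Roughly what the module guarantees, as a sketch:

    # state=present manages the fstab entry only; swapon happens later.
    FSTAB_LINE = "/swap none swap sw 0 0\n"  # src, name, fstype, opts, dump, passno

    with open("/etc/fstab", "a+") as fstab:
        fstab.seek(0)
        if FSTAB_LINE not in fstab.read():
            fstab.write(FSTAB_LINE)
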
Oct 02 18:47:42 compute-0 sudo[32707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flsbnhensdigblkjldtvjbdcbbzuyjpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430862.3745894-219-66504461736898/AnsiballZ_file.py'
Oct 02 18:47:42 compute-0 sudo[32707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:42 compute-0 python3.9[32709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:47:43 compute-0 sudo[32707]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:43 compute-0 sudo[32859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pedzfznkzwnvimluawbaeeqvhspzjhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430863.211526-227-217659156195779/AnsiballZ_stat.py'
Oct 02 18:47:43 compute-0 sudo[32859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:43 compute-0 python3.9[32861]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:47:43 compute-0 sudo[32859]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:44 compute-0 sudo[32982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkncaecltwrmnsktitphvlnxydrbepbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430863.211526-227-217659156195779/AnsiballZ_copy.py'
Oct 02 18:47:44 compute-0 sudo[32982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:44 compute-0 python3.9[32984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759430863.211526-227-217659156195779/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:47:44 compute-0 sudo[32982]: pam_unix(sudo:session): session closed for user root
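
The stat/copy pair at 18:47:43-44 is Ansible's idempotent file transfer: the destination is stat'ed first, its SHA-1 checksum compared against the staged .source.pem, and the copy performed only on mismatch. The same idea in miniature:

    import hashlib, os, shutil

    def copy_if_changed(src, dest, mode=0o644):
        """Copy src over dest only when the SHA-1 digests differ."""
        def sha1(path):
            with open(path, "rb") as fh:
                return hashlib.sha1(fh.read()).hexdigest()
        if not os.path.exists(dest) or sha1(src) != sha1(dest):
            shutil.copyfile(src, dest)
            os.chmod(dest, mode)
            return True   # "changed" in Ansible terms
        return False
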
Oct 02 18:47:45 compute-0 sudo[33135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iorcixllcnnvajhgffkzluwigtfqvxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430865.1716003-254-44646217157108/AnsiballZ_getent.py'
Oct 02 18:47:45 compute-0 sudo[33135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:47 compute-0 python3.9[33137]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 18:47:47 compute-0 sudo[33135]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:48 compute-0 sudo[33288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsticfntzksvgusogmciyrlavapatdzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430867.823197-262-241660114676540/AnsiballZ_group.py'
Oct 02 18:47:48 compute-0 sudo[33288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:48 compute-0 python3.9[33290]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:47:48 compute-0 groupadd[33291]: group added to /etc/group: name=qemu, GID=107
Oct 02 18:47:48 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:47:48 compute-0 groupadd[33291]: group added to /etc/gshadow: name=qemu
Oct 02 18:47:48 compute-0 groupadd[33291]: new group: name=qemu, GID=107
Oct 02 18:47:48 compute-0 sudo[33288]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:49 compute-0 sudo[33447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqvnrsqxodspnaonaylfwymyzlmxcpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430868.8997066-270-5959526617294/AnsiballZ_user.py'
Oct 02 18:47:49 compute-0 sudo[33447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:49 compute-0 python3.9[33449]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 18:47:49 compute-0 useradd[33451]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 18:47:49 compute-0 sudo[33447]: pam_unix(sudo:session): session closed for user root
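
The getent/group/user sequence pins the qemu system account to fixed IDs (UID and GID 107, nologin shell) before any virtualization package can create it with arbitrary ones, so files owned by qemu keep stable ownership across nodes. Checking the result with the standard library:

    import grp, pwd

    qemu_user = pwd.getpwnam("qemu")
    qemu_group = grp.getgrnam("qemu")
    assert qemu_user.pw_uid == 107 and qemu_group.gr_gid == 107
    assert qemu_user.pw_shell == "/sbin/nologin"
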
Oct 02 18:47:50 compute-0 sudo[33607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilntqtycsewiwgfmedtqvwtucczchih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430870.1095955-278-253688718773671/AnsiballZ_getent.py'
Oct 02 18:47:50 compute-0 sudo[33607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:50 compute-0 python3.9[33609]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 18:47:50 compute-0 sudo[33607]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:51 compute-0 sudo[33760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgwwmbznbtwisxwmtgbioibdrbgooxcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430870.9440737-286-276052078416222/AnsiballZ_group.py'
Oct 02 18:47:51 compute-0 sudo[33760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:51 compute-0 python3.9[33762]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:47:51 compute-0 groupadd[33763]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 02 18:47:51 compute-0 groupadd[33763]: group added to /etc/gshadow: name=hugetlbfs
Oct 02 18:47:51 compute-0 groupadd[33763]: new group: name=hugetlbfs, GID=42477
Oct 02 18:47:51 compute-0 sudo[33760]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:52 compute-0 sudo[33918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echzfoxjzggkaobrpulgbqnbttdxljfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430871.8227305-295-234436683955248/AnsiballZ_file.py'
Oct 02 18:47:52 compute-0 sudo[33918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:52 compute-0 python3.9[33920]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 18:47:52 compute-0 sudo[33918]: pam_unix(sudo:session): session closed for user root
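
/var/lib/vhost_sockets is created qemu-owned with seuser=system_u and setype=virt_cache_t so vhost-user sockets shared between the switch and qemu pass SELinux checks. A roughly equivalent sequence, as a sketch (chcon relabels one-shot; a persistent rule would go through semanage fcontext, as is done later for /var/lib/edpm-config):

    import os, shutil, subprocess

    path = "/var/lib/vhost_sockets"
    os.makedirs(path, mode=0o755, exist_ok=True)
    shutil.chown(path, user="qemu", group="qemu")
    # One-shot relabel matching seuser=system_u / setype=virt_cache_t.
    subprocess.run(["chcon", "-u", "system_u", "-t", "virt_cache_t", path], check=True)
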
Oct 02 18:47:53 compute-0 sudo[34070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpayfsvuayfjspervgtifbdmwfsdlvyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430872.7940102-306-19245974887584/AnsiballZ_dnf.py'
Oct 02 18:47:53 compute-0 sudo[34070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:53 compute-0 python3.9[34072]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:47:55 compute-0 sudo[34070]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:55 compute-0 sudo[34223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsylooinsomfpxfszkciveormrwzigwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430875.2192688-314-212052575556921/AnsiballZ_file.py'
Oct 02 18:47:55 compute-0 sudo[34223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:55 compute-0 python3.9[34225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:47:55 compute-0 sudo[34223]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:56 compute-0 sudo[34375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpsfytxcnnerqxrlqzvzuagelpeqihyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430875.9491928-322-204391825862965/AnsiballZ_stat.py'
Oct 02 18:47:56 compute-0 sudo[34375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:56 compute-0 python3.9[34377]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:47:56 compute-0 sudo[34375]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:57 compute-0 sudo[34498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgezdbhusrgyipmzltflsjrjaqhkntu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430875.9491928-322-204391825862965/AnsiballZ_copy.py'
Oct 02 18:47:57 compute-0 sudo[34498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:57 compute-0 python3.9[34500]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759430875.9491928-322-204391825862965/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:47:57 compute-0 sudo[34498]: pam_unix(sudo:session): session closed for user root
Oct 02 18:47:58 compute-0 sudo[34650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxatncucgcqzeisyuweervezbglevhop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430877.5008194-337-41551128252988/AnsiballZ_systemd.py'
Oct 02 18:47:58 compute-0 sudo[34650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:58 compute-0 python3.9[34652]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:47:58 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 18:47:58 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 02 18:47:58 compute-0 kernel: Bridge firewalling registered
Oct 02 18:47:58 compute-0 systemd-modules-load[34656]: Inserted module 'br_netfilter'
Oct 02 18:47:58 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 18:47:58 compute-0 sudo[34650]: pam_unix(sudo:session): session closed for user root
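
Files under /etc/modules-load.d list kernel modules for systemd-modules-load to insert at every boot; restarting the unit applies the new 99-edpm.conf immediately, and the kernel lines above confirm br_netfilter came in (bridged traffic is no longer passed to arp/ip/ip6tables unless that module is loaded). A sketch, assuming br_netfilter is what the edpm-modprobe.conf.j2 template rendered; the actual file content is not shown in the journal:

    from pathlib import Path
    import subprocess

    # Assumed content: the journal only shows the copy, but the restart
    # right after logs "Inserted module 'br_netfilter'".
    Path("/etc/modules-load.d/99-edpm.conf").write_text("br_netfilter\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)
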
Oct 02 18:47:59 compute-0 sudo[34809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypkeuzfggupegyuphrfdowawingyssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430878.9841335-345-39952794393070/AnsiballZ_stat.py'
Oct 02 18:47:59 compute-0 sudo[34809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:47:59 compute-0 python3.9[34811]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:47:59 compute-0 sudo[34809]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:00 compute-0 sudo[34932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqqdpemmvfeiebzlekccqeasifxxrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430878.9841335-345-39952794393070/AnsiballZ_copy.py'
Oct 02 18:48:00 compute-0 sudo[34932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:00 compute-0 python3.9[34934]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759430878.9841335-345-39952794393070/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:48:00 compute-0 sudo[34932]: pam_unix(sudo:session): session closed for user root
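
The same drop-in pattern covers kernel parameters: files in /etc/sysctl.d are applied in lexical order, so the 99- prefix lets 99-edpm.conf override anything an earlier drop-in set. The rendered values are not logged, so the sketch below uses a hypothetical one; nothing takes effect until systemd-sysctl restarts at 18:48:22.

    from pathlib import Path

    # Hypothetical value; the real rendered content is not in the log.
    Path("/etc/sysctl.d/99-edpm.conf").write_text(
        "net.bridge.bridge-nf-call-iptables = 1\n"
    )
    # Lexical order decides precedence: 99-* overrides 50-*, and so on.
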
Oct 02 18:48:01 compute-0 sudo[35084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnbavgezryotnlzkcyiokdengxbyiude ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430880.6677063-363-1288063406685/AnsiballZ_dnf.py'
Oct 02 18:48:01 compute-0 sudo[35084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:01 compute-0 python3.9[35086]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:48:04 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:48:04 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:48:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:48:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:48:05 compute-0 systemd[1]: Reloading.
Oct 02 18:48:05 compute-0 systemd-rc-local-generator[35148]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:48:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:48:05 compute-0 sudo[35084]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:06 compute-0 python3.9[36180]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:48:07 compute-0 python3.9[37079]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 18:48:08 compute-0 python3.9[37760]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:48:09 compute-0 sudo[38568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpqjghooessqqxomwhytunndinjdrhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430888.9767063-402-56313652588614/AnsiballZ_command.py'
Oct 02 18:48:09 compute-0 sudo[38568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:09 compute-0 python3.9[38588]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:09 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 18:48:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 18:48:10 compute-0 sudo[38568]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:48:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:48:10 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.270s CPU time.
Oct 02 18:48:10 compute-0 systemd[1]: run-r2f277a1fd4dc422c8e1bc7e77c6132eb.service: Deactivated successfully.
Oct 02 18:48:11 compute-0 sudo[39621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvynbvoqjbbhqtsymmfwwutwnaxiobnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430890.719912-411-86284071230317/AnsiballZ_systemd.py'
Oct 02 18:48:11 compute-0 sudo[39621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:11 compute-0 python3.9[39623]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:48:11 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 18:48:11 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 18:48:11 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 18:48:11 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 18:48:11 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 18:48:11 compute-0 sudo[39621]: pam_unix(sudo:session): session closed for user root
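
tuned-adm profile records the selection (the earlier stat/slurp of /etc/tuned/active_profile checked the current one) and pulls the daemon up on demand; the follow-up systemd task then enables and restarts tuned so the profile sticks across reboots. Verifying the switch, as a sketch:

    import subprocess

    subprocess.run(["tuned-adm", "profile", "throughput-performance"], check=True)
    active = subprocess.run(["tuned-adm", "active"],
                            capture_output=True, text=True, check=True)
    print(active.stdout.strip())  # Current active profile: throughput-performance
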
Oct 02 18:48:12 compute-0 python3.9[39784]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 18:48:15 compute-0 sudo[39934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrysnhyvhpxzryyhatguucpltburcrdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430894.7446377-468-207622801949546/AnsiballZ_systemd.py'
Oct 02 18:48:15 compute-0 sudo[39934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:15 compute-0 python3.9[39936]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:48:15 compute-0 systemd[1]: Reloading.
Oct 02 18:48:15 compute-0 systemd-rc-local-generator[39963]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:48:15 compute-0 sudo[39934]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:16 compute-0 sudo[40123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbucqmkqiytkmsshjesyzmiajyiermzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430895.9802234-468-228713983597876/AnsiballZ_systemd.py'
Oct 02 18:48:16 compute-0 sudo[40123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:16 compute-0 python3.9[40125]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:48:16 compute-0 systemd[1]: Reloading.
Oct 02 18:48:16 compute-0 systemd-rc-local-generator[40150]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:48:17 compute-0 sudo[40123]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:17 compute-0 sudo[40312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srsrgguhgutltzoczbgqxezqahuwhdkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430897.3693204-484-227635490282256/AnsiballZ_command.py'
Oct 02 18:48:17 compute-0 sudo[40312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:17 compute-0 python3.9[40314]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:18 compute-0 sudo[40312]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:18 compute-0 sudo[40465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkknxyeljyosgexyhsxamklznuzdizoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430898.2317193-492-253335036896719/AnsiballZ_command.py'
Oct 02 18:48:18 compute-0 sudo[40465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:18 compute-0 python3.9[40467]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:18 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 02 18:48:18 compute-0 sudo[40465]: pam_unix(sudo:session): session closed for user root
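
That completes the swap file begun at 18:47:38: dd preallocated 1024 MiB of zeroes, the file task clamped it to mode 0600, mkswap wrote the signature, and swapon enabled it; the kernel reports 1048572 KiB, i.e. 1 GiB minus the 4 KiB header page mkswap reserves. The whole sequence, condensed:

    import subprocess

    # 1024 blocks of 1 MiB = 1 GiB, matching "count=1024 bs=1M" above.
    subprocess.run(["dd", "if=/dev/zero", "of=/swap", "count=1024", "bs=1M"], check=True)
    subprocess.run(["chmod", "0600", "/swap"], check=True)  # swap must not be world-readable
    subprocess.run(["mkswap", "/swap"], check=True)
    subprocess.run(["swapon", "/swap"], check=True)
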
Oct 02 18:48:19 compute-0 sudo[40618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffbiheidlunporuafauhaetnntihjzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430899.0481753-500-75732313090451/AnsiballZ_command.py'
Oct 02 18:48:19 compute-0 sudo[40618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:19 compute-0 python3.9[40620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:21 compute-0 sudo[40618]: pam_unix(sudo:session): session closed for user root
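
update-ca-trust consolidates everything under /etc/pki/ca-trust/source/anchors, including the tls-ca-bundle.pem copied at 18:47:44, into the system-wide bundles such as /etc/pki/tls/certs/ca-bundle.crt. A quick post-check, as a sketch:

    import subprocess

    subprocess.run(["update-ca-trust"], check=True)
    # Spot-check the consolidated OpenSSL bundle on EL9.
    with open("/etc/pki/tls/certs/ca-bundle.crt") as fh:
        assert "BEGIN CERTIFICATE" in fh.read()
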
Oct 02 18:48:21 compute-0 sudo[40780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlahridfzjnpnvqzndmtrheplaiawgjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430901.431367-508-16289085869077/AnsiballZ_command.py'
Oct 02 18:48:21 compute-0 sudo[40780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:22 compute-0 python3.9[40782]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:22 compute-0 sudo[40780]: pam_unix(sudo:session): session closed for user root
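
Writing 2 to /sys/kernel/mm/ksm/run stops kernel samepage merging and un-merges every already-shared page, complementing the ksm/ksmtuned unit disablement just before. One caveat worth noting: ansible.legacy.command with _uses_shell=False performs no shell redirection, so the > above appears to reach echo as a literal argument; writing the sysfs file directly sidesteps that:

    # 0 = stop, 1 = run, 2 = stop and un-merge all merged pages.
    with open("/sys/kernel/mm/ksm/run", "w") as run:
        run.write("2")
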
Oct 02 18:48:22 compute-0 sudo[40933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbesiansgogzusuarnfjhsxmnqvowsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430902.245745-516-227735133780485/AnsiballZ_systemd.py'
Oct 02 18:48:22 compute-0 sudo[40933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:22 compute-0 python3.9[40935]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:48:22 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 18:48:22 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 02 18:48:22 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 02 18:48:22 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:48:22 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 18:48:22 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:48:22 compute-0 sudo[40933]: pam_unix(sudo:session): session closed for user root
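
Restarting systemd-sysctl re-applies every drop-in under /etc/sysctl.d, which is what finally activates the 99-edpm.conf written at 18:48:00. Applied values can be read back from /proc/sys, for example:

    def sysctl_get(name):
        """Read a sysctl value from /proc/sys (dots map to slashes)."""
        with open("/proc/sys/" + name.replace(".", "/")) as fh:
            return fh.read().strip()

    # e.g. sysctl_get("net.ipv4.ip_forward")
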
Oct 02 18:48:23 compute-0 sshd-session[27981]: Connection closed by 192.168.122.30 port 52838
Oct 02 18:48:23 compute-0 sshd-session[27978]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:48:23 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 02 18:48:23 compute-0 systemd[1]: session-8.scope: Consumed 2min 25.681s CPU time.
Oct 02 18:48:23 compute-0 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Oct 02 18:48:23 compute-0 systemd-logind[793]: Removed session 8.
Oct 02 18:48:29 compute-0 sshd-session[40965]: Accepted publickey for zuul from 192.168.122.30 port 40520 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:48:29 compute-0 systemd-logind[793]: New session 9 of user zuul.
Oct 02 18:48:29 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 02 18:48:29 compute-0 sshd-session[40965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:48:30 compute-0 python3.9[41118]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:31 compute-0 sudo[41272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctfiyzjfmdrfujozpkphwdtamxwwqxds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430911.4365985-36-231079485605286/AnsiballZ_getent.py'
Oct 02 18:48:32 compute-0 sudo[41272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:32 compute-0 python3.9[41274]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 18:48:32 compute-0 sudo[41272]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:32 compute-0 sudo[41425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfbizdkvxwftyzqwqskjaolpuxzvtau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430912.4359212-44-150827919979435/AnsiballZ_group.py'
Oct 02 18:48:32 compute-0 sudo[41425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:33 compute-0 python3.9[41427]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:48:33 compute-0 groupadd[41428]: group added to /etc/group: name=openvswitch, GID=42476
Oct 02 18:48:33 compute-0 groupadd[41428]: group added to /etc/gshadow: name=openvswitch
Oct 02 18:48:33 compute-0 groupadd[41428]: new group: name=openvswitch, GID=42476
Oct 02 18:48:33 compute-0 sudo[41425]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:34 compute-0 sudo[41583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzsuzpddzpolervixzerkxojrpmfqgzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430913.5045602-52-207947420857946/AnsiballZ_user.py'
Oct 02 18:48:34 compute-0 sudo[41583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:34 compute-0 python3.9[41585]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 18:48:34 compute-0 useradd[41587]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 18:48:34 compute-0 useradd[41587]: add 'openvswitch' to group 'hugetlbfs'
Oct 02 18:48:34 compute-0 useradd[41587]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 02 18:48:34 compute-0 sudo[41583]: pam_unix(sudo:session): session closed for user root
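
The openvswitch account gets the same fixed-ID treatment (UID/GID 42476) plus supplementary membership in hugetlbfs, which lets the switch map hugepage-backed memory shared with qemu guests, the vhost-user path the sockets directory above serves. A quick check:

    import grp, pwd

    ovs = pwd.getpwnam("openvswitch")
    assert ovs.pw_uid == 42476
    assert "openvswitch" in grp.getgrnam("hugetlbfs").gr_mem
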
Oct 02 18:48:35 compute-0 sudo[41743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdpdvtfiewmgwiprgxqgfvswgtjbmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430914.8304741-62-68402164388655/AnsiballZ_setup.py'
Oct 02 18:48:35 compute-0 sudo[41743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:35 compute-0 python3.9[41745]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:48:35 compute-0 sudo[41743]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:36 compute-0 sudo[41827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vinumimzkbszneljcbtdebkykpywznjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430914.8304741-62-68402164388655/AnsiballZ_dnf.py'
Oct 02 18:48:36 compute-0 sudo[41827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:36 compute-0 python3.9[41829]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:48:38 compute-0 sudo[41827]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:39 compute-0 sudo[41991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxlbytlleuxhtlsdzqddhznfustdfjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430919.193732-76-83664557989641/AnsiballZ_dnf.py'
Oct 02 18:48:39 compute-0 sudo[41991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:39 compute-0 python3.9[41993]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:48:51 compute-0 kernel: SELinux:  Converting 2725 SID table entries...
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:48:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:48:51 compute-0 groupadd[42016]: group added to /etc/group: name=unbound, GID=993
Oct 02 18:48:51 compute-0 groupadd[42016]: group added to /etc/gshadow: name=unbound
Oct 02 18:48:51 compute-0 groupadd[42016]: new group: name=unbound, GID=993
Oct 02 18:48:51 compute-0 useradd[42023]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 02 18:48:52 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 02 18:48:52 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 02 18:48:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:48:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:48:53 compute-0 systemd[1]: Reloading.
Oct 02 18:48:53 compute-0 systemd-sysv-generator[42524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:48:53 compute-0 systemd-rc-local-generator[42521]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:48:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:48:54 compute-0 sudo[41991]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:48:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:48:54 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.080s CPU time.
Oct 02 18:48:54 compute-0 systemd[1]: run-r60295077e9354b159abbdd7e88df06e4.service: Deactivated successfully.
Oct 02 18:48:55 compute-0 sudo[43093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkyphnlnjutsxrsyekhactutxtjfrgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430934.6090944-84-99113171916536/AnsiballZ_systemd.py'
Oct 02 18:48:55 compute-0 sudo[43093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:55 compute-0 python3.9[43095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:48:55 compute-0 systemd[1]: Reloading.
Oct 02 18:48:55 compute-0 systemd-rc-local-generator[43126]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:48:55 compute-0 systemd-sysv-generator[43129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:48:56 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 02 18:48:56 compute-0 chown[43137]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 02 18:48:56 compute-0 ovs-ctl[43142]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 02 18:48:56 compute-0 ovs-ctl[43142]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 02 18:48:56 compute-0 ovs-ctl[43142]: Starting ovsdb-server [  OK  ]
Oct 02 18:48:56 compute-0 ovs-vsctl[43191]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 02 18:48:56 compute-0 ovs-vsctl[43211]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1de2af17-a89c-45e5-97c6-db433f26bbb6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 02 18:48:56 compute-0 ovs-ctl[43142]: Configuring Open vSwitch system IDs [  OK  ]
Oct 02 18:48:56 compute-0 ovs-vsctl[43217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 18:48:56 compute-0 ovs-ctl[43142]: Enabling remote OVSDB managers [  OK  ]
Oct 02 18:48:56 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 02 18:48:56 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 02 18:48:56 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 02 18:48:56 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 02 18:48:56 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 02 18:48:56 compute-0 ovs-ctl[43261]: Inserting openvswitch module [  OK  ]
Oct 02 18:48:56 compute-0 ovs-ctl[43230]: Starting ovs-vswitchd [  OK  ]
Oct 02 18:48:56 compute-0 ovs-ctl[43230]: Enabling remote OVSDB managers [  OK  ]
Oct 02 18:48:56 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 02 18:48:56 compute-0 ovs-vsctl[43283]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 18:48:56 compute-0 systemd[1]: Starting Open vSwitch...
Oct 02 18:48:56 compute-0 systemd[1]: Finished Open vSwitch.
Oct 02 18:48:56 compute-0 sudo[43093]: pam_unix(sudo:session): session closed for user root
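
This is a first start of openvswitch.service: ovs-ctl finds no /etc/openvswitch/conf.db, creates an empty one, launches ovsdb-server, seeds system IDs with ovs-vsctl --no-wait (usable before ovs-vswitchd is up), loads the openvswitch kernel module, and starts the forwarding daemon. Once both units are running, the seeded values can be read back, for example:

    import subprocess

    # external_ids is the column ovs-ctl populated (system-id, rundir, hostname).
    out = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
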
Oct 02 18:48:57 compute-0 python3.9[43434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:58 compute-0 sudo[43584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aziolhshwwvwnxbzuqkfqyvqttzgauhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430938.152416-102-234481276091488/AnsiballZ_sefcontext.py'
Oct 02 18:48:58 compute-0 sudo[43584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:58 compute-0 python3.9[43586]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 18:49:00 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:49:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:49:00 compute-0 sudo[43584]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:01 compute-0 python3.9[43741]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:49:02 compute-0 sudo[43897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnxfjdhjpawltqfnpcntcspdvujiqsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430941.915406-120-229714957985771/AnsiballZ_dnf.py'
Oct 02 18:49:02 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 02 18:49:02 compute-0 sudo[43897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:02 compute-0 python3.9[43899]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:49:03 compute-0 sudo[43897]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:04 compute-0 sudo[44050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtwxdhzhirodhadtfxlqibpnpgxclpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430944.0957217-128-198147740650284/AnsiballZ_command.py'
Oct 02 18:49:04 compute-0 sudo[44050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:04 compute-0 python3.9[44052]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:49:05 compute-0 sudo[44050]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:06 compute-0 sudo[44337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqerwzagmeakpbzuignwphhbtdkvhmpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430945.9001398-136-79831472675112/AnsiballZ_file.py'
Oct 02 18:49:06 compute-0 sudo[44337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:06 compute-0 python3.9[44339]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 18:49:06 compute-0 sudo[44337]: pam_unix(sudo:session): session closed for user root
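
community.general.sefcontext adds a persistent file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t; with reload=True the policy store is rebuilt, which is what produced the SELinux converting messages at 18:49:00, and the file task then creates the directory so it is born with the right label. The CLI equivalent, as a sketch:

    import subprocess

    # Persistent rule, as managed by community.general.sefcontext...
    subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t",
                    "/var/lib/edpm-config(/.*)?"], check=True)
    # ...then relabel anything already on disk to match.
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)
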
Oct 02 18:49:07 compute-0 python3.9[44489]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:49:08 compute-0 sudo[44641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgqexmdepjmwclelxcfurnpyeutmhttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430947.7112124-152-63853273808871/AnsiballZ_dnf.py'
Oct 02 18:49:08 compute-0 sudo[44641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:08 compute-0 python3.9[44643]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:49:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:49:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:49:10 compute-0 systemd[1]: Reloading.
Oct 02 18:49:10 compute-0 systemd-rc-local-generator[44678]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:49:10 compute-0 systemd-sysv-generator[44683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:49:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:49:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:49:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:49:11 compute-0 systemd[1]: run-rdb4cef8a48ff4012b31528b6573f0bf8.service: Deactivated successfully.
Oct 02 18:49:11 compute-0 sudo[44641]: pam_unix(sudo:session): session closed for user root
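
NetworkManager-ovs ships NetworkManager's OVS device plugin, which is only picked up at daemon start; that is why the next task bounces NetworkManager, and the fresh instance indeed logs NMOvsFactory being loaded at 18:49:12. A sketch that lists the installed device plugins, using the versioned plugin directory shown in the log lines below:

    from pathlib import Path

    # Versioned plugin directory taken from the NetworkManager log lines below.
    plugins = Path("/usr/lib64/NetworkManager/1.54.1-1.el9")
    print(sorted(p.name for p in plugins.glob("libnm-device-plugin-*.so")))
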
Oct 02 18:49:11 compute-0 sudo[44957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryyfnulmifuxqkaonmdkdlszzgtmkjdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430951.39238-160-168259438949923/AnsiballZ_systemd.py'
Oct 02 18:49:11 compute-0 sudo[44957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:12 compute-0 python3.9[44959]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:49:12 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 18:49:12 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 02 18:49:12 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2083] caught SIGTERM, shutting down normally.
Oct 02 18:49:12 compute-0 systemd[1]: Stopping Network Manager...
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2105] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2105] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2105] dhcp4 (eth0): state changed no lease
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2109] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:49:12 compute-0 NetworkManager[3939]: <info>  [1759430952.2191] exiting (success)
Oct 02 18:49:12 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:49:12 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 18:49:12 compute-0 systemd[1]: Stopped Network Manager.
Oct 02 18:49:12 compute-0 systemd[1]: NetworkManager.service: Consumed 15.120s CPU time, 4.0M memory peak, read 0B from disk, written 11.0K to disk.
Oct 02 18:49:12 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:49:12 compute-0 systemd[1]: Starting Network Manager...
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.3334] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:335cdeae-868d-405b-8b1c-eba838d0b699)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.3338] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.3412] manager[0x55bddafc1090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:49:12 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 18:49:12 compute-0 systemd[1]: Started Hostname Service.
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4675] hostname: hostname: using hostnamed
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4675] hostname: static hostname changed from (none) to "compute-0"
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4680] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4684] manager[0x55bddafc1090]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4685] manager[0x55bddafc1090]: rfkill: WWAN hardware radio set enabled
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4708] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4718] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4719] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4719] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4719] manager: Networking is enabled by state file
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4721] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4725] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4751] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4762] dhcp: init: Using DHCP client 'internal'
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4764] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4768] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4773] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4779] device (lo): Activation: starting connection 'lo' (ad43cc60-0861-4113-b2c8-fcae658eed34)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4785] device (eth0): carrier: link connected
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4788] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4792] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4792] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4797] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4802] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4807] device (eth1): carrier: link connected
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4810] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4814] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (881fd00f-2862-5b6b-b9e7-98cd71b77b44) (indicated)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4814] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4818] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4823] device (eth1): Activation: starting connection 'ci-private-network' (881fd00f-2862-5b6b-b9e7-98cd71b77b44)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4831] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:49:12 compute-0 systemd[1]: Started Network Manager.
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4839] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4841] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4842] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4844] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4846] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4848] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4849] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4852] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4857] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4859] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4872] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4890] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4916] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.4921] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:49:12 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5023] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5040] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5044] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5048] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5060] device (lo): Activation: successful, device activated.
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5074] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5080] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5087] device (eth1): Activation: successful, device activated.
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5107] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5111] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5118] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5125] device (eth0): Activation: successful, device activated.
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5137] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:49:12 compute-0 NetworkManager[44968]: <info>  [1759430952.5142] manager: startup complete
Oct 02 18:49:12 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 02 18:49:12 compute-0 sudo[44957]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:13 compute-0 sudo[45183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfprwnzfunwsombtxpxfsgsazhjcmhcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430952.786723-168-17824419429590/AnsiballZ_dnf.py'
Oct 02 18:49:13 compute-0 sudo[45183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:13 compute-0 python3.9[45185]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:49:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:49:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:49:18 compute-0 systemd[1]: Reloading.
Oct 02 18:49:18 compute-0 systemd-sysv-generator[45243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:49:18 compute-0 systemd-rc-local-generator[45240]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:49:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:49:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:49:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:49:19 compute-0 systemd[1]: run-r25db1f6c36fc4bdcbf29d76d96420420.service: Deactivated successfully.
Oct 02 18:49:19 compute-0 sudo[45183]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:20 compute-0 sudo[45646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyhphrjqqruwfdxyspxgzlvcjemobpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430959.8278317-180-156058211380926/AnsiballZ_stat.py'
Oct 02 18:49:20 compute-0 sudo[45646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:20 compute-0 python3.9[45648]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:49:20 compute-0 sudo[45646]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:21 compute-0 sudo[45798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgdaschtinvjhpwydinbyxxivspznln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430960.6646128-189-36299382141706/AnsiballZ_ini_file.py'
Oct 02 18:49:21 compute-0 sudo[45798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:21 compute-0 python3.9[45800]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:21 compute-0 sudo[45798]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:22 compute-0 sudo[45952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqselzoayrwyhutwebudknnzspgabzlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430961.6890075-199-64595260885773/AnsiballZ_ini_file.py'
Oct 02 18:49:22 compute-0 sudo[45952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:22 compute-0 python3.9[45954]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:22 compute-0 sudo[45952]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:22 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:49:22 compute-0 sudo[46104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqckhrmfsmxwjyqjownytbfuzqipajwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430962.4725316-199-20792672409720/AnsiballZ_ini_file.py'
Oct 02 18:49:22 compute-0 sudo[46104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:23 compute-0 python3.9[46106]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:23 compute-0 sudo[46104]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:23 compute-0 sudo[46256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cynflilipdtbsagxevinwnqoznesjwjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430963.2874596-214-149133369397138/AnsiballZ_ini_file.py'
Oct 02 18:49:23 compute-0 sudo[46256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:23 compute-0 python3.9[46258]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:23 compute-0 sudo[46256]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:24 compute-0 sudo[46409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgpnwhwissvoxudvuaytfpieyexwdoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430964.116343-214-215223882647094/AnsiballZ_ini_file.py'
Oct 02 18:49:24 compute-0 sudo[46409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:24 compute-0 python3.9[46411]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:24 compute-0 sudo[46409]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:25 compute-0 sudo[46561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspxyonifpudwcquxijxfjoprnlrvdhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430964.9147959-229-141213425529025/AnsiballZ_stat.py'
Oct 02 18:49:25 compute-0 sudo[46561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:25 compute-0 python3.9[46563]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:49:25 compute-0 sudo[46561]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:26 compute-0 sudo[46684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtetbzxazinmqhbdjrmvofhgyvdxafbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430964.9147959-229-141213425529025/AnsiballZ_copy.py'
Oct 02 18:49:26 compute-0 sudo[46684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:26 compute-0 python3.9[46686]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759430964.9147959-229-141213425529025/.source _original_basename=.xo_q8fof follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:26 compute-0 sudo[46684]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:27 compute-0 sudo[46836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjdhnpsronnkeufezruzzbryxullmlqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430966.8972442-244-12979145478270/AnsiballZ_file.py'
Oct 02 18:49:27 compute-0 sudo[46836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:27 compute-0 python3.9[46838]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:27 compute-0 sudo[46836]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:28 compute-0 sudo[46988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mznpkouvjmdjbmepwrltvnsnsjngeqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430967.8173437-252-198355220060084/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 02 18:49:28 compute-0 sudo[46988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:28 compute-0 python3.9[46990]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 02 18:49:28 compute-0 sudo[46988]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:29 compute-0 sudo[47140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjxmtgbswqzzyfjuexsnnvtqczgsajyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430968.8690813-261-256526180020645/AnsiballZ_file.py'
Oct 02 18:49:29 compute-0 sudo[47140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:29 compute-0 python3.9[47142]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:29 compute-0 sudo[47140]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:30 compute-0 sudo[47292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmneswhvjtobrixvnzbqbjypuvjujbjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430969.7767372-271-267540892563035/AnsiballZ_stat.py'
Oct 02 18:49:30 compute-0 sudo[47292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:30 compute-0 sudo[47292]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:30 compute-0 sudo[47415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrecpcypcgiesdoqvsalzyxligxjzqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430969.7767372-271-267540892563035/AnsiballZ_copy.py'
Oct 02 18:49:30 compute-0 sudo[47415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:31 compute-0 sudo[47415]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:31 compute-0 sudo[47567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eygkxnjlgnvffjnfhxumgagmgqsensav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430971.3212101-286-215224256432402/AnsiballZ_slurp.py'
Oct 02 18:49:31 compute-0 sudo[47567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:32 compute-0 python3.9[47569]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 02 18:49:32 compute-0 sudo[47567]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:33 compute-0 sudo[47742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixilcomtcbwtkioimzbgrndoaexnnwx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430972.3578315-295-210854130616395/async_wrapper.py j14059809685 300 /home/zuul/.ansible/tmp/ansible-tmp-1759430972.3578315-295-210854130616395/AnsiballZ_edpm_os_net_config.py _'
Oct 02 18:49:33 compute-0 sudo[47742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:33 compute-0 ansible-async_wrapper.py[47744]: Invoked with j14059809685 300 /home/zuul/.ansible/tmp/ansible-tmp-1759430972.3578315-295-210854130616395/AnsiballZ_edpm_os_net_config.py _
Oct 02 18:49:33 compute-0 ansible-async_wrapper.py[47747]: Starting module and watcher
Oct 02 18:49:33 compute-0 ansible-async_wrapper.py[47747]: Start watching 47748 (300)
Oct 02 18:49:33 compute-0 ansible-async_wrapper.py[47748]: Start module (47748)
Oct 02 18:49:33 compute-0 ansible-async_wrapper.py[47744]: Return async_wrapper task started.
Oct 02 18:49:33 compute-0 sudo[47742]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:33 compute-0 python3.9[47749]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 02 18:49:34 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 02 18:49:34 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 02 18:49:34 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 02 18:49:34 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 02 18:49:34 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.0578] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.0606] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1303] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1306] audit: op="connection-add" uuid="43ae1d40-28c0-4884-a70f-d5e6d0a10d8d" name="br-ex-br" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1324] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1325] audit: op="connection-add" uuid="e6fb5a05-a12b-4e44-90d0-377918e4bc47" name="br-ex-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1337] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1338] audit: op="connection-add" uuid="56e9627d-0aa1-4a9a-ba9f-b5eafd8ed8cc" name="eth1-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1351] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1353] audit: op="connection-add" uuid="7308b1fe-50b2-4cf1-acc5-cc9910d6af96" name="vlan20-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1366] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1368] audit: op="connection-add" uuid="431f6ab8-2cbb-496f-bec9-ecb333dd3727" name="vlan21-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1380] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1382] audit: op="connection-add" uuid="17b3f744-397b-463f-9bb2-da999775d680" name="vlan22-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1394] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1396] audit: op="connection-add" uuid="e6f276cf-7084-44aa-ac95-c43be8730c2f" name="vlan23-port" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1417] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1434] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1436] audit: op="connection-add" uuid="26690725-c05b-4395-993d-a7e939bc7c5c" name="br-ex-if" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1479] audit: op="connection-update" uuid="881fd00f-2862-5b6b-b9e7-98cd71b77b44" name="ci-private-network" args="ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routing-rules,ovs-external-ids.data,connection.port-type,connection.timestamp,connection.slave-type,connection.controller,connection.master,ovs-interface.type" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1498] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1500] audit: op="connection-add" uuid="5afe3410-dd29-42b0-ae03-d461ab1f3ef2" name="vlan20-if" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1516] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1517] audit: op="connection-add" uuid="5c395391-fb06-4c66-81c8-bbb49bfbb308" name="vlan21-if" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1533] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1535] audit: op="connection-add" uuid="efb576e4-f69a-4a4c-81bd-a381bae0ca28" name="vlan22-if" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1551] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1553] audit: op="connection-add" uuid="d18c8e54-efca-4a96-a021-0ea503b66822" name="vlan23-if" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1567] audit: op="connection-delete" uuid="ea663df2-53d3-37b6-9ced-9449cef09dde" name="Wired connection 1" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1581] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1592] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1596] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (43ae1d40-28c0-4884-a70f-d5e6d0a10d8d)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1598] audit: op="connection-activate" uuid="43ae1d40-28c0-4884-a70f-d5e6d0a10d8d" name="br-ex-br" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1600] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1607] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1611] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (e6fb5a05-a12b-4e44-90d0-377918e4bc47)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1614] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1620] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1624] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (56e9627d-0aa1-4a9a-ba9f-b5eafd8ed8cc)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1626] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1633] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1638] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7308b1fe-50b2-4cf1-acc5-cc9910d6af96)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1640] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1648] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1653] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (431f6ab8-2cbb-496f-bec9-ecb333dd3727)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1656] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1662] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1666] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (17b3f744-397b-463f-9bb2-da999775d680)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1669] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1676] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1681] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e6f276cf-7084-44aa-ac95-c43be8730c2f)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1682] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1685] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1687] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1694] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1698] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1702] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (26690725-c05b-4395-993d-a7e939bc7c5c)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1703] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1705] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1707] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1708] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1709] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1718] device (eth1): disconnecting for new activation request.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1719] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1721] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1723] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1724] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1727] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1730] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1733] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5afe3410-dd29-42b0-ae03-d461ab1f3ef2)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1734] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1736] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1738] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1739] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1741] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1747] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1750] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5c395391-fb06-4c66-81c8-bbb49bfbb308)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1751] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1756] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1757] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1758] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1760] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1764] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1767] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (efb576e4-f69a-4a4c-81bd-a381bae0ca28)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1768] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1770] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1772] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1773] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1775] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1779] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1782] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (d18c8e54-efca-4a96-a021-0ea503b66822)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1783] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1785] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1787] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1788] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1789] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1801] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1803] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1806] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1808] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1813] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1816] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1819] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1822] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1825] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1829] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1832] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1835] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1837] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1841] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1845] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1848] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1850] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1854] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1857] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1860] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1862] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 systemd-udevd[47756]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:49:36 compute-0 kernel: Timeout policy base is empty
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1881] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1890] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1890] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1891] dhcp4 (eth0): state changed no lease
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1893] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1910] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1915] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47750 uid=0 result="fail" reason="Device is not activated"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1919] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 02 18:49:36 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1935] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.1950] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2009] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2013] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2080] device (eth1): disconnecting for new activation request.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2081] audit: op="connection-activate" uuid="881fd00f-2862-5b6b-b9e7-98cd71b77b44" name="ci-private-network" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2114] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2257] device (eth1): Activation: starting connection 'ci-private-network' (881fd00f-2862-5b6b-b9e7-98cd71b77b44)
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2267] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47750 uid=0 result="success"
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2268] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2276] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2279] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2288] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2293] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2300] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2302] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2303] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2304] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2306] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2307] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2309] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2317] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2321] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2326] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2331] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2335] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2339] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2344] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2350] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2354] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2358] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2362] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2368] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2375] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2380] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2416] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2418] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2423] device (eth1): Activation: successful, device activated.
Oct 02 18:49:36 compute-0 kernel: br-ex: entered promiscuous mode
Oct 02 18:49:36 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 02 18:49:36 compute-0 kernel: vlan22: entered promiscuous mode
Oct 02 18:49:36 compute-0 systemd-udevd[47754]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:49:36 compute-0 kernel: vlan20: entered promiscuous mode
Oct 02 18:49:36 compute-0 systemd-udevd[47755]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2746] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2765] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 kernel: vlan23: entered promiscuous mode
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2800] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2806] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2816] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2879] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.2885] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:49:36 compute-0 kernel: vlan21: entered promiscuous mode
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3013] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3034] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3051] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3056] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3058] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3067] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3075] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3080] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3087] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3103] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3104] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3120] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3155] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3156] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3157] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3161] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3165] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:49:36 compute-0 NetworkManager[44968]: <info>  [1759430976.3170] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:49:37 compute-0 NetworkManager[44968]: <info>  [1759430977.4440] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47750 uid=0 result="success"
Oct 02 18:49:37 compute-0 sudo[48106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chggjvpuwwdabjgosahtyepvgnaexrym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430976.8930557-295-118021562640305/AnsiballZ_async_status.py'
Oct 02 18:49:37 compute-0 sudo[48106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:37 compute-0 python3.9[48108]: ansible-ansible.legacy.async_status Invoked with jid=j14059809685.47744 mode=status _async_dir=/root/.ansible_async
Oct 02 18:49:37 compute-0 sudo[48106]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:37 compute-0 NetworkManager[44968]: <info>  [1759430977.7385] checkpoint[0x55bddaf97950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 02 18:49:37 compute-0 NetworkManager[44968]: <info>  [1759430977.7389] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.1251] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.1270] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.4137] audit: op="networking-control" arg="global-dns-configuration" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.4167] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.4201] audit: op="networking-control" arg="global-dns-configuration" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.4245] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.6018] checkpoint[0x55bddaf97a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 02 18:49:38 compute-0 NetworkManager[44968]: <info>  [1759430978.6023] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47750 uid=0 result="success"
Oct 02 18:49:38 compute-0 ansible-async_wrapper.py[47748]: Module complete (47748)
Oct 02 18:49:38 compute-0 ansible-async_wrapper.py[47747]: 47748 still running (300)
Oct 02 18:49:41 compute-0 sudo[48213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclumyxtgbjblzoleeqmopbgfjnzxyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430976.8930557-295-118021562640305/AnsiballZ_async_status.py'
Oct 02 18:49:41 compute-0 sudo[48213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:41 compute-0 python3.9[48215]: ansible-ansible.legacy.async_status Invoked with jid=j14059809685.47744 mode=status _async_dir=/root/.ansible_async
Oct 02 18:49:41 compute-0 sudo[48213]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:41 compute-0 sudo[48312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwkmwirrhvduymqlvtumluxuzwbglvam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430976.8930557-295-118021562640305/AnsiballZ_async_status.py'
Oct 02 18:49:41 compute-0 sudo[48312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:41 compute-0 python3.9[48314]: ansible-ansible.legacy.async_status Invoked with jid=j14059809685.47744 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 18:49:41 compute-0 sudo[48312]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:42 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:49:42 compute-0 sudo[48466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfszjtfcflnpzjzydmexwftcofqnophm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430982.2156763-322-148467796793677/AnsiballZ_stat.py'
Oct 02 18:49:42 compute-0 sudo[48466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:42 compute-0 python3.9[48468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:49:42 compute-0 sudo[48466]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:43 compute-0 sudo[48589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qemculgdthxmhuivmazzaawelquhgmbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430982.2156763-322-148467796793677/AnsiballZ_copy.py'
Oct 02 18:49:43 compute-0 sudo[48589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:43 compute-0 python3.9[48591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759430982.2156763-322-148467796793677/.source.returncode _original_basename=.a2r4yyfl follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:43 compute-0 sudo[48589]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:43 compute-0 ansible-async_wrapper.py[47747]: Done in kid B.
Oct 02 18:49:44 compute-0 sudo[48742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmpjvhclugsihfkhdeyjifloxopnfkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430983.8409376-338-11609631916230/AnsiballZ_stat.py'
Oct 02 18:49:44 compute-0 sudo[48742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:44 compute-0 python3.9[48744]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:49:44 compute-0 sudo[48742]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:45 compute-0 sudo[48865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnmegwbjwzoobeyxpafpxydslvnqoave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430983.8409376-338-11609631916230/AnsiballZ_copy.py'
Oct 02 18:49:45 compute-0 sudo[48865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:45 compute-0 python3.9[48867]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759430983.8409376-338-11609631916230/.source.cfg _original_basename=.e10pd3cg follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:49:45 compute-0 sudo[48865]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:46 compute-0 sudo[49017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqqgzwckfxakkrvlnkkvrsshqmmfsbdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430985.5378914-353-248983358869886/AnsiballZ_systemd.py'
Oct 02 18:49:46 compute-0 sudo[49017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:46 compute-0 python3.9[49019]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:49:46 compute-0 systemd[1]: Reloading Network Manager...
Oct 02 18:49:46 compute-0 NetworkManager[44968]: <info>  [1759430986.4366] audit: op="reload" arg="0" pid=49023 uid=0 result="success"
Oct 02 18:49:46 compute-0 NetworkManager[44968]: <info>  [1759430986.4377] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 02 18:49:46 compute-0 systemd[1]: Reloaded Network Manager.
Oct 02 18:49:46 compute-0 sudo[49017]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:46 compute-0 sshd-session[40968]: Connection closed by 192.168.122.30 port 40520
Oct 02 18:49:46 compute-0 sshd-session[40965]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:49:46 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 02 18:49:46 compute-0 systemd[1]: session-9.scope: Consumed 58.200s CPU time.
Oct 02 18:49:46 compute-0 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Oct 02 18:49:46 compute-0 systemd-logind[793]: Removed session 9.
Oct 02 18:49:52 compute-0 sshd-session[49054]: Accepted publickey for zuul from 192.168.122.30 port 37342 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:49:52 compute-0 systemd-logind[793]: New session 10 of user zuul.
Oct 02 18:49:52 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 02 18:49:52 compute-0 sshd-session[49054]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:49:53 compute-0 python3.9[49207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:49:54 compute-0 python3.9[49362]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:49:56 compute-0 python3.9[49555]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:49:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:49:56 compute-0 sshd-session[49057]: Connection closed by 192.168.122.30 port 37342
Oct 02 18:49:56 compute-0 sshd-session[49054]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:49:56 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 02 18:49:56 compute-0 systemd[1]: session-10.scope: Consumed 3.078s CPU time.
Oct 02 18:49:56 compute-0 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Oct 02 18:49:56 compute-0 systemd-logind[793]: Removed session 10.
Oct 02 18:50:02 compute-0 sshd-session[49585]: Accepted publickey for zuul from 192.168.122.30 port 42390 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:50:02 compute-0 systemd-logind[793]: New session 11 of user zuul.
Oct 02 18:50:02 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 02 18:50:02 compute-0 sshd-session[49585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:50:03 compute-0 python3.9[49739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:50:05 compute-0 python3.9[49893]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:50:05 compute-0 sudo[50047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqkywmsfrvypmqyqqoyuxubqxnmeikvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431005.517782-40-47521592704912/AnsiballZ_setup.py'
Oct 02 18:50:05 compute-0 sudo[50047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:06 compute-0 python3.9[50049]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:50:06 compute-0 sudo[50047]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:07 compute-0 sudo[50132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkfpfsijicvnfmcywheglwygtjfkvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431005.517782-40-47521592704912/AnsiballZ_dnf.py'
Oct 02 18:50:07 compute-0 sudo[50132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:07 compute-0 python3.9[50134]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:50:08 compute-0 sudo[50132]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:09 compute-0 sudo[50285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrtmnlzsvawfiiqehqzosfnqqcyfzgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431008.7110026-52-97291018721012/AnsiballZ_setup.py'
Oct 02 18:50:09 compute-0 sudo[50285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:09 compute-0 python3.9[50287]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:50:09 compute-0 sudo[50285]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:10 compute-0 sudo[50480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypylfcxxuefptkxomudkvujranyfzlqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431010.2041304-63-228116434371658/AnsiballZ_file.py'
Oct 02 18:50:10 compute-0 sudo[50480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:11 compute-0 python3.9[50482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:11 compute-0 sudo[50480]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:11 compute-0 sudo[50632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekggdurihpieqbmfnietdrqcjjqtmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431011.264253-71-152190020945275/AnsiballZ_command.py'
Oct 02 18:50:11 compute-0 sudo[50632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:12 compute-0 python3.9[50634]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2134641726-merged.mount: Deactivated successfully.
Oct 02 18:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1450373770-merged.mount: Deactivated successfully.
Oct 02 18:50:12 compute-0 podman[50635]: 2025-10-02 18:50:12.136800416 +0000 UTC m=+0.057096211 system refresh
Oct 02 18:50:12 compute-0 sudo[50632]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:12 compute-0 sudo[50796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iisycixojodnrulesikzeysxbidmefxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431012.3721282-79-138681497811469/AnsiballZ_stat.py'
Oct 02 18:50:12 compute-0 sudo[50796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:13 compute-0 python3.9[50798]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:50:13 compute-0 sudo[50796]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:13 compute-0 sudo[50919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxvddkzlrlmkhgoatftierqdmmgvrlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431012.3721282-79-138681497811469/AnsiballZ_copy.py'
Oct 02 18:50:13 compute-0 sudo[50919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:14 compute-0 python3.9[50921]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431012.3721282-79-138681497811469/.source.json follow=False _original_basename=podman_network_config.j2 checksum=833ef0547e56fde7e322eb7acc66d31713971956 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:14 compute-0 sudo[50919]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:14 compute-0 sudo[51071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckjvgerlynoetglgkdgaxitwxfdcfmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431014.3160803-94-25428104599610/AnsiballZ_stat.py'
Oct 02 18:50:14 compute-0 sudo[51071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:14 compute-0 python3.9[51073]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:14 compute-0 sudo[51071]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:15 compute-0 sudo[51194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvzoerixkrshrwyphzyllkjwqhenhroe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431014.3160803-94-25428104599610/AnsiballZ_copy.py'
Oct 02 18:50:15 compute-0 sudo[51194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:15 compute-0 python3.9[51196]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431014.3160803-94-25428104599610/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f27f86218e398aa50b444b0bf8b9e443f3d2c120 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:15 compute-0 sudo[51194]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:16 compute-0 sudo[51346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcbpdnuylvllvxbgwvvlvfwonyllhhhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431016.0064378-110-38482874501589/AnsiballZ_ini_file.py'
Oct 02 18:50:16 compute-0 sudo[51346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:16 compute-0 python3.9[51348]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:16 compute-0 sudo[51346]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:17 compute-0 sudo[51498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdcutdygjoqlbvsfdcpodoidmwispuot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431016.9819522-110-131299028611870/AnsiballZ_ini_file.py'
Oct 02 18:50:17 compute-0 sudo[51498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:17 compute-0 python3.9[51500]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:17 compute-0 sudo[51498]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:18 compute-0 sudo[51650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawaxlrgeypjlxpgoreuxtukmqgxmbbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431017.7543967-110-17672071274527/AnsiballZ_ini_file.py'
Oct 02 18:50:18 compute-0 sudo[51650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:18 compute-0 python3.9[51652]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:18 compute-0 sudo[51650]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:19 compute-0 sudo[51802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgrbvfammpzkqsgglmjhrtpgzskxtzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431018.6813276-110-249414689274151/AnsiballZ_ini_file.py'
Oct 02 18:50:19 compute-0 sudo[51802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:19 compute-0 python3.9[51804]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:19 compute-0 sudo[51802]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:19 compute-0 sudo[51954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewftjkzjkykubvrpxxaywngiajxkodsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431019.5909135-141-41260973605430/AnsiballZ_dnf.py'
Oct 02 18:50:20 compute-0 sudo[51954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:20 compute-0 python3.9[51956]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:50:21 compute-0 sudo[51954]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:22 compute-0 sudo[52107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhhykagxgdpafyeonyvupfnelohgzlqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431021.8162024-152-164376950275116/AnsiballZ_setup.py'
Oct 02 18:50:22 compute-0 sudo[52107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:22 compute-0 python3.9[52109]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:50:22 compute-0 sudo[52107]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:23 compute-0 sudo[52261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekzmpjldpdeovvbzryqyhmjhqervvexq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431022.7441316-160-263328634404880/AnsiballZ_stat.py'
Oct 02 18:50:23 compute-0 sudo[52261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:23 compute-0 python3.9[52263]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:50:23 compute-0 sudo[52261]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:24 compute-0 sudo[52413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rizpqpxxlcselwbuauvarauralpbestm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431023.6639156-169-278372386194660/AnsiballZ_stat.py'
Oct 02 18:50:24 compute-0 sudo[52413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:24 compute-0 python3.9[52415]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:50:24 compute-0 sudo[52413]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:25 compute-0 sudo[52565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swgwowduvmenwzcqqtemjbaggfojngfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431024.574476-179-147665562308397/AnsiballZ_service_facts.py'
Oct 02 18:50:25 compute-0 sudo[52565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:25 compute-0 python3.9[52567]: ansible-service_facts Invoked
Oct 02 18:50:25 compute-0 network[52584]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:50:25 compute-0 network[52585]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:50:25 compute-0 network[52586]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:50:29 compute-0 sudo[52565]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:30 compute-0 sudo[52871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aclhaobmgyctuasawwbyxzaaxxxmguwh ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759431030.2084532-192-85228279193483/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759431030.2084532-192-85228279193483/args'
Oct 02 18:50:30 compute-0 sudo[52871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:30 compute-0 sudo[52871]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:31 compute-0 sudo[53038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjacniwboaifajaoayqbujdavekydyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431031.0815406-203-39865124424850/AnsiballZ_dnf.py'
Oct 02 18:50:31 compute-0 sudo[53038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:31 compute-0 python3.9[53040]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:50:32 compute-0 sudo[53038]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:34 compute-0 sudo[53191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trtxzqdoubnfrafohvdevduwpyrlfawg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431033.3668134-216-189387304025914/AnsiballZ_package_facts.py'
Oct 02 18:50:34 compute-0 sudo[53191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:34 compute-0 python3.9[53193]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 18:50:34 compute-0 sudo[53191]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:35 compute-0 sudo[53343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baghxujphcoslshxukxytddygvmbdncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431035.2458286-226-69311504434049/AnsiballZ_stat.py'
Oct 02 18:50:35 compute-0 sudo[53343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:35 compute-0 python3.9[53345]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:36 compute-0 sudo[53343]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:36 compute-0 sudo[53468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hocwhthcuvshlksgwsffjqusyaysvxjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431035.2458286-226-69311504434049/AnsiballZ_copy.py'
Oct 02 18:50:36 compute-0 sudo[53468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:36 compute-0 python3.9[53470]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431035.2458286-226-69311504434049/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:36 compute-0 sudo[53468]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:37 compute-0 sudo[53622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalfnyrfskgtkylhbkytijfqudccidgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431037.0057962-241-11705926884807/AnsiballZ_stat.py'
Oct 02 18:50:37 compute-0 sudo[53622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:37 compute-0 python3.9[53624]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:37 compute-0 sudo[53622]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:38 compute-0 sudo[53747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjatawgrzyagnybbxcpuvgbewszgrfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431037.0057962-241-11705926884807/AnsiballZ_copy.py'
Oct 02 18:50:38 compute-0 sudo[53747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:38 compute-0 python3.9[53749]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431037.0057962-241-11705926884807/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:38 compute-0 sudo[53747]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:39 compute-0 sudo[53901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftbfrveqrhscgoussilzghpukeuwscll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431039.1106586-262-194734782869790/AnsiballZ_lineinfile.py'
Oct 02 18:50:39 compute-0 sudo[53901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:39 compute-0 python3.9[53903]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:39 compute-0 sudo[53901]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:40 compute-0 sudo[54055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meyblczdjoqtdpzludygqarizbjhaotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431040.5101776-277-249251351196881/AnsiballZ_setup.py'
Oct 02 18:50:40 compute-0 sudo[54055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:41 compute-0 python3.9[54057]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:50:41 compute-0 sudo[54055]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:42 compute-0 sudo[54139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcfwqtggsdbtplbkpthahdqvxubztbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431040.5101776-277-249251351196881/AnsiballZ_systemd.py'
Oct 02 18:50:42 compute-0 sudo[54139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:42 compute-0 python3.9[54141]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:50:42 compute-0 sudo[54139]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:43 compute-0 sudo[54293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzwknhjfeofsitwngtyvydzatuvyvfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431042.9515126-293-107357378275278/AnsiballZ_setup.py'
Oct 02 18:50:43 compute-0 sudo[54293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:43 compute-0 python3.9[54295]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:50:44 compute-0 sudo[54293]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:44 compute-0 sudo[54377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wccyfqtxkkptoinxgcyhtopwuibxqqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431042.9515126-293-107357378275278/AnsiballZ_systemd.py'
Oct 02 18:50:44 compute-0 sudo[54377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:44 compute-0 python3.9[54379]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:50:44 compute-0 chronyd[804]: chronyd exiting
Oct 02 18:50:44 compute-0 systemd[1]: Stopping NTP client/server...
Oct 02 18:50:44 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 02 18:50:44 compute-0 systemd[1]: Stopped NTP client/server.
Oct 02 18:50:44 compute-0 systemd[1]: Starting NTP client/server...
Oct 02 18:50:44 compute-0 chronyd[54387]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 18:50:44 compute-0 chronyd[54387]: Frequency -26.414 +/- 0.131 ppm read from /var/lib/chrony/drift
Oct 02 18:50:44 compute-0 chronyd[54387]: Loaded seccomp filter (level 2)
Oct 02 18:50:44 compute-0 systemd[1]: Started NTP client/server.
Oct 02 18:50:44 compute-0 sudo[54377]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:45 compute-0 sshd-session[49588]: Connection closed by 192.168.122.30 port 42390
Oct 02 18:50:45 compute-0 sshd-session[49585]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:50:45 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 02 18:50:45 compute-0 systemd[1]: session-11.scope: Consumed 31.757s CPU time.
Oct 02 18:50:45 compute-0 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Oct 02 18:50:45 compute-0 systemd-logind[793]: Removed session 11.
Oct 02 18:50:51 compute-0 sshd-session[54413]: Accepted publickey for zuul from 192.168.122.30 port 52464 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:50:51 compute-0 systemd-logind[793]: New session 12 of user zuul.
Oct 02 18:50:51 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 02 18:50:51 compute-0 sshd-session[54413]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:50:52 compute-0 sudo[54566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvznkhzozbqibhfemkingewdqanpjmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431051.493154-22-233523605751052/AnsiballZ_file.py'
Oct 02 18:50:52 compute-0 sudo[54566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:52 compute-0 python3.9[54568]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:52 compute-0 sudo[54566]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:53 compute-0 sudo[54718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnxbfqvygjhgsfnnrrsdgsqigoiduqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431052.5934448-34-233907068205386/AnsiballZ_stat.py'
Oct 02 18:50:53 compute-0 sudo[54718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:53 compute-0 python3.9[54720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:53 compute-0 sudo[54718]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:54 compute-0 sudo[54841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chynldvckfpubyivevuklnmfbqypqwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431052.5934448-34-233907068205386/AnsiballZ_copy.py'
Oct 02 18:50:54 compute-0 sudo[54841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:54 compute-0 python3.9[54843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431052.5934448-34-233907068205386/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:54 compute-0 sudo[54841]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:54 compute-0 sshd-session[54416]: Connection closed by 192.168.122.30 port 52464
Oct 02 18:50:54 compute-0 sshd-session[54413]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:50:54 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 02 18:50:54 compute-0 systemd[1]: session-12.scope: Consumed 2.234s CPU time.
Oct 02 18:50:54 compute-0 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Oct 02 18:50:54 compute-0 systemd-logind[793]: Removed session 12.
Oct 02 18:51:00 compute-0 sshd-session[54868]: Accepted publickey for zuul from 192.168.122.30 port 32964 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:51:00 compute-0 systemd-logind[793]: New session 13 of user zuul.
Oct 02 18:51:00 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 02 18:51:00 compute-0 sshd-session[54868]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:51:01 compute-0 python3.9[55021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:51:02 compute-0 sudo[55175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqgawgjvrbmzwjcagrgdghhsghjaszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431062.1206205-33-76913230536115/AnsiballZ_file.py'
Oct 02 18:51:02 compute-0 sudo[55175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:02 compute-0 python3.9[55177]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:02 compute-0 sudo[55175]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:03 compute-0 sudo[55350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgocgodrweqhqzavtutnsqfqhzocatxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431063.1822238-41-145598707021996/AnsiballZ_stat.py'
Oct 02 18:51:03 compute-0 sudo[55350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:03 compute-0 python3.9[55352]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:03 compute-0 sudo[55350]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:04 compute-0 sudo[55473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhoooykaeeyrybpdfmrpngkipsyupmfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431063.1822238-41-145598707021996/AnsiballZ_copy.py'
Oct 02 18:51:04 compute-0 sudo[55473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:04 compute-0 python3.9[55475]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759431063.1822238-41-145598707021996/.source.json _original_basename=.9ot6d_k6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:04 compute-0 sudo[55473]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:05 compute-0 sudo[55625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niithzlbbejvddecoimhuygveytgjojf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431065.2528603-64-257726986591363/AnsiballZ_stat.py'
Oct 02 18:51:05 compute-0 sudo[55625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:06 compute-0 python3.9[55627]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:06 compute-0 sudo[55625]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:06 compute-0 sudo[55748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evcbtfzgjtgfdaudyezciphrvofzcmwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431065.2528603-64-257726986591363/AnsiballZ_copy.py'
Oct 02 18:51:06 compute-0 sudo[55748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:06 compute-0 python3.9[55750]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431065.2528603-64-257726986591363/.source _original_basename=.45f9o6tr follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:06 compute-0 sudo[55748]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:07 compute-0 sudo[55900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rurypykvwqnvjbkmavmthrpwgolftvce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431066.891834-80-106232861918050/AnsiballZ_file.py'
Oct 02 18:51:07 compute-0 sudo[55900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:07 compute-0 python3.9[55902]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:07 compute-0 sudo[55900]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:08 compute-0 sudo[56052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzttvzjkhswulilszhsybclhpltgdcnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431067.773798-88-145144717633757/AnsiballZ_stat.py'
Oct 02 18:51:08 compute-0 sudo[56052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:08 compute-0 python3.9[56054]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:08 compute-0 sudo[56052]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:08 compute-0 sudo[56175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txcnqvtpontsfjrojquuenhjjfdxrspt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431067.773798-88-145144717633757/AnsiballZ_copy.py'
Oct 02 18:51:08 compute-0 sudo[56175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:09 compute-0 python3.9[56177]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431067.773798-88-145144717633757/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:09 compute-0 sudo[56175]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:09 compute-0 sudo[56327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svgzwqoqbhqwqvrdipcktlugtooboqwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431069.3765855-88-180221851323441/AnsiballZ_stat.py'
Oct 02 18:51:09 compute-0 sudo[56327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:09 compute-0 python3.9[56329]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:09 compute-0 sudo[56327]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:10 compute-0 sudo[56450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiufwgncgdbjdmkdjuiyiuoiwffflajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431069.3765855-88-180221851323441/AnsiballZ_copy.py'
Oct 02 18:51:10 compute-0 sudo[56450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:10 compute-0 python3.9[56452]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431069.3765855-88-180221851323441/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:10 compute-0 sudo[56450]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:11 compute-0 sudo[56602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hllngxrilnfqmrmnuwwiddcywtushlkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431070.7378547-117-135132389888622/AnsiballZ_file.py'
Oct 02 18:51:11 compute-0 sudo[56602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:11 compute-0 python3.9[56604]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:11 compute-0 sudo[56602]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:11 compute-0 sudo[56754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvkemxsltukribimbenbchodfjgzgaik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431071.4345925-125-30969705225543/AnsiballZ_stat.py'
Oct 02 18:51:11 compute-0 sudo[56754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:12 compute-0 python3.9[56756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:12 compute-0 sudo[56754]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:12 compute-0 sudo[56877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwukopsenglqbfisvatbznjauyvfqlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431071.4345925-125-30969705225543/AnsiballZ_copy.py'
Oct 02 18:51:12 compute-0 sudo[56877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:12 compute-0 python3.9[56879]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431071.4345925-125-30969705225543/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:12 compute-0 sudo[56877]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:13 compute-0 sudo[57029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfnbujmyueotwxyhjsddcxkjccdywglb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431072.9293547-140-181260707056142/AnsiballZ_stat.py'
Oct 02 18:51:13 compute-0 sudo[57029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:13 compute-0 python3.9[57031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:13 compute-0 sudo[57029]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:14 compute-0 sudo[57152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqcgjbtjcnqwenxzxuizdkedwklfjabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431072.9293547-140-181260707056142/AnsiballZ_copy.py'
Oct 02 18:51:14 compute-0 sudo[57152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:14 compute-0 python3.9[57154]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431072.9293547-140-181260707056142/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:14 compute-0 sudo[57152]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:15 compute-0 sudo[57304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwfpyljndpjllxuylvlyulmevlyewilr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431074.4591167-155-80597259366964/AnsiballZ_systemd.py'
Oct 02 18:51:15 compute-0 sudo[57304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:15 compute-0 python3.9[57306]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:15 compute-0 systemd[1]: Reloading.
Oct 02 18:51:15 compute-0 systemd-rc-local-generator[57331]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:15 compute-0 systemd-sysv-generator[57336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:15 compute-0 systemd[1]: Reloading.
Oct 02 18:51:15 compute-0 systemd-rc-local-generator[57372]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:15 compute-0 systemd-sysv-generator[57375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:16 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 02 18:51:16 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 02 18:51:16 compute-0 sudo[57304]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:16 compute-0 sudo[57532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtaiowtkqxvtusrihetgfzvhfgiftqkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431076.2927947-163-26200448964671/AnsiballZ_stat.py'
Oct 02 18:51:16 compute-0 sudo[57532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:16 compute-0 python3.9[57534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:16 compute-0 sudo[57532]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:17 compute-0 sudo[57655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgxjjwghxjrxxhipizaknjwkywqnbkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431076.2927947-163-26200448964671/AnsiballZ_copy.py'
Oct 02 18:51:17 compute-0 sudo[57655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:17 compute-0 python3.9[57657]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431076.2927947-163-26200448964671/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:17 compute-0 sudo[57655]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:18 compute-0 sudo[57807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmircylxhpduxjcbxfclybjifiaajpod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431077.783543-178-119455184552911/AnsiballZ_stat.py'
Oct 02 18:51:18 compute-0 sudo[57807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:18 compute-0 python3.9[57809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:18 compute-0 sudo[57807]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:18 compute-0 sudo[57930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoxgzprnjhhbntmxnrdgryakgqsjexat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431077.783543-178-119455184552911/AnsiballZ_copy.py'
Oct 02 18:51:18 compute-0 sudo[57930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:19 compute-0 python3.9[57932]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431077.783543-178-119455184552911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:19 compute-0 sudo[57930]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:19 compute-0 sudo[58082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eownovkfrewhqoozlhfdobtjxrjhwadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431079.2697382-193-93505445170289/AnsiballZ_systemd.py'
Oct 02 18:51:19 compute-0 sudo[58082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:19 compute-0 python3.9[58084]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:19 compute-0 systemd[1]: Reloading.
Oct 02 18:51:20 compute-0 systemd-rc-local-generator[58111]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:20 compute-0 systemd-sysv-generator[58116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:20 compute-0 systemd[1]: Reloading.
Oct 02 18:51:20 compute-0 systemd-sysv-generator[58149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:20 compute-0 systemd-rc-local-generator[58146]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:20 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 18:51:20 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 18:51:20 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 18:51:20 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 18:51:20 compute-0 sudo[58082]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:21 compute-0 python3.9[58310]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:51:21 compute-0 network[58327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:51:21 compute-0 network[58328]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:51:21 compute-0 network[58329]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:51:25 compute-0 sudo[58591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nurdrrcnllqgrsnxofnxtfmwswoevvgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431085.1573582-209-258501677694735/AnsiballZ_systemd.py'
Oct 02 18:51:25 compute-0 sudo[58591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:25 compute-0 python3.9[58593]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:25 compute-0 systemd[1]: Reloading.
Oct 02 18:51:26 compute-0 systemd-rc-local-generator[58623]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:26 compute-0 systemd-sysv-generator[58626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:26 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 02 18:51:26 compute-0 iptables.init[58633]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 02 18:51:26 compute-0 iptables.init[58633]: iptables: Flushing firewall rules: [  OK  ]
Oct 02 18:51:26 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 02 18:51:26 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 02 18:51:26 compute-0 sudo[58591]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:27 compute-0 sudo[58827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mghprvcufgijkmmnzdvvgkatahxphrky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431086.9083023-209-91431675088372/AnsiballZ_systemd.py'
Oct 02 18:51:27 compute-0 sudo[58827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:27 compute-0 python3.9[58829]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:27 compute-0 sudo[58827]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:28 compute-0 sudo[58981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwbyaoxttveftiuyswksoancibjnfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431087.9904628-225-18624039533646/AnsiballZ_systemd.py'
Oct 02 18:51:28 compute-0 sudo[58981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:28 compute-0 python3.9[58983]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:28 compute-0 systemd[1]: Reloading.
Oct 02 18:51:28 compute-0 systemd-sysv-generator[59016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:51:28 compute-0 systemd-rc-local-generator[59013]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:29 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 02 18:51:29 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 02 18:51:29 compute-0 sudo[58981]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:29 compute-0 sudo[59173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdbrnhciqssnlnvtirshticaifwvigpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431089.4171448-233-114212899365454/AnsiballZ_command.py'
Oct 02 18:51:29 compute-0 sudo[59173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:30 compute-0 python3.9[59175]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:30 compute-0 sudo[59173]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:31 compute-0 sudo[59326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njgutqmibhgqgvaxjbdkayjsohqqzbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431090.5903735-247-255343870628716/AnsiballZ_stat.py'
Oct 02 18:51:31 compute-0 sudo[59326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:31 compute-0 python3.9[59328]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:31 compute-0 sudo[59326]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:31 compute-0 sudo[59451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aapeyttlolstorposnjlwgosxbmrhzxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431090.5903735-247-255343870628716/AnsiballZ_copy.py'
Oct 02 18:51:31 compute-0 sudo[59451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:31 compute-0 python3.9[59453]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431090.5903735-247-255343870628716/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:32 compute-0 sudo[59451]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:32 compute-0 python3.9[59604]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:51:32 compute-0 polkitd[6325]: Registered Authentication Agent for unix-process:59606:226458 (system bus name :1.522 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 18:51:58 compute-0 polkitd[6325]: Unregistered Authentication Agent for unix-process:59606:226458 (system bus name :1.522, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 18:51:58 compute-0 polkit-agent-helper-1[59618]: pam_unix(polkit-1:auth): conversation failed
Oct 02 18:51:58 compute-0 polkit-agent-helper-1[59618]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 02 18:51:58 compute-0 polkitd[6325]: Operator of unix-process:59606:226458 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.521 [<unknown>] (owned by unix-user:zuul)
Oct 02 18:51:58 compute-0 sshd-session[54871]: Connection closed by 192.168.122.30 port 32964
Oct 02 18:51:58 compute-0 sshd-session[54868]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:51:58 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 02 18:51:58 compute-0 systemd[1]: session-13.scope: Consumed 24.463s CPU time.
Oct 02 18:51:58 compute-0 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Oct 02 18:51:58 compute-0 systemd-logind[793]: Removed session 13.
Oct 02 18:52:10 compute-0 sshd-session[59644]: Accepted publickey for zuul from 192.168.122.30 port 55146 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:52:10 compute-0 systemd-logind[793]: New session 14 of user zuul.
Oct 02 18:52:10 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 02 18:52:11 compute-0 sshd-session[59644]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:52:12 compute-0 python3.9[59797]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:52:13 compute-0 sudo[59951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvnkyxsbgqzpyxvzsaoffcqovukwsog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431132.545336-33-258526199973287/AnsiballZ_file.py'
Oct 02 18:52:13 compute-0 sudo[59951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:13 compute-0 python3.9[59953]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:13 compute-0 sudo[59951]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:14 compute-0 sudo[60126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiddbddiabwuwsupylduprwxsxukhdvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431133.6624525-41-130522047801835/AnsiballZ_stat.py'
Oct 02 18:52:14 compute-0 sudo[60126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:14 compute-0 python3.9[60128]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:14 compute-0 sudo[60126]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:14 compute-0 sudo[60204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdvcndkhmubxhjrgxrcmwzgfshdsraci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431133.6624525-41-130522047801835/AnsiballZ_file.py'
Oct 02 18:52:14 compute-0 sudo[60204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:15 compute-0 python3.9[60206]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.z5qweyrx recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:15 compute-0 sudo[60204]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:16 compute-0 sudo[60356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-korgjlvkbkdohcmtkteyenqkthkekinx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431135.7764885-61-226340677503684/AnsiballZ_stat.py'
Oct 02 18:52:16 compute-0 sudo[60356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:16 compute-0 python3.9[60358]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:16 compute-0 sudo[60356]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:16 compute-0 sudo[60434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oecrfyylhcgmglyqeudoqnfoksdmktft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431135.7764885-61-226340677503684/AnsiballZ_file.py'
Oct 02 18:52:16 compute-0 sudo[60434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:16 compute-0 python3.9[60436]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.q7s9drnz recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:16 compute-0 sudo[60434]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:17 compute-0 sudo[60586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prvdtnjijndcahfugdeadaufevemnqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431137.2092035-74-272796022109458/AnsiballZ_file.py'
Oct 02 18:52:17 compute-0 sudo[60586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:17 compute-0 python3.9[60588]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:52:17 compute-0 sudo[60586]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:18 compute-0 sudo[60738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcopaijbsrbtuxogpkvlzkrbpllfqgup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431138.0117576-82-124077514764708/AnsiballZ_stat.py'
Oct 02 18:52:18 compute-0 sudo[60738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:18 compute-0 python3.9[60740]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:18 compute-0 sudo[60738]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:19 compute-0 sudo[60816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpvpunwrkniuydsecwhtxftirwyrxvil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431138.0117576-82-124077514764708/AnsiballZ_file.py'
Oct 02 18:52:19 compute-0 sudo[60816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:19 compute-0 python3.9[60818]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:52:19 compute-0 sudo[60816]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:19 compute-0 sudo[60968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eciukkvhwqrjhoxfthmijfiecmmnxsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431139.5077932-82-163670273758657/AnsiballZ_stat.py'
Oct 02 18:52:19 compute-0 sudo[60968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:20 compute-0 python3.9[60970]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:20 compute-0 sudo[60968]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:20 compute-0 sudo[61046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayudvghucbpgnmorjglrxiwkzfarfoxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431139.5077932-82-163670273758657/AnsiballZ_file.py'
Oct 02 18:52:20 compute-0 sudo[61046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:20 compute-0 python3.9[61048]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:52:20 compute-0 sudo[61046]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:21 compute-0 sudo[61198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzaydhnnmlwwvkehnmiuibxdhzldrcwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431140.8297598-105-177926053553935/AnsiballZ_file.py'
Oct 02 18:52:21 compute-0 sudo[61198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:21 compute-0 python3.9[61200]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:21 compute-0 sudo[61198]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:21 compute-0 sudo[61350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzuftsfoaenpvaqixzpjalyksghlsmok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431141.606567-113-50701431230285/AnsiballZ_stat.py'
Oct 02 18:52:21 compute-0 sudo[61350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:22 compute-0 python3.9[61352]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:22 compute-0 sudo[61350]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:22 compute-0 sudo[61428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auykaluqxqllaudoxhqzpatqejokzfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431141.606567-113-50701431230285/AnsiballZ_file.py'
Oct 02 18:52:22 compute-0 sudo[61428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:22 compute-0 python3.9[61430]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:22 compute-0 sudo[61428]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:23 compute-0 sudo[61580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beufoosvfguqmsetnlznmhzlvtyblrfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431142.9232676-125-103405957254/AnsiballZ_stat.py'
Oct 02 18:52:23 compute-0 sudo[61580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:23 compute-0 python3.9[61582]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:23 compute-0 sudo[61580]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:23 compute-0 sudo[61658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njapbntrxpcbjwntcfoglmthdshluwjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431142.9232676-125-103405957254/AnsiballZ_file.py'
Oct 02 18:52:23 compute-0 sudo[61658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:24 compute-0 python3.9[61660]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:24 compute-0 sudo[61658]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:25 compute-0 sudo[61810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfwfwhvztecfyffojkadmuevnzpukbza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431144.32153-137-4942413565236/AnsiballZ_systemd.py'
Oct 02 18:52:25 compute-0 sudo[61810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:25 compute-0 python3.9[61812]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:52:25 compute-0 systemd[1]: Reloading.
Oct 02 18:52:25 compute-0 systemd-rc-local-generator[61840]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:52:25 compute-0 systemd-sysv-generator[61844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:52:25 compute-0 sudo[61810]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:26 compute-0 sudo[61999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddkgioefwvypfxuskyfprepaaibwizky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431145.965405-145-170448385274343/AnsiballZ_stat.py'
Oct 02 18:52:26 compute-0 sudo[61999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:26 compute-0 python3.9[62001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:26 compute-0 sudo[61999]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:26 compute-0 sudo[62077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyjxlouwxebxamnbkorfoalonuzsxes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431145.965405-145-170448385274343/AnsiballZ_file.py'
Oct 02 18:52:26 compute-0 sudo[62077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:27 compute-0 python3.9[62079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:27 compute-0 sudo[62077]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:27 compute-0 sudo[62229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csoxbptlfamckprczknghindhuqwclpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431147.4177577-157-49768909539485/AnsiballZ_stat.py'
Oct 02 18:52:27 compute-0 sudo[62229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:28 compute-0 python3.9[62231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:28 compute-0 sudo[62229]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:28 compute-0 sudo[62307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waaqjztjhlmiyzmoxtpszxmfowltghef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431147.4177577-157-49768909539485/AnsiballZ_file.py'
Oct 02 18:52:28 compute-0 sudo[62307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:28 compute-0 python3.9[62309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:28 compute-0 sudo[62307]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:29 compute-0 sudo[62459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxujwepkboytgeixsrvnboosrgkzxrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431148.811475-169-9648607106504/AnsiballZ_systemd.py'
Oct 02 18:52:29 compute-0 sudo[62459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:29 compute-0 python3.9[62461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:52:29 compute-0 systemd[1]: Reloading.
Oct 02 18:52:29 compute-0 systemd-rc-local-generator[62491]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:52:29 compute-0 systemd-sysv-generator[62494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:52:29 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 18:52:29 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 18:52:29 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 18:52:29 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 18:52:29 compute-0 sudo[62459]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:31 compute-0 python3.9[62654]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:52:31 compute-0 network[62671]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:52:31 compute-0 network[62672]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:52:31 compute-0 network[62673]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:52:35 compute-0 sudo[62934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aswufwevtguledsaywspfmxokpvgbazl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431155.4584022-195-190151593566543/AnsiballZ_stat.py'
Oct 02 18:52:35 compute-0 sudo[62934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:36 compute-0 python3.9[62936]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:36 compute-0 sudo[62934]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:36 compute-0 sudo[63012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdandwzvtplfqujhzayepvtfmbkigtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431155.4584022-195-190151593566543/AnsiballZ_file.py'
Oct 02 18:52:36 compute-0 sudo[63012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:36 compute-0 python3.9[63014]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:36 compute-0 sudo[63012]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:37 compute-0 sudo[63164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolnehtbolhkvxabiotklpsgzqpjlpcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431156.9127598-208-118171244480248/AnsiballZ_file.py'
Oct 02 18:52:37 compute-0 sudo[63164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:37 compute-0 python3.9[63166]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:37 compute-0 sudo[63164]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:38 compute-0 sudo[63316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzuqkbcdhhswowrpiiwxrijzprmovive ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431157.774505-216-60528262319291/AnsiballZ_stat.py'
Oct 02 18:52:38 compute-0 sudo[63316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:38 compute-0 python3.9[63318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:38 compute-0 sudo[63316]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:39 compute-0 sudo[63439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myxigzoedvpmuolasophgyefewuebfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431157.774505-216-60528262319291/AnsiballZ_copy.py'
Oct 02 18:52:39 compute-0 sudo[63439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:39 compute-0 python3.9[63441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431157.774505-216-60528262319291/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:39 compute-0 sudo[63439]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:40 compute-0 sudo[63591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmwleueqyiwmbampyxxwmoywfmaxwpuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431159.5094686-234-18173671086726/AnsiballZ_timezone.py'
Oct 02 18:52:40 compute-0 sudo[63591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:40 compute-0 python3.9[63593]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 18:52:40 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 18:52:40 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 18:52:40 compute-0 sudo[63591]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:41 compute-0 sudo[63747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krjzkddhojwvddqisgerbzdojrmhtkcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431160.8464732-243-42554089877312/AnsiballZ_file.py'
Oct 02 18:52:41 compute-0 sudo[63747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:41 compute-0 python3.9[63749]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:41 compute-0 sudo[63747]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:42 compute-0 sudo[63899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unfjskblonmdzjqyubejiyocgyvjoyso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431161.7336972-251-31377321468631/AnsiballZ_stat.py'
Oct 02 18:52:42 compute-0 sudo[63899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:42 compute-0 python3.9[63901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:42 compute-0 sudo[63899]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:42 compute-0 sudo[64022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nulviiywfcvjugwsfeqhmzxfudmfiuku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431161.7336972-251-31377321468631/AnsiballZ_copy.py'
Oct 02 18:52:42 compute-0 sudo[64022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:43 compute-0 python3.9[64024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431161.7336972-251-31377321468631/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:43 compute-0 sudo[64022]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:43 compute-0 sudo[64174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-panxlzkwovfgltmahelxrtuituiydwfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431163.3771715-266-156374580740214/AnsiballZ_stat.py'
Oct 02 18:52:43 compute-0 sudo[64174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:43 compute-0 python3.9[64176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:43 compute-0 sudo[64174]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:44 compute-0 sudo[64297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqsijdqcdgsfvwbrfoxyjwghhzmlprsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431163.3771715-266-156374580740214/AnsiballZ_copy.py'
Oct 02 18:52:44 compute-0 sudo[64297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:44 compute-0 python3.9[64299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431163.3771715-266-156374580740214/.source.yaml _original_basename=.2x7bqd8r follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:44 compute-0 sudo[64297]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:45 compute-0 sudo[64449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujuyopuntlvgpgerqkbnfjrrvdormze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431164.9206188-281-110821293288088/AnsiballZ_stat.py'
Oct 02 18:52:45 compute-0 sudo[64449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:45 compute-0 python3.9[64451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:45 compute-0 sudo[64449]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:46 compute-0 sudo[64572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xndxzhgbtemmhkvghczhiccgiynrdfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431164.9206188-281-110821293288088/AnsiballZ_copy.py'
Oct 02 18:52:46 compute-0 sudo[64572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:46 compute-0 python3.9[64574]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431164.9206188-281-110821293288088/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:46 compute-0 sudo[64572]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:47 compute-0 sudo[64724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twojoeoauqhafurwkjenefoayeqepteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431166.523232-296-18916423771208/AnsiballZ_command.py'
Oct 02 18:52:47 compute-0 sudo[64724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:47 compute-0 python3.9[64726]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:52:47 compute-0 sudo[64724]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:47 compute-0 sudo[64877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuoqnofcygmikimjfvaebjksvkmbcmkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431167.5705597-304-242338020772058/AnsiballZ_command.py'
Oct 02 18:52:47 compute-0 sudo[64877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:48 compute-0 python3.9[64879]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:52:48 compute-0 sudo[64877]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:48 compute-0 sudo[65030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-salevgilqilbzcwbdswbcuugbprwwnzc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431168.4327648-312-112461675985582/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 18:52:48 compute-0 sudo[65030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:49 compute-0 python3[65032]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 18:52:49 compute-0 sudo[65030]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:49 compute-0 sudo[65182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyfeqcgwalubwhgorucxfoobpoqnlyhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431169.402989-320-264956795344895/AnsiballZ_stat.py'
Oct 02 18:52:49 compute-0 sudo[65182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:50 compute-0 python3.9[65184]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:50 compute-0 sudo[65182]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:50 compute-0 sudo[65305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykctssbtpobpdxweqltxtkwsbuepujmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431169.402989-320-264956795344895/AnsiballZ_copy.py'
Oct 02 18:52:50 compute-0 sudo[65305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:50 compute-0 python3.9[65307]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431169.402989-320-264956795344895/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:50 compute-0 sudo[65305]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:51 compute-0 sudo[65457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvuhyxelnnpabiqceuszdqfzeietdfnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431170.9043536-335-273082731941638/AnsiballZ_stat.py'
Oct 02 18:52:51 compute-0 sudo[65457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:51 compute-0 python3.9[65459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:51 compute-0 sudo[65457]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:51 compute-0 sudo[65580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcsdanxjedsffimyuhueizwkmgbfucgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431170.9043536-335-273082731941638/AnsiballZ_copy.py'
Oct 02 18:52:51 compute-0 sudo[65580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:52 compute-0 python3.9[65582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431170.9043536-335-273082731941638/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:52 compute-0 sudo[65580]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:52 compute-0 sudo[65732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvouklbmhtiqjifmhdmdtmenpqczfec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431172.4038846-350-78647335076851/AnsiballZ_stat.py'
Oct 02 18:52:52 compute-0 sudo[65732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:53 compute-0 python3.9[65734]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:53 compute-0 sudo[65732]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:53 compute-0 sudo[65855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfibopehrdiwstbsnbptneoaqvehjyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431172.4038846-350-78647335076851/AnsiballZ_copy.py'
Oct 02 18:52:53 compute-0 sudo[65855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:53 compute-0 python3.9[65857]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431172.4038846-350-78647335076851/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:53 compute-0 sudo[65855]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:54 compute-0 sudo[66007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcukqsxlgpdvzsuhczpdkrsbzqxkeect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431174.1889126-365-29086413144024/AnsiballZ_stat.py'
Oct 02 18:52:54 compute-0 sudo[66007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:54 compute-0 python3.9[66009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:54 compute-0 sudo[66007]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:54 compute-0 chronyd[54387]: Selected source 207.210.46.249 (pool.ntp.org)
Oct 02 18:52:55 compute-0 sudo[66130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qauosblolyokhlqwlswfcdhxapwzvyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431174.1889126-365-29086413144024/AnsiballZ_copy.py'
Oct 02 18:52:55 compute-0 sudo[66130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:55 compute-0 python3.9[66132]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431174.1889126-365-29086413144024/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:55 compute-0 sudo[66130]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:56 compute-0 sudo[66282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrfvustcpstblkmbdxvmbgrpdfvrbec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431175.8068995-380-68432510205985/AnsiballZ_stat.py'
Oct 02 18:52:56 compute-0 sudo[66282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:56 compute-0 python3.9[66284]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:56 compute-0 sudo[66282]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:57 compute-0 sudo[66405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewpqrshfraadpjbsfjgvthvjohuqgnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431175.8068995-380-68432510205985/AnsiballZ_copy.py'
Oct 02 18:52:57 compute-0 sudo[66405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:57 compute-0 python3.9[66407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431175.8068995-380-68432510205985/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:57 compute-0 sudo[66405]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:57 compute-0 sudo[66557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahuigbakcynwrwvgeuxgplfyxdzgyqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431177.5050678-395-119225878243282/AnsiballZ_file.py'
Oct 02 18:52:57 compute-0 sudo[66557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:58 compute-0 python3.9[66559]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:58 compute-0 sudo[66557]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:58 compute-0 sudo[66709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oorxjfoiwznzovbhxmvjjgchqduupyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431178.352483-403-249693488590235/AnsiballZ_command.py'
Oct 02 18:52:58 compute-0 sudo[66709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:58 compute-0 python3.9[66711]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
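The command above concatenates the five rendered EDPM fragments (chains, flushes, rules, update-jumps, jumps) and pipes them through nft in check-only mode, so the assembled ruleset is parsed and validated without touching the live firewall. A minimal sketch of the same check run by hand:

    # Parse-check the assembled EDPM ruleset; -c means nothing is installed,
    # -f - reads the ruleset from stdin.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

With set -o pipefail, a non-zero exit from nft fails the task before any fragment is wired into the boot configuration below.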
Oct 02 18:52:58 compute-0 sudo[66709]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:59 compute-0 sudo[66868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmfdlbkagizbubbhlurpzpbwtnabpcfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431179.1251924-411-188285462332146/AnsiballZ_blockinfile.py'
Oct 02 18:52:59 compute-0 sudo[66868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:59 compute-0 python3.9[66870]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:59 compute-0 sudo[66868]: pam_unix(sudo:session): session closed for user root
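The blockinfile task above is what persists the ruleset across reboots: it writes include lines between its BEGIN/END markers into /etc/sysconfig/nftables.conf, the file the nftables.service unit loads at boot, and validates the candidate file with nft -c -f %s before swapping it in. Reconstructed from the logged parameters, the managed block looks like:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note that only chains, rules and jumps are included at boot; edpm-flushes.nft and edpm-update-jumps.nft are used solely for the live reload later in the run.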
Oct 02 18:53:00 compute-0 sudo[67021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhucypcsoniivlbqnckjtctughupwxrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431180.092399-420-119087182088101/AnsiballZ_file.py'
Oct 02 18:53:00 compute-0 sudo[67021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:00 compute-0 python3.9[67023]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:00 compute-0 sudo[67021]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:01 compute-0 sudo[67173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtoqxpariozfyjyigdcdcwmvlrkcqohw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431180.8650777-420-245398252665912/AnsiballZ_file.py'
Oct 02 18:53:01 compute-0 sudo[67173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:01 compute-0 python3.9[67175]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:01 compute-0 sudo[67173]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:02 compute-0 sudo[67325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgrauwvdxdlgisaufxrfohhcnewyfkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431181.6097388-435-225974309458599/AnsiballZ_mount.py'
Oct 02 18:53:02 compute-0 sudo[67325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:02 compute-0 python3.9[67327]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 18:53:02 compute-0 sudo[67325]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:02 compute-0 sudo[67478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkuazwltbjdkjsnwbguaafuhgbxjunmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431182.5081742-435-146553678873529/AnsiballZ_mount.py'
Oct 02 18:53:02 compute-0 sudo[67478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:03 compute-0 python3.9[67480]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 18:53:03 compute-0 sudo[67478]: pam_unix(sudo:session): session closed for user root
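The two file/mount pairs above create /dev/hugepages1G and /dev/hugepages2M (owner zuul, group hugetlbfs, mode 0775) and mount hugetlbfs on them with explicit page sizes; state=mounted with boot=True both mounts immediately and records the entries in /etc/fstab. The manual equivalent, as a sketch:

    mkdir -p /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # fstab entries written by the module (dump=0, passno=0):
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0

The mounts only expose pages the kernel has already reserved; 1G pages in particular normally have to be allocated at boot (hugepagesz=1G hugepages=N on the kernel command line).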
Oct 02 18:53:03 compute-0 sshd-session[59647]: Connection closed by 192.168.122.30 port 55146
Oct 02 18:53:03 compute-0 sshd-session[59644]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:53:03 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 02 18:53:03 compute-0 systemd[1]: session-14.scope: Consumed 40.267s CPU time.
Oct 02 18:53:03 compute-0 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Oct 02 18:53:03 compute-0 systemd-logind[793]: Removed session 14.
Oct 02 18:53:09 compute-0 sshd-session[67506]: Accepted publickey for zuul from 192.168.122.30 port 44042 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:53:09 compute-0 systemd-logind[793]: New session 15 of user zuul.
Oct 02 18:53:09 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 02 18:53:09 compute-0 sshd-session[67506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:53:10 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 18:53:10 compute-0 sudo[67661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrvkscicdfjrewlmofdxhztaphxflmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431189.889724-16-105400929157710/AnsiballZ_tempfile.py'
Oct 02 18:53:10 compute-0 sudo[67661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:10 compute-0 python3.9[67663]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 18:53:10 compute-0 sudo[67661]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:11 compute-0 sudo[67814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofwwsdxgfvapjujpexigmlfmsdshnlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431190.8685348-28-194358576558179/AnsiballZ_stat.py'
Oct 02 18:53:11 compute-0 sudo[67814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:11 compute-0 python3.9[67816]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:53:11 compute-0 sudo[67814]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:12 compute-0 sudo[67966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizfzktfbclcnehlhzfvaiwssvuekpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431191.8911061-38-117312431942573/AnsiballZ_setup.py'
Oct 02 18:53:12 compute-0 sudo[67966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:12 compute-0 python3.9[67968]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:53:12 compute-0 sudo[67966]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:13 compute-0 sudo[68118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-popqjairreixdpafkzicjvidpjuatliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431193.0812533-47-189435000242929/AnsiballZ_blockinfile.py'
Oct 02 18:53:13 compute-0 sudo[68118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:13 compute-0 python3.9[68120]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbvy1nmZlQ1mwB+8mXD1QVEPHj9WDqCT0xaUa0WXwPbTqC63n5C/4mCHmoqqXTwoEhHX7so7AlSpv5zZ7hPkQOsh2gCmla2/HhNjy/xA5JU+H4TM08v9CmvM5ymnbSuLlQxrYXJOAzVSvZV4eKucl4LDsV5CMlRMJjTim4/SvCrGpM09ZfwVaN0pzt0NY21deN4P7w4mt27M+xtoVorj/BupjoBo24TZzqPokPuZXFUigBfiHWqiEENVhU9baZbXWsxcToG6PgefXxjz0KPMd7Nuk7aP8paYmZwXQfEZgVe+m5ihzuwQw5rtVmj0XDfT/OT+kUBWhVInST0A96gtIN5d/7rsiWdiCFPqEu0sJEG3rMPkinVARq5Q4hV/I8dZ45vEYVV6KVipSJYx2eldcJrSpYH2LoC3XLdQoJBlWr5Mz50aFuI35bWbkZAbLcG9UJvIQDKZ8Z+UC/JnYyCHn3m2Zimlf93NaKxuB4cuROvZYifnCiCOr9xV1pyAguC6E=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK8dVLzi3MTJZ5eDxe5XdUxMonA7YKX5W9IYtbfghkzW
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2CDn2xLcobMglTrqlWQW+s0s7KVx/tuT7qoElt54b5qX7SDKjeu7ZNAyB2Kosqdgz51mquHrgoPZYMVp0nqB8=
                                             create=True mode=0644 path=/tmp/ansible.j_e8i6ih state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:13 compute-0 sudo[68118]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:14 compute-0 sudo[68270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcdshonqigrxejwpxeynztnlurjvrsdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431193.9584181-55-232880770556772/AnsiballZ_command.py'
Oct 02 18:53:14 compute-0 sudo[68270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:14 compute-0 python3.9[68272]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.j_e8i6ih' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:53:14 compute-0 sudo[68270]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:15 compute-0 sudo[68424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvknbsuaygtqazaaticdotdcaomvzwuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431195.1394908-63-227110789662284/AnsiballZ_file.py'
Oct 02 18:53:15 compute-0 sudo[68424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:15 compute-0 python3.9[68426]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.j_e8i6ih state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:15 compute-0 sudo[68424]: pam_unix(sudo:session): session closed for user root
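The sequence above (tempfile, setup with the ssh_host_key_*_public subsets, blockinfile, command, file state=absent) is a stage-and-swap refresh of /etc/ssh/ssh_known_hosts: the gathered host keys are assembled in a private temp file and only then copied over the live file. Condensed into shell, with the temp name matching the logged prefix=ansible.:

    # Stage the gathered host keys, then replace the system-wide known_hosts.
    tmp=$(mktemp /tmp/ansible.XXXXXX)
    # (blockinfile writes the ssh-rsa / ssh-ed25519 / ecdsa-sha2-nistp256 entries into $tmp)
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"

Using cat with a redirect instead of mv keeps the destination's existing inode, and with it the file's ownership and SELinux label.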
Oct 02 18:53:16 compute-0 sshd-session[67509]: Connection closed by 192.168.122.30 port 44042
Oct 02 18:53:16 compute-0 sshd-session[67506]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:53:16 compute-0 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Oct 02 18:53:16 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 02 18:53:16 compute-0 systemd[1]: session-15.scope: Consumed 4.061s CPU time.
Oct 02 18:53:16 compute-0 systemd-logind[793]: Removed session 15.
Oct 02 18:53:22 compute-0 sshd-session[68452]: Accepted publickey for zuul from 192.168.122.30 port 41454 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:53:22 compute-0 systemd-logind[793]: New session 16 of user zuul.
Oct 02 18:53:22 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 02 18:53:22 compute-0 sshd-session[68452]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:53:23 compute-0 python3.9[68605]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:53:24 compute-0 sudo[68759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wteqxkwvvovnvebcpgraiowaucvuqahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431203.836126-32-142117958922358/AnsiballZ_systemd.py'
Oct 02 18:53:24 compute-0 sudo[68759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:24 compute-0 python3.9[68761]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 18:53:24 compute-0 sudo[68759]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:25 compute-0 sudo[68913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kassniermonlfymfzumvsbofnmgmlclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431205.03086-40-258677070086393/AnsiballZ_systemd.py'
Oct 02 18:53:25 compute-0 sudo[68913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:25 compute-0 python3.9[68915]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:53:25 compute-0 sudo[68913]: pam_unix(sudo:session): session closed for user root
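The two systemd tasks above first mark sshd as enabled, then ensure it is running; they are split because the module handles enabled and state independently. The CLI equivalent collapses to one command:

    systemctl enable --now sshd   # enable for boot and start immediately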
Oct 02 18:53:26 compute-0 sudo[69066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmvpflmotomchfqdihtentlztfyaaeze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431206.0542903-49-136943776499246/AnsiballZ_command.py'
Oct 02 18:53:26 compute-0 sudo[69066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:26 compute-0 python3.9[69068]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:53:26 compute-0 sudo[69066]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:27 compute-0 sudo[69219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeekrwqbcdnqxkbgbirmyaedkdjujipo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431206.9822578-57-102610041350261/AnsiballZ_stat.py'
Oct 02 18:53:27 compute-0 sudo[69219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:27 compute-0 python3.9[69221]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:53:27 compute-0 sudo[69219]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:28 compute-0 sudo[69373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axosxrjegsgympusvijlsrtgbhgkxpbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431207.9007263-65-22135317856209/AnsiballZ_command.py'
Oct 02 18:53:28 compute-0 sudo[69373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:28 compute-0 python3.9[69375]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:53:28 compute-0 sudo[69373]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:29 compute-0 sudo[69528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdjcdcspvoqfoqmzzbluibopsnxkicv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431208.6545136-73-2665385036391/AnsiballZ_file.py'
Oct 02 18:53:29 compute-0 sudo[69528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:29 compute-0 python3.9[69530]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:29 compute-0 sudo[69528]: pam_unix(sudo:session): session closed for user root
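This block performs the live reload that the earlier check-only pass validated: the chain definitions are (re)declared unconditionally, and the flush + rules + update-jumps set is applied only because the edpm-rules.nft.changed marker (touched when the rules were rendered at 18:52:58) still exists; the marker is then removed so an unchanged rerun skips the reload. Condensed from the logged steps:

    nft -f /etc/nftables/edpm-chains.nft            # idempotent: declares tables/chains
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi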
Oct 02 18:53:29 compute-0 sshd-session[68455]: Connection closed by 192.168.122.30 port 41454
Oct 02 18:53:29 compute-0 sshd-session[68452]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:53:29 compute-0 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Oct 02 18:53:29 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 02 18:53:29 compute-0 systemd[1]: session-16.scope: Consumed 5.391s CPU time.
Oct 02 18:53:29 compute-0 systemd-logind[793]: Removed session 16.
Oct 02 18:53:35 compute-0 sshd-session[69555]: Accepted publickey for zuul from 192.168.122.30 port 41088 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:53:35 compute-0 systemd-logind[793]: New session 17 of user zuul.
Oct 02 18:53:35 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 02 18:53:35 compute-0 sshd-session[69555]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:53:36 compute-0 python3.9[69708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:53:37 compute-0 sudo[69862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kymuucvlskapekejnycfjuisvbypsltu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431217.100853-34-71617856418738/AnsiballZ_setup.py'
Oct 02 18:53:37 compute-0 sudo[69862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:37 compute-0 python3.9[69864]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:53:38 compute-0 sudo[69862]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:38 compute-0 sudo[69946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhermtnhlibampxornfxtfkduzrwkmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431217.100853-34-71617856418738/AnsiballZ_dnf.py'
Oct 02 18:53:38 compute-0 sudo[69946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:38 compute-0 python3.9[69948]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:53:40 compute-0 sudo[69946]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:40 compute-0 python3.9[70099]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:53:42 compute-0 python3.9[70250]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
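The pair above decides whether the node needs a reboot: needs-restarting -r (from the yum-utils package installed just before) exits 0 when no reboot is required and 1 when a core component such as the kernel or glibc was updated, while the find checks for flag files that other roles may drop into /var/lib/openstack/reboot_required/. A sketch of the combined test:

    if ! needs-restarting -r; then
        echo "reboot required: updated kernel/core libraries"
    fi
    # Any file in this directory is an explicit reboot flag:
    find /var/lib/openstack/reboot_required/ -type f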
Oct 02 18:53:43 compute-0 python3.9[70400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:53:43 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:53:44 compute-0 python3.9[70551]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:53:44 compute-0 sshd-session[69558]: Connection closed by 192.168.122.30 port 41088
Oct 02 18:53:44 compute-0 sshd-session[69555]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:53:44 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 02 18:53:44 compute-0 systemd[1]: session-17.scope: Consumed 6.818s CPU time.
Oct 02 18:53:44 compute-0 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Oct 02 18:53:44 compute-0 systemd-logind[793]: Removed session 17.
Oct 02 18:53:51 compute-0 sshd-session[70576]: Accepted publickey for zuul from 192.168.122.30 port 40746 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:53:51 compute-0 systemd-logind[793]: New session 18 of user zuul.
Oct 02 18:53:51 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 02 18:53:51 compute-0 sshd-session[70576]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:53:52 compute-0 python3.9[70729]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:53:54 compute-0 sudo[70883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfippuhwhqgnmfcuommjlsnhmcyrcrry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431233.6190436-50-64241410087746/AnsiballZ_file.py'
Oct 02 18:53:54 compute-0 sudo[70883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:54 compute-0 python3.9[70885]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:53:54 compute-0 sudo[70883]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:54 compute-0 sudo[71035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpdluhdafgwyoqxwiggxqixenbjsewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431234.640897-50-260631866111326/AnsiballZ_file.py'
Oct 02 18:53:54 compute-0 sudo[71035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:55 compute-0 python3.9[71037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:53:55 compute-0 sudo[71035]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:56 compute-0 sudo[71187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdgqydtrgbbpkmxgptdfrqhdtnsdaeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431235.4302666-65-119226372773766/AnsiballZ_stat.py'
Oct 02 18:53:56 compute-0 sudo[71187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:56 compute-0 python3.9[71189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:53:56 compute-0 sudo[71187]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:56 compute-0 sudo[71310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njsmofbrfmfiornijjfvwywanhusftnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431235.4302666-65-119226372773766/AnsiballZ_copy.py'
Oct 02 18:53:56 compute-0 sudo[71310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:57 compute-0 python3.9[71312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431235.4302666-65-119226372773766/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=de17158cdcc05a0c4ed9a1842b9c329569d2ac3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:57 compute-0 sudo[71310]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:57 compute-0 sudo[71462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvltavwnlqoqbylhbdvgilovramlvrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431237.2806056-65-154620276746655/AnsiballZ_stat.py'
Oct 02 18:53:57 compute-0 sudo[71462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:57 compute-0 python3.9[71464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:53:57 compute-0 sudo[71462]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:58 compute-0 sudo[71585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfhoqhvczsqrwjfechdcjzrsmfsqgsmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431237.2806056-65-154620276746655/AnsiballZ_copy.py'
Oct 02 18:53:58 compute-0 sudo[71585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:58 compute-0 python3.9[71587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431237.2806056-65-154620276746655/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=406a46f9ce1e820c24207ad445668953ff68a11c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:53:58 compute-0 sudo[71585]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:59 compute-0 sudo[71737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruztvrykiktawqsmueypnyyldbtmyifp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431238.7358518-65-118316412162563/AnsiballZ_stat.py'
Oct 02 18:53:59 compute-0 sudo[71737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:59 compute-0 python3.9[71739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:53:59 compute-0 sudo[71737]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:59 compute-0 sudo[71860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iditrebudeubkoxxxxxpnlmbymxpjbxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431238.7358518-65-118316412162563/AnsiballZ_copy.py'
Oct 02 18:53:59 compute-0 sudo[71860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:00 compute-0 python3.9[71862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431238.7358518-65-118316412162563/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=14cde2bf9b30e9b3a69055dfff363242c4da7798 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:00 compute-0 sudo[71860]: pam_unix(sudo:session): session closed for user root
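The telemetry certificate drop just completed follows the pattern repeated below for telemetry-power-monitoring, libvirt and ovn: a 0755 directory labelled container_file_t (so containers can read it through bind mounts), then tls.crt, ca.crt and tls.key installed root-owned with mode 0600. A quick consistency check on the deployed triple, as a sketch:

    d=/var/lib/openstack/certs/telemetry/default
    # The certificate must chain to the deployed CA:
    openssl verify -CAfile "$d/ca.crt" "$d/tls.crt"
    # Key and certificate must carry the same public key (hashes must match):
    openssl x509 -in "$d/tls.crt" -noout -pubkey | sha256sum
    openssl pkey -in "$d/tls.key" -pubout | sha256sum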
Oct 02 18:54:00 compute-0 sudo[72012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycyjnytpgtdhfsgzwvaesjaontwhpbhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431240.354694-109-72279574139258/AnsiballZ_file.py'
Oct 02 18:54:00 compute-0 sudo[72012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:00 compute-0 python3.9[72014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:00 compute-0 sudo[72012]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:01 compute-0 sudo[72164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzstwinqwxhkhzcuixeerljegdkevgtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431241.1145978-109-21647913058521/AnsiballZ_file.py'
Oct 02 18:54:01 compute-0 sudo[72164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:01 compute-0 python3.9[72166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:01 compute-0 sudo[72164]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:02 compute-0 sudo[72316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrnqnwlboxzlwiyelzqvgukprkghhivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431241.930996-124-151640409305429/AnsiballZ_stat.py'
Oct 02 18:54:02 compute-0 sudo[72316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:02 compute-0 python3.9[72318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:02 compute-0 sudo[72316]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:02 compute-0 sudo[72439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzuhxwfmxyutmkenbxgvnhykinawutfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431241.930996-124-151640409305429/AnsiballZ_copy.py'
Oct 02 18:54:02 compute-0 sudo[72439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:03 compute-0 python3.9[72441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431241.930996-124-151640409305429/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6f8584d465107895e9de0019fca1fe7351e5f05e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:03 compute-0 sudo[72439]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:03 compute-0 sudo[72591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avkkhrcdzviaiaumgzecfsiiwqtiijvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431243.3536358-124-81933009824773/AnsiballZ_stat.py'
Oct 02 18:54:03 compute-0 sudo[72591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:03 compute-0 python3.9[72593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:03 compute-0 sudo[72591]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:04 compute-0 sudo[72714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnysfpdrsrzlqeqxanzgxxytjwglupfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431243.3536358-124-81933009824773/AnsiballZ_copy.py'
Oct 02 18:54:04 compute-0 sudo[72714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:04 compute-0 python3.9[72716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431243.3536358-124-81933009824773/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=406a46f9ce1e820c24207ad445668953ff68a11c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:04 compute-0 sudo[72714]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:05 compute-0 sudo[72866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvflzwprlbebzybcqesixkrvzfzcrnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431244.6623006-124-98354218598262/AnsiballZ_stat.py'
Oct 02 18:54:05 compute-0 sudo[72866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:05 compute-0 python3.9[72868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:05 compute-0 sudo[72866]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:05 compute-0 sudo[72989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivgyshkmknlbovrdmavwtnqwglcpxsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431244.6623006-124-98354218598262/AnsiballZ_copy.py'
Oct 02 18:54:05 compute-0 sudo[72989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:05 compute-0 python3.9[72991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431244.6623006-124-98354218598262/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a508e35d3dfe193a4e9fa9ab42c59d402b79a918 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:05 compute-0 sudo[72989]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:06 compute-0 sudo[73141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilirzivogcowtopzesbgiyavhspktkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431246.2262864-168-65347982838160/AnsiballZ_file.py'
Oct 02 18:54:06 compute-0 sudo[73141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:06 compute-0 python3.9[73143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:06 compute-0 sudo[73141]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:07 compute-0 sudo[73293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqlhjcpyceabniqkzwmcwynjxhsxrrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431246.993129-168-140677856275829/AnsiballZ_file.py'
Oct 02 18:54:07 compute-0 sudo[73293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:07 compute-0 python3.9[73295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:07 compute-0 sudo[73293]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:08 compute-0 sudo[73445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpqesovpowyttulazufyxsnjzxytirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431247.8179514-183-227311142288104/AnsiballZ_stat.py'
Oct 02 18:54:08 compute-0 sudo[73445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:08 compute-0 python3.9[73447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:08 compute-0 sudo[73445]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:08 compute-0 sudo[73568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oooqhxzklpqqmeqjumkktfirfonfgxpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431247.8179514-183-227311142288104/AnsiballZ_copy.py'
Oct 02 18:54:08 compute-0 sudo[73568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:09 compute-0 python3.9[73570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431247.8179514-183-227311142288104/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f6429011679f621135ee974266e73a84f1ab29eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:09 compute-0 sudo[73568]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:09 compute-0 sudo[73720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veevigwyxtrxkozloxkrozwxvtcxezau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431249.274928-183-62205567344179/AnsiballZ_stat.py'
Oct 02 18:54:09 compute-0 sudo[73720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:09 compute-0 python3.9[73722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:09 compute-0 sudo[73720]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:10 compute-0 sudo[73843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wimfejiriqkyyffjaxlyeffiwlttwznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431249.274928-183-62205567344179/AnsiballZ_copy.py'
Oct 02 18:54:10 compute-0 sudo[73843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:10 compute-0 python3.9[73845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431249.274928-183-62205567344179/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=134f2435e78bf7335b2d053ebf9e3c260c85e8be backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:10 compute-0 sudo[73843]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:11 compute-0 sudo[73995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjuncjtscvnfywzpgebjfgsvudasobag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431250.6760573-183-210330357216485/AnsiballZ_stat.py'
Oct 02 18:54:11 compute-0 sudo[73995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:11 compute-0 python3.9[73997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:11 compute-0 sudo[73995]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:11 compute-0 sudo[74119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucwbrwgjboxlqbxtvohjvorljbpjtdgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431250.6760573-183-210330357216485/AnsiballZ_copy.py'
Oct 02 18:54:11 compute-0 sudo[74119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:11 compute-0 python3.9[74121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431250.6760573-183-210330357216485/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7297333d5950956ef0806f447935034ebc348d05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:11 compute-0 sudo[74119]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:12 compute-0 sudo[74271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdrjgjvuhmstjymetydkwpqisvxajgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431252.0759313-227-260516285469569/AnsiballZ_file.py'
Oct 02 18:54:12 compute-0 sudo[74271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:12 compute-0 python3.9[74273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:12 compute-0 sudo[74271]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:13 compute-0 sudo[74423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlyekcmeuzyyyouavcrqcdgfgntihhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431252.7658205-227-180486942288365/AnsiballZ_file.py'
Oct 02 18:54:13 compute-0 sudo[74423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:13 compute-0 python3.9[74425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:13 compute-0 sudo[74423]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:13 compute-0 sudo[74575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxozsbhqptsztumqefadzxaoaovilhye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431253.5799425-242-95093327743959/AnsiballZ_stat.py'
Oct 02 18:54:13 compute-0 sudo[74575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:14 compute-0 python3.9[74577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:14 compute-0 sudo[74575]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:14 compute-0 sudo[74698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqydvqwvsnhgnuvlhzfzcowwmavovdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431253.5799425-242-95093327743959/AnsiballZ_copy.py'
Oct 02 18:54:14 compute-0 sudo[74698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:14 compute-0 python3.9[74700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431253.5799425-242-95093327743959/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8ec6159b17770346b63cf67b649e849d47f9bfbc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:14 compute-0 sudo[74698]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:15 compute-0 sudo[74850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwkhuzexewerdqlznbeglbpcrhkgowfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431254.8747628-242-28539545980330/AnsiballZ_stat.py'
Oct 02 18:54:15 compute-0 sudo[74850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:15 compute-0 python3.9[74852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:15 compute-0 sudo[74850]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:15 compute-0 sudo[74973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wniekseiblhcdwglhajnpyynpaackknm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431254.8747628-242-28539545980330/AnsiballZ_copy.py'
Oct 02 18:54:15 compute-0 sudo[74973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:16 compute-0 python3.9[74975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431254.8747628-242-28539545980330/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0ba8230f7bd65fdaafc1bb560aa96358742b150a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:16 compute-0 sudo[74973]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:16 compute-0 sudo[75125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdxqatehosngtyyhsyqdqzujyqdzbsaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431256.218182-242-92321893243496/AnsiballZ_stat.py'
Oct 02 18:54:16 compute-0 sudo[75125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:16 compute-0 python3.9[75127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:16 compute-0 sudo[75125]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:17 compute-0 sudo[75248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akawoezpkhzmkozfptpuzvlcmwfwmvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431256.218182-242-92321893243496/AnsiballZ_copy.py'
Oct 02 18:54:17 compute-0 sudo[75248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:17 compute-0 python3.9[75250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431256.218182-242-92321893243496/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e7260e06680f8dd9e28e4e90ad1035f6938b47f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:17 compute-0 sudo[75248]: pam_unix(sudo:session): session closed for user root
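
The three copy tasks above put the per-node OVN TLS material in place: tls.crt, ca.crt and tls.key under /var/lib/openstack/certs/ovn/default, all root-owned with mode 0600. A minimal way to sanity-check the result by hand (a sketch; these openssl invocations are an assumption, not part of this run):

  # cert must chain to the CA installed next to it
  sudo openssl verify -CAfile /var/lib/openstack/certs/ovn/default/ca.crt \
      /var/lib/openstack/certs/ovn/default/tls.crt
  # cert and key must carry the same public key (the two hashes must match)
  sudo openssl x509 -noout -pubkey -in /var/lib/openstack/certs/ovn/default/tls.crt | sha256sum
  sudo openssl pkey -pubout -in /var/lib/openstack/certs/ovn/default/tls.key | sha256sum
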
Oct 02 18:54:18 compute-0 sudo[75400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyrfhmvjiqrreohmrbzudkajzofnvhbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431258.2752407-302-194213425245157/AnsiballZ_file.py'
Oct 02 18:54:18 compute-0 sudo[75400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:18 compute-0 python3.9[75402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:18 compute-0 sudo[75400]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:19 compute-0 sudo[75552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crqccgkkgkezubdhthlpxqbtigxikvwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431259.190484-310-29458715400020/AnsiballZ_stat.py'
Oct 02 18:54:19 compute-0 sudo[75552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:19 compute-0 python3.9[75554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:19 compute-0 sudo[75552]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:20 compute-0 sudo[75675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdcdslywrjotqowqwjeqjawggeswktff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431259.190484-310-29458715400020/AnsiballZ_copy.py'
Oct 02 18:54:20 compute-0 sudo[75675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:20 compute-0 python3.9[75677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431259.190484-310-29458715400020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:20 compute-0 sudo[75675]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:21 compute-0 sudo[75827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknoogdcbqmdariuutyfyxvvttyanril ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431260.744229-326-266955851465290/AnsiballZ_file.py'
Oct 02 18:54:21 compute-0 sudo[75827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:21 compute-0 python3.9[75829]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:21 compute-0 sudo[75827]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:22 compute-0 sudo[75979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dalychcdtmjhxkimtmowfshkvrppjlop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431261.6307871-334-104548665302768/AnsiballZ_stat.py'
Oct 02 18:54:22 compute-0 sudo[75979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:22 compute-0 python3.9[75981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:22 compute-0 sudo[75979]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:22 compute-0 sudo[76102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnroebnwyykdooujcncfixzrsekimnsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431261.6307871-334-104548665302768/AnsiballZ_copy.py'
Oct 02 18:54:22 compute-0 sudo[76102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:22 compute-0 python3.9[76104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431261.6307871-334-104548665302768/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:22 compute-0 sudo[76102]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:23 compute-0 sudo[76254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enuwsoyvjivgenplqbblutpgwradhcaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431263.195637-350-29326729019731/AnsiballZ_file.py'
Oct 02 18:54:23 compute-0 sudo[76254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:23 compute-0 python3.9[76256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:23 compute-0 sudo[76254]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:24 compute-0 PackageKit[31071]: daemon quit
Oct 02 18:54:24 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 18:54:24 compute-0 sudo[76408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzompjimdquoinhpfrprynkxipgdbdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431264.016191-358-113920282099348/AnsiballZ_stat.py'
Oct 02 18:54:24 compute-0 sudo[76408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:24 compute-0 python3.9[76410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:24 compute-0 sudo[76408]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:25 compute-0 sudo[76531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbanjhwzoppbvcgqvqpujefvokqsuyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431264.016191-358-113920282099348/AnsiballZ_copy.py'
Oct 02 18:54:25 compute-0 sudo[76531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:25 compute-0 python3.9[76533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431264.016191-358-113920282099348/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:25 compute-0 sudo[76531]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:26 compute-0 sudo[76683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkzqslvffvyauuzjzvbyrvcmwschyjfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431265.8107002-374-167727616930905/AnsiballZ_file.py'
Oct 02 18:54:26 compute-0 sudo[76683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:26 compute-0 python3.9[76685]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:26 compute-0 sudo[76683]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:26 compute-0 sudo[76835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdxjoaqmqoevrvphzrsozzhpgyaaysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431266.624599-382-271311527475171/AnsiballZ_stat.py'
Oct 02 18:54:26 compute-0 sudo[76835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:27 compute-0 python3.9[76837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:27 compute-0 sudo[76835]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:27 compute-0 sudo[76958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzpruphpoqeourvtahcvxxwybawgbmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431266.624599-382-271311527475171/AnsiballZ_copy.py'
Oct 02 18:54:27 compute-0 sudo[76958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:28 compute-0 python3.9[76960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431266.624599-382-271311527475171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:28 compute-0 sudo[76958]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:28 compute-0 sudo[77110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjmykfdumjsrdjdzedbbifpozdakvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431268.349611-398-75946418500666/AnsiballZ_file.py'
Oct 02 18:54:28 compute-0 sudo[77110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:28 compute-0 python3.9[77112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:28 compute-0 sudo[77110]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:29 compute-0 sudo[77262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezwextuydeybylmazzezmsdvcfsfpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431269.1068683-406-30827407960700/AnsiballZ_stat.py'
Oct 02 18:54:29 compute-0 sudo[77262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:29 compute-0 python3.9[77264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:29 compute-0 sudo[77262]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:30 compute-0 sudo[77385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimnyhkekfozzwikagiiychsarhsxtwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431269.1068683-406-30827407960700/AnsiballZ_copy.py'
Oct 02 18:54:30 compute-0 sudo[77385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:30 compute-0 python3.9[77387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431269.1068683-406-30827407960700/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:30 compute-0 sudo[77385]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:31 compute-0 sudo[77537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qemhhmziwihgtncehnzqatfudwsjhkny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431270.7811759-422-126530073384257/AnsiballZ_file.py'
Oct 02 18:54:31 compute-0 sudo[77537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:31 compute-0 python3.9[77539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:31 compute-0 sudo[77537]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:31 compute-0 sudo[77689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvzasoeafipdladvlhwyrlxdjapwrsrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431271.5444496-430-189085118529183/AnsiballZ_stat.py'
Oct 02 18:54:31 compute-0 sudo[77689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:32 compute-0 python3.9[77691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:32 compute-0 sudo[77689]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:32 compute-0 sudo[77812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgsxsurbnljdsbxtunsgggkuveuykzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431271.5444496-430-189085118529183/AnsiballZ_copy.py'
Oct 02 18:54:32 compute-0 sudo[77812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:32 compute-0 python3.9[77814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431271.5444496-430-189085118529183/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:32 compute-0 sudo[77812]: pam_unix(sudo:session): session closed for user root
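
Every copy task above reports the same checksum (81b905e41eda5af3080e544e4dc3bafb229246e6), i.e. one identical tls-ca-bundle.pem fanned out to per-service directories under /var/lib/openstack/cacerts. A quick consistency check (sketch; the loop is an assumption, the directory names come from the tasks above):

  # each service bundle should report the sha1 logged by the copy tasks
  for d in ovn telemetry repo-setup libvirt bootstrap telemetry-power-monitoring; do
      sha1sum "/var/lib/openstack/cacerts/$d/tls-ca-bundle.pem"
  done
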
Oct 02 18:54:33 compute-0 sshd-session[70579]: Connection closed by 192.168.122.30 port 40746
Oct 02 18:54:33 compute-0 sshd-session[70576]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:54:33 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 02 18:54:33 compute-0 systemd[1]: session-18.scope: Consumed 33.397s CPU time.
Oct 02 18:54:33 compute-0 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Oct 02 18:54:33 compute-0 systemd-logind[793]: Removed session 18.
Oct 02 18:54:40 compute-0 sshd-session[77840]: Accepted publickey for zuul from 192.168.122.30 port 38818 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:54:40 compute-0 systemd-logind[793]: New session 19 of user zuul.
Oct 02 18:54:40 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 02 18:54:40 compute-0 sshd-session[77840]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:54:41 compute-0 python3.9[77993]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:54:42 compute-0 sudo[78147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcofwkjzhggywbogtwbkysvjthvrvzot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431281.85649-34-43548463050006/AnsiballZ_file.py'
Oct 02 18:54:42 compute-0 sudo[78147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:42 compute-0 python3.9[78149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:42 compute-0 sudo[78147]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:43 compute-0 sudo[78299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzngmzkjgnvhdcrmbaijkscsdcamztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431282.8713481-34-51241943688188/AnsiballZ_file.py'
Oct 02 18:54:43 compute-0 sudo[78299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:43 compute-0 python3.9[78301]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:54:43 compute-0 sudo[78299]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:44 compute-0 python3.9[78451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:54:45 compute-0 sudo[78601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zejdylxdxvjilpwsmqjoyzputmxmabpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431284.6593313-57-125468683255479/AnsiballZ_seboolean.py'
Oct 02 18:54:45 compute-0 sudo[78601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:45 compute-0 python3.9[78603]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 18:54:46 compute-0 sudo[78601]: pam_unix(sudo:session): session closed for user root
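
ansible.posix.seboolean with persistent=True and state=True is the module equivalent of toggling the boolean with setsebool -P, which also explains the SELinux policy reload (avc: op=load_policy) logged just below. Roughly, as a shell sketch using standard SELinux tooling:

  # persistently allow virt sandboxes to use netlink sockets
  sudo setsebool -P virt_sandbox_use_netlink on
  getsebool virt_sandbox_use_netlink   # expect: virt_sandbox_use_netlink --> on
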
Oct 02 18:54:47 compute-0 sudo[78757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khsqbznliqbuelihhsqucxujmownztyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431286.90351-67-274513743083764/AnsiballZ_setup.py'
Oct 02 18:54:47 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 02 18:54:47 compute-0 sudo[78757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:47 compute-0 python3.9[78759]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:54:47 compute-0 sudo[78757]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:48 compute-0 sudo[78841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgagtajznyguhuwkxgssbyvguqhmdfyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431286.90351-67-274513743083764/AnsiballZ_dnf.py'
Oct 02 18:54:48 compute-0 sudo[78841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:48 compute-0 python3.9[78843]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:54:49 compute-0 sudo[78841]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:50 compute-0 sudo[78994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvkagislhzluoppilcxurmqehjzqgrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431289.9551508-79-273209982798250/AnsiballZ_systemd.py'
Oct 02 18:54:50 compute-0 sudo[78994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:50 compute-0 python3.9[78996]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:54:51 compute-0 sudo[78994]: pam_unix(sudo:session): session closed for user root
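
The dnf task and the systemd task together amount to installing the openvswitch package and enabling/starting its unit. The same outcome from a shell (sketch of the equivalent commands):

  sudo dnf -y install openvswitch
  sudo systemctl enable --now openvswitch.service
  systemctl is-active openvswitch.service   # expect: active
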
Oct 02 18:54:51 compute-0 sudo[79150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idayxmpvzzetrgfobamrclzotobezxwn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431291.2496152-87-78648923057650/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 18:54:51 compute-0 sudo[79150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:52 compute-0 python3[79152]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 02 18:54:52 compute-0 sudo[79150]: pam_unix(sudo:session): session closed for user root
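
The snippet just written to /var/lib/edpm-config/firewall/ovn.yaml describes four rules: accept udp/4789 (VXLAN), accept untracked udp/6081 (Geneve), and NOTRACK udp/6081 on both the raw OUTPUT and PREROUTING paths. In plain nft terms this corresponds roughly to the ruleset below (a sketch; the table and chain names are illustrative assumptions, not the ones the edpm role generates):

  nft -c -f - <<'EOF'
  table inet edpm_example {
      chain input {
          type filter hook input priority filter; policy accept;
          udp dport 4789 accept comment "118 neutron vxlan networks"
          udp dport 6081 ct state untracked accept comment "119 neutron geneve networks"
      }
      chain raw_output {
          type filter hook output priority raw; policy accept;
          udp dport 6081 notrack comment "120 neutron geneve networks no conntrack"
      }
      chain raw_prerouting {
          type filter hook prerouting priority raw; policy accept;
          udp dport 6081 notrack comment "121 neutron geneve networks no conntrack"
      }
  }
  EOF
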
Oct 02 18:54:52 compute-0 sudo[79302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqpiozplsjynoofpglgkjlhapkegzyxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431292.40778-96-170320288296921/AnsiballZ_file.py'
Oct 02 18:54:52 compute-0 sudo[79302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:53 compute-0 python3.9[79304]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:53 compute-0 sudo[79302]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:53 compute-0 sudo[79454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsvitayzlgtcfcxmnsamyrrtxhrmppic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431293.2423718-104-35195670596380/AnsiballZ_stat.py'
Oct 02 18:54:53 compute-0 sudo[79454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:54 compute-0 python3.9[79456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:54 compute-0 sudo[79454]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:54 compute-0 sudo[79532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdpqfmxxvrirbtlxokvxdxfictebcyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431293.2423718-104-35195670596380/AnsiballZ_file.py'
Oct 02 18:54:54 compute-0 sudo[79532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:54 compute-0 python3.9[79534]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:54 compute-0 sudo[79532]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:55 compute-0 sudo[79684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphaaingpyzucrghlvfqzzcjyyhaihts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431294.8764815-116-197994295520614/AnsiballZ_stat.py'
Oct 02 18:54:55 compute-0 sudo[79684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:55 compute-0 python3.9[79686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:55 compute-0 sudo[79684]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:55 compute-0 sudo[79762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppchtbsxpkwaazszatttsqrkrqapzjsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431294.8764815-116-197994295520614/AnsiballZ_file.py'
Oct 02 18:54:55 compute-0 sudo[79762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:56 compute-0 python3.9[79764]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.dsvkrgwv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:56 compute-0 sudo[79762]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:56 compute-0 sudo[79914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmdkdcmnpkshavaajxnazshbjbtoqul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431296.3112402-128-47829260720109/AnsiballZ_stat.py'
Oct 02 18:54:56 compute-0 sudo[79914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:56 compute-0 python3.9[79916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:56 compute-0 sudo[79914]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:57 compute-0 sudo[79992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztzhhomhasqcipgcphlroeakqgynmqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431296.3112402-128-47829260720109/AnsiballZ_file.py'
Oct 02 18:54:57 compute-0 sudo[79992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:57 compute-0 python3.9[79994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:57 compute-0 sudo[79992]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:58 compute-0 sudo[80144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeqwyjfjaqyghdubkgybnddhidvvwmin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431297.735198-141-263524841492247/AnsiballZ_command.py'
Oct 02 18:54:58 compute-0 sudo[80144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:58 compute-0 python3.9[80146]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:54:58 compute-0 sudo[80144]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:59 compute-0 sudo[80297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmfbbeqknyhqurcmnkzdbxzvbqvwglrz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431298.7457252-149-180155871830694/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 18:54:59 compute-0 sudo[80297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:59 compute-0 python3[80299]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 18:54:59 compute-0 sudo[80297]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:00 compute-0 sudo[80449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tosgoppxctlaxvasejbsidahzrzxaizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431299.7405345-157-254219616767708/AnsiballZ_stat.py'
Oct 02 18:55:00 compute-0 sudo[80449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:00 compute-0 python3.9[80451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:00 compute-0 sudo[80449]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:01 compute-0 sudo[80574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aagqumbqgrcuhhoaeriudxzujpizaxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431299.7405345-157-254219616767708/AnsiballZ_copy.py'
Oct 02 18:55:01 compute-0 sudo[80574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:01 compute-0 python3.9[80576]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431299.7405345-157-254219616767708/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:01 compute-0 sudo[80574]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:02 compute-0 sudo[80726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aavznfcxljrxgusittughyukcirsbmxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431301.7289064-172-176369230490999/AnsiballZ_stat.py'
Oct 02 18:55:02 compute-0 sudo[80726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:02 compute-0 python3.9[80728]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:02 compute-0 sudo[80726]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:02 compute-0 sudo[80851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijyddpstwllmxeuklemfznbraydcjmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431301.7289064-172-176369230490999/AnsiballZ_copy.py'
Oct 02 18:55:02 compute-0 sudo[80851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:03 compute-0 python3.9[80853]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431301.7289064-172-176369230490999/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:03 compute-0 sudo[80851]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:03 compute-0 sudo[81003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrxtdlwwbbtfhjisnaczkeggjokieja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431303.2932327-187-115325722918349/AnsiballZ_stat.py'
Oct 02 18:55:03 compute-0 sudo[81003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:03 compute-0 python3.9[81005]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:03 compute-0 sudo[81003]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:04 compute-0 sudo[81128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zefuthxdwxcynpsyppqtvcgaownicvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431303.2932327-187-115325722918349/AnsiballZ_copy.py'
Oct 02 18:55:04 compute-0 sudo[81128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:04 compute-0 python3.9[81130]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431303.2932327-187-115325722918349/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:04 compute-0 sudo[81128]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:05 compute-0 sudo[81280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsoctxuukvvpeunufkzgijlfaeaubsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431304.790825-202-177961005726620/AnsiballZ_stat.py'
Oct 02 18:55:05 compute-0 sudo[81280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:05 compute-0 python3.9[81282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:05 compute-0 sudo[81280]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:06 compute-0 sudo[81405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnilurqrrfpoqjjxziahynpacogbxzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431304.790825-202-177961005726620/AnsiballZ_copy.py'
Oct 02 18:55:06 compute-0 sudo[81405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:06 compute-0 python3.9[81407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431304.790825-202-177961005726620/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:06 compute-0 sudo[81405]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:06 compute-0 sudo[81557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drmnhiqkapabkkpltwqvqshlnhcwtizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431306.476908-217-225998271182571/AnsiballZ_stat.py'
Oct 02 18:55:06 compute-0 sudo[81557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:07 compute-0 python3.9[81559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:07 compute-0 sudo[81557]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:07 compute-0 sudo[81682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcsouxbarasadkilnhxbwpcrjxyhrgjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431306.476908-217-225998271182571/AnsiballZ_copy.py'
Oct 02 18:55:07 compute-0 sudo[81682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:07 compute-0 python3.9[81684]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431306.476908-217-225998271182571/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:07 compute-0 sudo[81682]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:08 compute-0 sudo[81834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxyptkfkiywcsmrpcncwddocbtxuyjgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431308.1622326-232-59197664514184/AnsiballZ_file.py'
Oct 02 18:55:08 compute-0 sudo[81834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:08 compute-0 python3.9[81836]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:08 compute-0 sudo[81834]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:09 compute-0 sudo[81986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexpgqjedkpktrdazrhyiomioybzmijy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431308.9945946-240-36571812855450/AnsiballZ_command.py'
Oct 02 18:55:09 compute-0 sudo[81986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:09 compute-0 python3.9[81988]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:09 compute-0 sudo[81986]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:10 compute-0 sudo[82141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foyjplzjkapxzdryskpjlnvjpaqbcrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431309.9196794-248-35779510277317/AnsiballZ_blockinfile.py'
Oct 02 18:55:10 compute-0 sudo[82141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:10 compute-0 python3.9[82143]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:10 compute-0 sudo[82141]: pam_unix(sudo:session): session closed for user root
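
The blockinfile task leaves a marker-delimited block in /etc/sysconfig/nftables.conf, validating the file with nft -c -f before committing it, so the persisted boot-time configuration ends up containing (block content and markers as logged above):

  # BEGIN ANSIBLE MANAGED BLOCK
  include "/etc/nftables/iptables.nft"
  include "/etc/nftables/edpm-chains.nft"
  include "/etc/nftables/edpm-rules.nft"
  include "/etc/nftables/edpm-jumps.nft"
  # END ANSIBLE MANAGED BLOCK
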
Oct 02 18:55:11 compute-0 sudo[82293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqbashzvknbfoizrjplsgqarjlrzhwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431311.112865-257-223204744848788/AnsiballZ_command.py'
Oct 02 18:55:11 compute-0 sudo[82293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:11 compute-0 python3.9[82295]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:11 compute-0 sudo[82293]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:12 compute-0 sudo[82446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmlrnnixwsxglzmfbuysfegdwgktrps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431311.9301531-265-99646299794344/AnsiballZ_stat.py'
Oct 02 18:55:12 compute-0 sudo[82446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:12 compute-0 python3.9[82448]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:12 compute-0 sudo[82446]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:13 compute-0 sudo[82600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vicguikqdfsnhmekjwhlbolvndoojtpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431312.7919037-273-183506764447166/AnsiballZ_command.py'
Oct 02 18:55:13 compute-0 sudo[82600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:13 compute-0 python3.9[82602]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:13 compute-0 sudo[82600]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:14 compute-0 sudo[82755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yibljxoepmimbtolbmntxdvvxenjzffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431313.6560938-281-77124976161111/AnsiballZ_file.py'
Oct 02 18:55:14 compute-0 sudo[82755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:14 compute-0 python3.9[82757]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:14 compute-0 sudo[82755]: pam_unix(sudo:session): session closed for user root
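
Taken together, the commands above replay a fixed sequence: syntax-check the assembled ruleset, load the chain definitions, then (because /etc/nftables/edpm-rules.nft.changed existed) flush and reload the rules in one transaction, and finally drop the marker. Condensed into shell (a sketch of the commands the log records):

  # 1. syntax-check the assembled ruleset without applying it
  cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
      /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
      /etc/nftables/edpm-jumps.nft | nft -c -f -
  # 2. ensure the chains exist
  sudo nft -f /etc/nftables/edpm-chains.nft
  # 3. rules changed this run: flush and reload atomically
  cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
      /etc/nftables/edpm-update-jumps.nft | sudo nft -f -
  # 4. clear the change marker
  sudo rm -f /etc/nftables/edpm-rules.nft.changed
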
Oct 02 18:55:15 compute-0 python3.9[82907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:16 compute-0 sudo[83058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxhtcwwkaaxhxadjqmeqhqszdzfxita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431316.1575544-321-166142098877271/AnsiballZ_command.py'
Oct 02 18:55:16 compute-0 sudo[83058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:16 compute-0 python3.9[83060]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:16 compute-0 ovs-vsctl[83061]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 18:55:16 compute-0 sudo[83058]: pam_unix(sudo:session): session closed for user root
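
This ovs-vsctl call registers the chassis settings ovn-controller will use: encap IP 172.19.0.100, Geneve tunnels, and the SSL southbound database at ovsdbserver-sb.openstack.svc:6642. The values can be read back the same way they were set (sketch; read-only inspection commands, not part of this run):

  # inspect what was just written to the Open_vSwitch table
  sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-remote   # "ssl:ovsdbserver-sb.openstack.svc:6642"
  sudo ovs-vsctl --columns=external_ids list Open_vSwitch
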
Oct 02 18:55:17 compute-0 sudo[83211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oggfvgldtmgydylhvhthlyhndazfanpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431317.025394-330-110767626695303/AnsiballZ_command.py'
Oct 02 18:55:17 compute-0 sudo[83211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:17 compute-0 python3.9[83213]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:17 compute-0 sudo[83211]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:18 compute-0 sudo[83366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxdcotustjqdsjstfyhjepwgbuzxajya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431317.9063587-338-126672870959043/AnsiballZ_command.py'
Oct 02 18:55:18 compute-0 sudo[83366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:18 compute-0 python3.9[83368]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:18 compute-0 ovs-vsctl[83369]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 02 18:55:18 compute-0 sudo[83366]: pam_unix(sudo:session): session closed for user root
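
Because the preceding grep found no "Manager" entry in ovs-vsctl show, the play created a local passive manager socket so tooling can reach the OVSDB on tcp/6640. Verifying it afterwards (sketch):

  sudo ovs-vsctl get-manager            # expect: ptcp:6640:127.0.0.1
  sudo ovs-vsctl show | grep -A1 Manager
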
Oct 02 18:55:19 compute-0 python3.9[83519]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:20 compute-0 sudo[83671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpkwjyfhlfadbzgxedqdfynimaqiumyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431319.6614015-355-17266901304438/AnsiballZ_file.py'
Oct 02 18:55:20 compute-0 sudo[83671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:20 compute-0 python3.9[83673]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:20 compute-0 sudo[83671]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:21 compute-0 sudo[83823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmahfxnmyfjxlbavuwiqcahokosdmfca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431320.594317-363-225317801473543/AnsiballZ_stat.py'
Oct 02 18:55:21 compute-0 sudo[83823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:21 compute-0 python3.9[83825]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:21 compute-0 sudo[83823]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:21 compute-0 sudo[83901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqnrefcpneuhmkkguatixxnilsgdvutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431320.594317-363-225317801473543/AnsiballZ_file.py'
Oct 02 18:55:21 compute-0 sudo[83901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:21 compute-0 python3.9[83903]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:21 compute-0 sudo[83901]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:22 compute-0 sudo[84053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgvpgbxyhqxmkiqhlkngymnbjhhqeza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431322.1151-363-261093381095034/AnsiballZ_stat.py'
Oct 02 18:55:22 compute-0 sudo[84053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:22 compute-0 python3.9[84055]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:22 compute-0 sudo[84053]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:22 compute-0 sudo[84131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqcoyreqkqdlwxpjokoarhnulyansxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431322.1151-363-261093381095034/AnsiballZ_file.py'
Oct 02 18:55:22 compute-0 sudo[84131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:23 compute-0 python3.9[84133]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:23 compute-0 sudo[84131]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:23 compute-0 sudo[84283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poikjahgwjnwyqdxjcasvrawcbpzzfbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431323.4013326-386-92798399564445/AnsiballZ_file.py'
Oct 02 18:55:23 compute-0 sudo[84283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:23 compute-0 python3.9[84285]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:23 compute-0 sudo[84283]: pam_unix(sudo:session): session closed for user root
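
The mode=420 above is the decimal value of 0o644, most likely an unquoted 0644 in the playbook: YAML reads a leading-zero literal as octal, and ansible.builtin.file applies the resulting integer directly, so the directory still lands at 0644. The variant that silently breaks is an unquoted 644 (decimal 644 is 0o1204); quoting the mode avoids both readings. A minimal sketch:

    - name: Ensure systemd preset directory exists   # name assumed
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: '0644'   # quoted string: unambiguous, unlike bare 0644 or 644
      become: true
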
Oct 02 18:55:24 compute-0 sudo[84435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiboycnsmdltjdijofozjyapsgztrntz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431324.192846-394-47218230337208/AnsiballZ_stat.py'
Oct 02 18:55:24 compute-0 sudo[84435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:24 compute-0 python3.9[84437]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:24 compute-0 sudo[84435]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:25 compute-0 sudo[84513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnfgiotvwpcjvwrvkvhxgxozjdsgmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431324.192846-394-47218230337208/AnsiballZ_file.py'
Oct 02 18:55:25 compute-0 sudo[84513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:25 compute-0 python3.9[84515]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:25 compute-0 sudo[84513]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:25 compute-0 sudo[84665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwlnvkfczmlscjilfmsxekwnhcfszjfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431325.5658355-406-134940398402595/AnsiballZ_stat.py'
Oct 02 18:55:25 compute-0 sudo[84665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:26 compute-0 python3.9[84667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:26 compute-0 sudo[84665]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:26 compute-0 sudo[84743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxwnftovnkncfjpuafmjpfxqlkalzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431325.5658355-406-134940398402595/AnsiballZ_file.py'
Oct 02 18:55:26 compute-0 sudo[84743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:26 compute-0 python3.9[84745]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:26 compute-0 sudo[84743]: pam_unix(sudo:session): session closed for user root
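
systemd presets control whether `systemctl preset` and first-boot enablement turn a unit on. Given the file name, the preset written above most plausibly carries a single enable directive; a hedged reconstruction, expressed as the task that would install it (the file content is an assumption):

    - name: Install edpm-container-shutdown preset   # name assumed
      ansible.builtin.copy:
        dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
        content: |
          enable edpm-container-shutdown.service
        owner: root
        group: root
        mode: '0644'
      become: true
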
Oct 02 18:55:27 compute-0 sudo[84895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-febyfrbnlfkrjklxrbghgqlcbbkbslrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431326.8933938-418-128867586777929/AnsiballZ_systemd.py'
Oct 02 18:55:27 compute-0 sudo[84895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:27 compute-0 python3.9[84897]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:55:27 compute-0 systemd[1]: Reloading.
Oct 02 18:55:27 compute-0 systemd-sysv-generator[84929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:55:27 compute-0 systemd-rc-local-generator[84924]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:55:27 compute-0 sudo[84895]: pam_unix(sudo:session): session closed for user root
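
The enable-and-start step at 18:55:27 combines a daemon reload with enablement and startup in one module call; the Reloading line and the two generator warnings that follow are the direct consequence of daemon_reload=True. As YAML (sketch, task name assumed):

    - name: Enable and start edpm-container-shutdown   # name assumed
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        state: started
        enabled: true
        daemon_reload: true
      become: true
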
Oct 02 18:55:28 compute-0 sudo[85085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yarieqkyppbvoyeypvuucipmedecshkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431328.154934-426-113423091349966/AnsiballZ_stat.py'
Oct 02 18:55:28 compute-0 sudo[85085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:28 compute-0 python3.9[85087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:28 compute-0 sudo[85085]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:29 compute-0 sudo[85163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyyzxafphakaznbhgqasqqbziysbxelc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431328.154934-426-113423091349966/AnsiballZ_file.py'
Oct 02 18:55:29 compute-0 sudo[85163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:29 compute-0 python3.9[85165]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:29 compute-0 sudo[85163]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:30 compute-0 sudo[85315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmwreplbwbiejzkmsftupygrcvrjkzjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431329.6645894-438-270380519055958/AnsiballZ_stat.py'
Oct 02 18:55:30 compute-0 sudo[85315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:30 compute-0 python3.9[85317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:30 compute-0 sudo[85315]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:30 compute-0 sudo[85393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzkheekhackmzzirkuxskwrxqlisbire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431329.6645894-438-270380519055958/AnsiballZ_file.py'
Oct 02 18:55:30 compute-0 sudo[85393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:30 compute-0 python3.9[85395]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:30 compute-0 sudo[85393]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:31 compute-0 sudo[85545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prgudqhnsvjbiehlipftypulfgokzhxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431330.9986742-450-15300696843134/AnsiballZ_systemd.py'
Oct 02 18:55:31 compute-0 sudo[85545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:31 compute-0 python3.9[85547]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:55:31 compute-0 systemd[1]: Reloading.
Oct 02 18:55:31 compute-0 systemd-rc-local-generator[85575]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:55:31 compute-0 systemd-sysv-generator[85579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:55:32 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 18:55:32 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 18:55:32 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 18:55:32 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 18:55:32 compute-0 sudo[85545]: pam_unix(sudo:session): session closed for user root
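
netns-placeholder is a oneshot: systemd starts it, the payload runs, and the unit deactivates immediately, which is why Starting, Deactivated successfully, and Finished all land within the same second. The run-netns-placeholder.mount line matches a network namespace literally named "placeholder", so a plausible reconstruction of the unit, written as the copy task that would install it, is (the ExecStart is an assumption; the description is taken from the log):

    - name: Install netns-placeholder unit   # name assumed
      ansible.builtin.copy:
        dest: /etc/systemd/system/netns-placeholder.service
        mode: '0644'
        content: |
          [Unit]
          Description=Create netns directory
          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/ip netns add placeholder   # assumed payload
          [Install]
          WantedBy=multi-user.target
      become: true
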
Oct 02 18:55:32 compute-0 sudo[85739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siakhkeewndzzbeidrbfxvuylipqunqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431332.4553273-460-276558137315700/AnsiballZ_file.py'
Oct 02 18:55:32 compute-0 sudo[85739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:33 compute-0 python3.9[85741]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:33 compute-0 sudo[85739]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:33 compute-0 sudo[85891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxwzipvbmbqogalzupkgrmpawqvyesa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431333.3459659-468-153795919818993/AnsiballZ_stat.py'
Oct 02 18:55:33 compute-0 sudo[85891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:33 compute-0 python3.9[85893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:33 compute-0 sudo[85891]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:34 compute-0 sudo[86014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egvcoojsfdhjrddmkyudzjmrfzaxepcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431333.3459659-468-153795919818993/AnsiballZ_copy.py'
Oct 02 18:55:34 compute-0 sudo[86014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:34 compute-0 python3.9[86016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431333.3459659-468-153795919818993/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:34 compute-0 sudo[86014]: pam_unix(sudo:session): session closed for user root
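
The healthcheck script is installed with setype=container_file_t so the container, which bind-mounts /var/lib/openstack/healthchecks/ovn_controller at /openstack (see the volume list in the create further down), can read it under SELinux. The logged arguments map to roughly this task (only the name is assumed):

    - name: Install ovn_controller healthcheck   # name assumed
      ansible.builtin.copy:
        src: healthcheck
        dest: /var/lib/openstack/healthchecks/ovn_controller/
        owner: zuul
        group: zuul
        mode: '0700'
        setype: container_file_t
      become: true
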
Oct 02 18:55:35 compute-0 sudo[86166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokrfgyitthnxtuydikrearxoyxudzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431335.008357-485-28605598206200/AnsiballZ_file.py'
Oct 02 18:55:35 compute-0 sudo[86166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:35 compute-0 python3.9[86168]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:35 compute-0 sudo[86166]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:36 compute-0 sudo[86318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trnihmvgbotdunhhzaokbntjzaymxajt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431335.8306222-493-20060858086939/AnsiballZ_stat.py'
Oct 02 18:55:36 compute-0 sudo[86318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:36 compute-0 python3.9[86320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:36 compute-0 sudo[86318]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:36 compute-0 sudo[86441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyurhanrviwzrcuwnkqtyqcscbtbhmlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431335.8306222-493-20060858086939/AnsiballZ_copy.py'
Oct 02 18:55:36 compute-0 sudo[86441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:37 compute-0 python3.9[86443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431335.8306222-493-20060858086939/.source.json _original_basename=.84fc0k38 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:37 compute-0 sudo[86441]: pam_unix(sudo:session): session closed for user root
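
ovn_controller.json is the kolla entrypoint contract: kolla_set_configs reads it at container start (the "Loading config file at /var/lib/kolla/config_files/config.json" lines at 18:55:56) and writes the command to /run_command. Given the command the container later executes, the file plausibly reduces to the sketch below (any config_files entries are omitted; this is a reconstruction, not the shipped file):

    - name: Write ovn_controller kolla config   # name assumed
      ansible.builtin.copy:
        dest: /var/lib/kolla/config_files/ovn_controller.json
        mode: '0600'
        content: |
          {
            "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
          }
      become: true
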
Oct 02 18:55:37 compute-0 sudo[86593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadfgszxkveylwjqfyysmrzlnrazyvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431337.4301116-508-192009669033/AnsiballZ_file.py'
Oct 02 18:55:37 compute-0 sudo[86593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:38 compute-0 python3.9[86595]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:38 compute-0 sudo[86593]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:38 compute-0 sudo[86745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjyswdnpavumeuuhtwovrmcnncefoigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431338.3047276-516-92243367963323/AnsiballZ_stat.py'
Oct 02 18:55:38 compute-0 sudo[86745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:38 compute-0 sudo[86745]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:39 compute-0 sudo[86868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prygenpdepsrdcpxuqhqxvjjcnhxqigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431338.3047276-516-92243367963323/AnsiballZ_copy.py'
Oct 02 18:55:39 compute-0 sudo[86868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:39 compute-0 sudo[86868]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:40 compute-0 sudo[87020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhlllnduokhbpztuhleynnwtbniqgwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431339.89564-533-209898709228304/AnsiballZ_container_config_data.py'
Oct 02 18:55:40 compute-0 sudo[87020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:40 compute-0 python3.9[87022]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 18:55:40 compute-0 sudo[87020]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:41 compute-0 sudo[87172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsgmikehdhmtsbxghpdkogaxftzfhnmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431341.0222943-542-112685170885989/AnsiballZ_container_config_hash.py'
Oct 02 18:55:41 compute-0 sudo[87172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:41 compute-0 python3.9[87174]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 18:55:41 compute-0 sudo[87172]: pam_unix(sudo:session): session closed for user root
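
container_config_data and container_config_hash are edpm-ansible's own modules: the first collects every *.json under the startup-config directory, the second hashes rendered config volumes under /var/lib/config-data so that changed configuration can trigger a container restart. Invoked as tasks with the logged parameters (sketch; task names assumed):

    - name: Collect ovn_controller startup configs   # name assumed
      container_config_data:
        config_path: /var/lib/edpm-config/container-startup-config/ovn_controller
        config_pattern: '*.json'
        config_overrides: {}

    - name: Hash config volumes                      # name assumed
      container_config_hash:
        config_vol_prefix: /var/lib/config-data
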
Oct 02 18:55:42 compute-0 sudo[87324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mufwalttrqgreuqwbfbdjqkipgjxyjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431342.168631-551-205133043890088/AnsiballZ_podman_container_info.py'
Oct 02 18:55:42 compute-0 sudo[87324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:42 compute-0 python3.9[87326]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 18:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:43 compute-0 sudo[87324]: pam_unix(sudo:session): session closed for user root
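
podman_container_info with no name returns inspect data for every container on the host; edpm_container_manage uses the result to decide whether ovn_controller must be created or can be reused. As a task (sketch):

    - name: Inspect existing containers   # name assumed
      containers.podman.podman_container_info:
      register: existing_containers
      become: true
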
Oct 02 18:55:44 compute-0 sudo[87488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumzpihwrrsimjeyjirteswluddeplvb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431343.6076748-564-90099189198478/AnsiballZ_edpm_container_manage.py'
Oct 02 18:55:44 compute-0 sudo[87488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:44 compute-0 python3[87490]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 18:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4187091325-lower\x2dmapped.mount: Deactivated successfully.
Oct 02 18:55:50 compute-0 podman[87504]: 2025-10-02 18:55:50.258072082 +0000 UTC m=+5.571755182 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 18:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:50 compute-0 podman[87623]: 2025-10-02 18:55:50.444807147 +0000 UTC m=+0.062835425 container create daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 18:55:50 compute-0 podman[87623]: 2025-10-02 18:55:50.411978057 +0000 UTC m=+0.030006385 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 18:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:50 compute-0 python3[87490]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 18:55:50 compute-0 sudo[87488]: pam_unix(sudo:session): session closed for user root
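
The PODMAN-CONTAINER-DEBUG line is edpm_container_manage echoing the exact CLI it ran. The same container expressed declaratively through containers.podman.podman_container would look roughly like this (a sketch, not the role's actual implementation; the volume list is abridged, the full set is in the create line above):

    - name: Define ovn_controller container   # sketch of the logged create
      containers.podman.podman_container:
        name: ovn_controller
        image: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
        state: present
        privileged: true
        network: host
        user: root
        env:
          KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        healthcheck: /openstack/healthcheck
        conmon_pidfile: /run/ovn_controller.pid
        log_driver: journald
        volume:
          - /lib/modules:/lib/modules:ro
          - /run:/run
          - /var/lib/openvswitch/ovn:/run/ovn:shared,z
          - /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro
          - /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z
      become: true
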
Oct 02 18:55:51 compute-0 sudo[87809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwlmjsnnetwqdwcfucxniloagjapuwag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431350.8803508-572-119071815799305/AnsiballZ_stat.py'
Oct 02 18:55:51 compute-0 sudo[87809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:51 compute-0 python3.9[87811]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:51 compute-0 sudo[87809]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:52 compute-0 sudo[87963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reqwbdaymwhtiujawnfbzkawpnzwcxwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431351.8461912-581-133846678107482/AnsiballZ_file.py'
Oct 02 18:55:52 compute-0 sudo[87963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:52 compute-0 python3.9[87965]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:52 compute-0 sudo[87963]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:52 compute-0 sudo[88039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvxoaardbilrlbqslfyqowxswnxpill ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431351.8461912-581-133846678107482/AnsiballZ_stat.py'
Oct 02 18:55:52 compute-0 sudo[88039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:52 compute-0 python3.9[88041]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:52 compute-0 sudo[88039]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:53 compute-0 sudo[88190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkgejzzadsbrspflvsmldseqshenwjta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431352.9885018-581-268287561307437/AnsiballZ_copy.py'
Oct 02 18:55:53 compute-0 sudo[88190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:53 compute-0 python3.9[88192]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431352.9885018-581-268287561307437/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:53 compute-0 sudo[88190]: pam_unix(sudo:session): session closed for user root
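
The unit copied here is what systemd later reports as "ovn_controller container"; the edpm-start-podman-container messages at 18:55:55 show it wraps that helper, and the create above registered --conmon-pidfile /run/ovn_controller.pid for it to track. A plausible reconstruction (directives other than the description and the ExecStart helper visible in the log are assumptions):

    - name: Install edpm_ovn_controller unit   # name assumed
      ansible.builtin.copy:
        dest: /etc/systemd/system/edpm_ovn_controller.service
        owner: root
        group: root
        mode: '0644'
        content: |
          [Unit]
          Description=ovn_controller container
          [Service]
          Type=forking                               # assumed
          PIDFile=/run/ovn_controller.pid            # matches --conmon-pidfile
          ExecStart=/var/local/libexec/edpm-start-podman-container ovn_controller
          Restart=always                             # assumed
          [Install]
          WantedBy=multi-user.target
      become: true
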
Oct 02 18:55:54 compute-0 sudo[88266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopkvgbjcukctywympyrxphjasuqtsdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431352.9885018-581-268287561307437/AnsiballZ_systemd.py'
Oct 02 18:55:54 compute-0 sudo[88266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:54 compute-0 python3.9[88268]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 18:55:54 compute-0 systemd[1]: Reloading.
Oct 02 18:55:54 compute-0 systemd-rc-local-generator[88296]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:55:54 compute-0 systemd-sysv-generator[88300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:55:54 compute-0 sudo[88266]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:55 compute-0 sudo[88377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxzqflmkopivxvafdpqyvbztlhcawfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431352.9885018-581-268287561307437/AnsiballZ_systemd.py'
Oct 02 18:55:55 compute-0 sudo[88377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:55 compute-0 python3.9[88379]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:55:55 compute-0 systemd[1]: Reloading.
Oct 02 18:55:55 compute-0 systemd-rc-local-generator[88403]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:55:55 compute-0 systemd-sysv-generator[88407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:55:55 compute-0 systemd[1]: Starting ovn_controller container...
Oct 02 18:55:55 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 02 18:55:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 18:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897821e1ddcb09320057b509f29582505aadad21c8dd383d4960cd29e693ee71/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 18:55:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.
Oct 02 18:55:55 compute-0 podman[88420]: 2025-10-02 18:55:55.850983398 +0000 UTC m=+0.147093126 container init daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 18:55:55 compute-0 ovn_controller[88435]: + sudo -E kolla_set_configs
Oct 02 18:55:55 compute-0 podman[88420]: 2025-10-02 18:55:55.890659946 +0000 UTC m=+0.186769644 container start daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 18:55:55 compute-0 edpm-start-podman-container[88420]: ovn_controller
Oct 02 18:55:55 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 18:55:55 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 18:55:55 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 18:55:55 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 18:55:56 compute-0 systemd[88476]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 18:55:56 compute-0 edpm-start-podman-container[88419]: Creating additional drop-in dependency for "ovn_controller" (daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97)
Oct 02 18:55:56 compute-0 podman[88442]: 2025-10-02 18:55:56.022249062 +0000 UTC m=+0.116539229 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 18:55:56 compute-0 systemd[1]: daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97-51e5f300454d5e94.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 18:55:56 compute-0 systemd[1]: daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97-51e5f300454d5e94.service: Failed with result 'exit-code'.
Oct 02 18:55:56 compute-0 systemd[1]: Reloading.
Oct 02 18:55:56 compute-0 systemd-rc-local-generator[88519]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:55:56 compute-0 systemd[88476]: Queued start job for default target Main User Target.
Oct 02 18:55:56 compute-0 systemd-sysv-generator[88522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:55:56 compute-0 systemd[88476]: Created slice User Application Slice.
Oct 02 18:55:56 compute-0 systemd[88476]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 18:55:56 compute-0 systemd[88476]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 18:55:56 compute-0 systemd[88476]: Reached target Paths.
Oct 02 18:55:56 compute-0 systemd[88476]: Reached target Timers.
Oct 02 18:55:56 compute-0 systemd[88476]: Starting D-Bus User Message Bus Socket...
Oct 02 18:55:56 compute-0 systemd[88476]: Starting Create User's Volatile Files and Directories...
Oct 02 18:55:56 compute-0 systemd[88476]: Listening on D-Bus User Message Bus Socket.
Oct 02 18:55:56 compute-0 systemd[88476]: Reached target Sockets.
Oct 02 18:55:56 compute-0 systemd[88476]: Finished Create User's Volatile Files and Directories.
Oct 02 18:55:56 compute-0 systemd[88476]: Reached target Basic System.
Oct 02 18:55:56 compute-0 systemd[88476]: Reached target Main User Target.
Oct 02 18:55:56 compute-0 systemd[88476]: Startup finished in 151ms.
Oct 02 18:55:56 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 18:55:56 compute-0 systemd[1]: Started ovn_controller container.
Oct 02 18:55:56 compute-0 systemd[1]: Started Session c1 of User root.
Oct 02 18:55:56 compute-0 sudo[88377]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:56 compute-0 ovn_controller[88435]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 18:55:56 compute-0 ovn_controller[88435]: INFO:__main__:Validating config file
Oct 02 18:55:56 compute-0 ovn_controller[88435]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 18:55:56 compute-0 ovn_controller[88435]: INFO:__main__:Writing out command to execute
Oct 02 18:55:56 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: ++ cat /run_command
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + ARGS=
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + sudo kolla_copy_cacerts
Oct 02 18:55:56 compute-0 systemd[1]: Started Session c2 of User root.
Oct 02 18:55:56 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + [[ ! -n '' ]]
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + . kolla_extend_start
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 02 18:55:56 compute-0 ovn_controller[88435]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + umask 0022
Oct 02 18:55:56 compute-0 ovn_controller[88435]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.4747] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.4757] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.4774] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.4782] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.4787] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 18:55:56 compute-0 kernel: br-int: entered promiscuous mode
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 18:55:56 compute-0 ovn_controller[88435]: 2025-10-02T18:55:56Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.5070] manager: (ovn-1f936b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 02 18:55:56 compute-0 systemd-udevd[88593]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:55:56 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 02 18:55:56 compute-0 systemd-udevd[88595]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.5349] device (genev_sys_6081): carrier: link connected
Oct 02 18:55:56 compute-0 NetworkManager[44968]: <info>  [1759431356.5353] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 02 18:55:56 compute-0 sudo[88700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmqevncczwbsoiuvfepknwgxnjftuaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431356.5724955-609-277279807433209/AnsiballZ_command.py'
Oct 02 18:55:56 compute-0 sudo[88700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:57 compute-0 python3.9[88702]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:57 compute-0 ovs-vsctl[88703]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 18:55:57 compute-0 sudo[88700]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:57 compute-0 sudo[88853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvoqvedxeyikixvbznugmtsqmhcnrmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431357.434873-617-21184054278598/AnsiballZ_command.py'
Oct 02 18:55:57 compute-0 sudo[88853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:58 compute-0 python3.9[88855]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:58 compute-0 ovs-vsctl[88857]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 18:55:58 compute-0 sudo[88853]: pam_unix(sudo:session): session closed for user root
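
The db_ctl_base ERR above is expected on a clean node: ovs-vsctl get fails when external_ids:ovn-cms-options was never set, and the next step removes the key regardless. A tolerant variant of the same lookup uses --if-exists, which prints nothing instead of erroring (sketch):

    - name: Read ovn-cms-options if present   # name assumed
      ansible.builtin.command:
        cmd: ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
      register: cms_options
      changed_when: false
      become: true
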
Oct 02 18:55:58 compute-0 sudo[89008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyuffkfyulemzrlyhetobxtufvqnelyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431358.5279758-631-109749318027513/AnsiballZ_command.py'
Oct 02 18:55:58 compute-0 sudo[89008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:59 compute-0 python3.9[89010]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:59 compute-0 ovs-vsctl[89011]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 18:55:59 compute-0 sudo[89008]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:59 compute-0 sshd-session[77843]: Connection closed by 192.168.122.30 port 38818
Oct 02 18:55:59 compute-0 sshd-session[77840]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:55:59 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 02 18:55:59 compute-0 systemd[1]: session-19.scope: Consumed 1min 8.145s CPU time.
Oct 02 18:55:59 compute-0 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Oct 02 18:55:59 compute-0 systemd-logind[793]: Removed session 19.
Oct 02 18:56:05 compute-0 sshd-session[89036]: Accepted publickey for zuul from 192.168.122.30 port 40076 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 18:56:05 compute-0 systemd-logind[793]: New session 21 of user zuul.
Oct 02 18:56:05 compute-0 systemd[1]: Started Session 21 of User zuul.
Oct 02 18:56:05 compute-0 sshd-session[89036]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:56:06 compute-0 python3.9[89189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:56:06 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 18:56:06 compute-0 systemd[88476]: Activating special unit Exit the Session...
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped target Main User Target.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped target Basic System.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped target Paths.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped target Sockets.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped target Timers.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 18:56:06 compute-0 systemd[88476]: Closed D-Bus User Message Bus Socket.
Oct 02 18:56:06 compute-0 systemd[88476]: Stopped Create User's Volatile Files and Directories.
Oct 02 18:56:06 compute-0 systemd[88476]: Removed slice User Application Slice.
Oct 02 18:56:06 compute-0 systemd[88476]: Reached target Shutdown.
Oct 02 18:56:06 compute-0 systemd[88476]: Finished Exit the Session.
Oct 02 18:56:06 compute-0 systemd[88476]: Reached target Exit the Session.
Oct 02 18:56:06 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 18:56:06 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 18:56:06 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 18:56:06 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 18:56:06 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 18:56:06 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 18:56:06 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 18:56:07 compute-0 sudo[89345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwexmxekfroaxqdpdkenmtcdxmevoqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431366.7994847-34-269512893668501/AnsiballZ_command.py'
Oct 02 18:56:07 compute-0 sudo[89345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:07 compute-0 python3.9[89347]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:07 compute-0 sudo[89345]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:08 compute-0 sudo[89510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcwntwlnfkcbclnenxeggukbmqtkkejm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431368.0502257-45-14413295763567/AnsiballZ_systemd_service.py'
Oct 02 18:56:08 compute-0 sudo[89510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:09 compute-0 python3.9[89512]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 18:56:09 compute-0 systemd[1]: Reloading.
Oct 02 18:56:09 compute-0 systemd-rc-local-generator[89541]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:09 compute-0 systemd-sysv-generator[89544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:09 compute-0 sudo[89510]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:10 compute-0 python3.9[89698]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:56:10 compute-0 network[89715]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:56:10 compute-0 network[89716]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:56:10 compute-0 network[89717]: It is advised to switch to 'NetworkManager' instead for network management.
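These three deprecation lines are the SysV network initscript's banner, apparently triggered when service_facts polls the status of every service; the same script is what the sysv-generator complains about after each daemon-reload in this log. To see the compatibility unit systemd synthesizes for it:

    # Inspect the unit generated for the SysV 'network' script.
    systemctl cat network.service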
Oct 02 18:56:14 compute-0 sudo[89979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtxbgjmbutpsbpatkcmgtjqaijatbxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431373.9708722-64-175221739161867/AnsiballZ_systemd_service.py'
Oct 02 18:56:14 compute-0 sudo[89979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:14 compute-0 python3.9[89981]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:14 compute-0 sudo[89979]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:15 compute-0 sudo[90133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmypsftskglezfdgdnlcvxzhkdpxmnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431374.7909677-64-160882772925666/AnsiballZ_systemd_service.py'
Oct 02 18:56:15 compute-0 sudo[90133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:15 compute-0 python3.9[90135]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:15 compute-0 sudo[90133]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:16 compute-0 sudo[90286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jowfwuylgxrpggbvnvonypkxqbgatang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431375.6782155-64-199272077960738/AnsiballZ_systemd_service.py'
Oct 02 18:56:16 compute-0 sudo[90286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:16 compute-0 python3.9[90288]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:16 compute-0 sudo[90286]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:16 compute-0 sudo[90439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znwjqgwvovswgmgeywqkphpqbyxbruif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431376.5615995-64-219923601966356/AnsiballZ_systemd_service.py'
Oct 02 18:56:16 compute-0 sudo[90439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:17 compute-0 python3.9[90441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:17 compute-0 sudo[90439]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:17 compute-0 sudo[90592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jufjxqyjjppodlsyiffkcoeugxzdvklm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431377.5224757-64-124211673809038/AnsiballZ_systemd_service.py'
Oct 02 18:56:17 compute-0 sudo[90592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:18 compute-0 python3.9[90594]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:18 compute-0 sudo[90592]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:19 compute-0 sudo[90745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ureotqscvjhwntoilpcsgfupakcagtub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431378.72512-64-197773060518991/AnsiballZ_systemd_service.py'
Oct 02 18:56:19 compute-0 sudo[90745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:19 compute-0 python3.9[90747]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:19 compute-0 sudo[90745]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:20 compute-0 sudo[90898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csucfkvnveyjwzbirzhlozyuvuglvqkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431379.642172-64-254952358126197/AnsiballZ_systemd_service.py'
Oct 02 18:56:20 compute-0 sudo[90898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:20 compute-0 python3.9[90900]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:20 compute-0 sudo[90898]: pam_unix(sudo:session): session closed for user root
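The seven systemd_service tasks between 18:56:14 and 18:56:20 stop and disable the TripleO-era libvirt units one by one. Outside Ansible the same sequence would be a loop over the unit names seen in the log (the loop itself is a sketch, not the playbook's code):

    # Stop and disable each legacy TripleO nova/libvirt unit.
    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        systemctl disable --now "$unit"
    done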
Oct 02 18:56:21 compute-0 sudo[91051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itufntcggunrbuixgbuluezigyooxqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431380.7916794-116-47225704884796/AnsiballZ_file.py'
Oct 02 18:56:21 compute-0 sudo[91051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:21 compute-0 python3.9[91053]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:21 compute-0 sudo[91051]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:22 compute-0 sudo[91203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzpmuqnggqtenspjcypapinnyidnuvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431381.7323132-116-205823378599103/AnsiballZ_file.py'
Oct 02 18:56:22 compute-0 sudo[91203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:22 compute-0 python3.9[91205]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:22 compute-0 sudo[91203]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:22 compute-0 sudo[91355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvbkmpdkgylgtolbwistgbpaftqbbzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431382.4853163-116-281205305335546/AnsiballZ_file.py'
Oct 02 18:56:22 compute-0 sudo[91355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:23 compute-0 python3.9[91357]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:23 compute-0 sudo[91355]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:23 compute-0 sudo[91507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxjwcitqucxkvfiwskloypbkdsaamwtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431383.3270822-116-98475413259215/AnsiballZ_file.py'
Oct 02 18:56:23 compute-0 sudo[91507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:23 compute-0 python3.9[91509]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:23 compute-0 sudo[91507]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:24 compute-0 sudo[91659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogqyamcdytvisfgsqqnhlpcfrwapvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431384.1374443-116-40014463095324/AnsiballZ_file.py'
Oct 02 18:56:24 compute-0 sudo[91659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:24 compute-0 python3.9[91661]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:24 compute-0 sudo[91659]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:25 compute-0 sudo[91811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntwfpqhnjhjmliiwkwzphwqbzovrwzyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431384.942524-116-202624496352446/AnsiballZ_file.py'
Oct 02 18:56:25 compute-0 sudo[91811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:25 compute-0 python3.9[91813]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:25 compute-0 sudo[91811]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:26 compute-0 sudo[91963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlooolooatrrqiydhcsbwtwlxxgqkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431385.6830006-116-188896212685677/AnsiballZ_file.py'
Oct 02 18:56:26 compute-0 sudo[91963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:26 compute-0 ovn_controller[88435]: 2025-10-02T18:56:26Z|00025|memory|INFO|17152 kB peak resident set size after 29.8 seconds
Oct 02 18:56:26 compute-0 ovn_controller[88435]: 2025-10-02T18:56:26Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 02 18:56:26 compute-0 podman[91965]: 2025-10-02 18:56:26.269845924 +0000 UTC m=+0.147701664 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 18:56:26 compute-0 python3.9[91966]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:26 compute-0 sudo[91963]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:27 compute-0 sudo[92140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrlofjewwlcclwrpxlrywtscuopsygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431386.6569164-166-63261728614842/AnsiballZ_file.py'
Oct 02 18:56:27 compute-0 sudo[92140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:27 compute-0 python3.9[92142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:27 compute-0 sudo[92140]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:27 compute-0 sudo[92292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjduqtjmgahocabkeeldzcskghvlkeiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431387.5050557-166-50834912275506/AnsiballZ_file.py'
Oct 02 18:56:27 compute-0 sudo[92292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:28 compute-0 python3.9[92294]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:28 compute-0 sudo[92292]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:28 compute-0 sudo[92444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdmvimuxtseeswuqrmefnbokosvzaoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431388.3098857-166-56401132098532/AnsiballZ_file.py'
Oct 02 18:56:28 compute-0 sudo[92444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:28 compute-0 python3.9[92446]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:28 compute-0 sudo[92444]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:29 compute-0 sudo[92596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekxicedazdtrhaszkzkfywvxrqtwlro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431389.1195283-166-212854537800525/AnsiballZ_file.py'
Oct 02 18:56:29 compute-0 sudo[92596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:29 compute-0 python3.9[92598]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:29 compute-0 sudo[92596]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:30 compute-0 sudo[92748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxzgbqlelapvewopduvestyydixegbby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431389.8625665-166-171314091712054/AnsiballZ_file.py'
Oct 02 18:56:30 compute-0 sudo[92748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:30 compute-0 python3.9[92750]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:30 compute-0 sudo[92748]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:31 compute-0 sudo[92900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fohqujjjeieoonmhukfqgfbxbohrdvcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431390.651653-166-111253590519338/AnsiballZ_file.py'
Oct 02 18:56:31 compute-0 sudo[92900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:31 compute-0 python3.9[92902]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:31 compute-0 sudo[92900]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:31 compute-0 sudo[93052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uighlnuoqdtbobdjmeerhgrhoxcityhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431391.4103265-166-187839741056532/AnsiballZ_file.py'
Oct 02 18:56:31 compute-0 sudo[93052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:32 compute-0 python3.9[93054]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:32 compute-0 sudo[93052]: pam_unix(sudo:session): session closed for user root
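With the units stopped, the file tasks from 18:56:21 to 18:56:32 delete the unit files themselves, first under /usr/lib/systemd/system and then under /etc/systemd/system. A shell rendering of the same cleanup (paths from the log; the glob is shorthand for the six service names listed above):

    # Remove the legacy unit files from both systemd search paths.
    for dir in /usr/lib/systemd/system /etc/systemd/system; do
        rm -f "$dir"/tripleo_nova_libvirt.target \
              "$dir"/tripleo_nova_virt*.service
    done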
Oct 02 18:56:32 compute-0 sudo[93204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcpnadsctmqpjkneygqvsenkuqnkktf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431392.341574-217-277835506666206/AnsiballZ_command.py'
Oct 02 18:56:32 compute-0 sudo[93204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:32 compute-0 python3.9[93206]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:33 compute-0 sudo[93204]: pam_unix(sudo:session): session closed for user root
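The raw shell from the task above, reformatted for readability: certmonger is disabled only if it is currently active, and it is masked only when no unit file already exists under /etc/systemd/system, so the mask symlink never clobbers a local override:

    # Disable certmonger if running; mask it unless a local unit
    # file is already present at /etc/systemd/system.
    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi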
Oct 02 18:56:33 compute-0 python3.9[93358]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 18:56:34 compute-0 sudo[93508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idrdnetiumfzsiamgmtfcpdrciyzhypo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431394.2763913-235-78174382315423/AnsiballZ_systemd_service.py'
Oct 02 18:56:34 compute-0 sudo[93508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:34 compute-0 python3.9[93510]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 18:56:35 compute-0 systemd[1]: Reloading.
Oct 02 18:56:35 compute-0 systemd-rc-local-generator[93539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:35 compute-0 systemd-sysv-generator[93542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:35 compute-0 sudo[93508]: pam_unix(sudo:session): session closed for user root
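Both daemon_reload tasks (18:56:09 above and 18:56:35 here) boil down to a single command; the generator lines that follow each "Reloading." entry are routine output from the rc.local and SysV compatibility generators re-running, not new problems:

    # Re-read unit files after the deletions above.
    systemctl daemon-reload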
Oct 02 18:56:35 compute-0 sudo[93696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmvsekaylglroqkfqhisgjaogtynmmwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431395.519329-243-248592420129606/AnsiballZ_command.py'
Oct 02 18:56:35 compute-0 sudo[93696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:36 compute-0 python3.9[93698]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:36 compute-0 sudo[93696]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:36 compute-0 sudo[93849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwkbrgyptitpjvjyvngomahxgzwhdcfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431396.290331-243-255386339628506/AnsiballZ_command.py'
Oct 02 18:56:36 compute-0 sudo[93849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:36 compute-0 python3.9[93851]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:36 compute-0 sudo[93849]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:37 compute-0 sudo[94002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvncnuzejzrbubsapalpwnccrmsiznbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431397.0483341-243-104335698696673/AnsiballZ_command.py'
Oct 02 18:56:37 compute-0 sudo[94002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:37 compute-0 python3.9[94004]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:37 compute-0 sudo[94002]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:38 compute-0 sudo[94155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcqfctaypqsxevkojcuzqoqidklqrpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431397.8197865-243-13988908006214/AnsiballZ_command.py'
Oct 02 18:56:38 compute-0 sudo[94155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:38 compute-0 python3.9[94157]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:38 compute-0 sudo[94155]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:39 compute-0 sudo[94308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oailhdyswglgjntglbgqdfmydzhqakit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431398.6530929-243-135630690707557/AnsiballZ_command.py'
Oct 02 18:56:39 compute-0 sudo[94308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:39 compute-0 python3.9[94310]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:39 compute-0 sudo[94308]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:39 compute-0 sudo[94461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqemsouyihxcwndrlxtbcglnvgbuqevt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431399.4536343-243-202664943571918/AnsiballZ_command.py'
Oct 02 18:56:39 compute-0 sudo[94461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:40 compute-0 python3.9[94463]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:40 compute-0 sudo[94461]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:40 compute-0 sudo[94614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntcjpqtonkpquncnmsixqtoruvtknosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431400.2995384-243-173663338577695/AnsiballZ_command.py'
Oct 02 18:56:40 compute-0 sudo[94614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:40 compute-0 python3.9[94616]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:40 compute-0 sudo[94614]: pam_unix(sudo:session): session closed for user root
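After that reload, each removed unit gets a reset-failed call so systemd forgets any lingering failed state and the deleted units drop out of `systemctl --failed`. Equivalent shell, over the same seven units as in the disable loop above:

    # Clear remembered failed state for the removed units.
    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        systemctl reset-failed "$unit"
    done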
Oct 02 18:56:41 compute-0 sudo[94767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdfkpwsxkucufhqvgtheoyeemhhtnhkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431401.3122518-297-109497861147896/AnsiballZ_getent.py'
Oct 02 18:56:41 compute-0 sudo[94767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:42 compute-0 python3.9[94769]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 18:56:42 compute-0 sudo[94767]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:42 compute-0 sudo[94920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtwgxchvaykaumpqxfbfdfulaqwtllyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431402.3606596-305-31444160780320/AnsiballZ_group.py'
Oct 02 18:56:42 compute-0 sudo[94920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:43 compute-0 python3.9[94922]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:56:43 compute-0 groupadd[94923]: group added to /etc/group: name=libvirt, GID=42473
Oct 02 18:56:43 compute-0 groupadd[94923]: group added to /etc/gshadow: name=libvirt
Oct 02 18:56:43 compute-0 groupadd[94923]: new group: name=libvirt, GID=42473
Oct 02 18:56:43 compute-0 sudo[94920]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:44 compute-0 sudo[95078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syivizqgalebiiatvrkdsvdjglwqnrny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431403.5764983-313-116504154641388/AnsiballZ_user.py'
Oct 02 18:56:44 compute-0 sudo[95078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:44 compute-0 python3.9[95080]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 18:56:44 compute-0 useradd[95082]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 18:56:44 compute-0 sudo[95078]: pam_unix(sudo:session): session closed for user root
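The getent/group/user tasks pin the libvirt account to UID/GID 42473, matching the groupadd and useradd records above. By hand this is approximately the following (flags inferred from the module arguments, so treat them as an assumption):

    # Create the libvirt group and user with the fixed id 42473.
    getent group libvirt >/dev/null || groupadd -g 42473 libvirt
    getent passwd libvirt >/dev/null || useradd -u 42473 -g libvirt \
        -c 'libvirt user' -s /sbin/nologin -m libvirt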
Oct 02 18:56:45 compute-0 sudo[95238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgflvrbkpdcrlfttjdaumwmwaacaepih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431404.9034884-324-42290993226670/AnsiballZ_setup.py'
Oct 02 18:56:45 compute-0 sudo[95238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:45 compute-0 python3.9[95240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:56:45 compute-0 sudo[95238]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:46 compute-0 sudo[95322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvodciszianjmdpdrmaivzohssorslhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431404.9034884-324-42290993226670/AnsiballZ_dnf.py'
Oct 02 18:56:46 compute-0 sudo[95322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:46 compute-0 python3.9[95324]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
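This dnf task is the long pole: it installs the libvirt daemons, QEMU/KVM, swtpm, EDK2 firmware and the Ceph client bits, and does not return until 18:58:45. Everything interleaved below (the SELinux policy reloads, the dnsmasq/clevis/ceph users created by package scriptlets, the sshd restart, man-db-cache-update) happens inside that transaction. Note the first four package names carry stray trailing spaces in the playbook, reproduced verbatim in the log; trimmed, the equivalent command is:

    # Install the libvirt/QEMU stack and supporting packages.
    dnf install -y libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram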
Oct 02 18:56:56 compute-0 podman[95336]: 2025-10-02 18:56:56.747210675 +0000 UTC m=+0.162354354 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 02 18:57:20 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:57:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:57:27 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 02 18:57:27 compute-0 podman[95550]: 2025-10-02 18:57:27.761836153 +0000 UTC m=+0.167267924 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 18:57:30 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:57:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:57:58 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 02 18:57:58 compute-0 podman[102977]: 2025-10-02 18:57:58.74713862 +0000 UTC m=+0.147820204 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 18:58:29 compute-0 podman[112358]: 2025-10-02 18:58:29.704822182 +0000 UTC m=+0.122182622 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true)
Oct 02 18:58:31 compute-0 kernel: SELinux:  Converting 2754 SID table entries...
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:58:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:58:32 compute-0 groupadd[112395]: group added to /etc/group: name=dnsmasq, GID=992
Oct 02 18:58:32 compute-0 groupadd[112395]: group added to /etc/gshadow: name=dnsmasq
Oct 02 18:58:32 compute-0 groupadd[112395]: new group: name=dnsmasq, GID=992
Oct 02 18:58:32 compute-0 useradd[112402]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 02 18:58:32 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:58:32 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 02 18:58:32 compute-0 dbus-broker-launch[761]: Noticed file-system modification, trigger reload.
Oct 02 18:58:33 compute-0 groupadd[112415]: group added to /etc/group: name=clevis, GID=991
Oct 02 18:58:33 compute-0 groupadd[112415]: group added to /etc/gshadow: name=clevis
Oct 02 18:58:33 compute-0 groupadd[112415]: new group: name=clevis, GID=991
Oct 02 18:58:33 compute-0 useradd[112422]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 02 18:58:33 compute-0 usermod[112432]: add 'clevis' to group 'tss'
Oct 02 18:58:33 compute-0 usermod[112432]: add 'clevis' to shadow group 'tss'
Oct 02 18:58:35 compute-0 polkitd[6325]: Reloading rules
Oct 02 18:58:35 compute-0 polkitd[6325]: Collecting garbage unconditionally...
Oct 02 18:58:35 compute-0 polkitd[6325]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 18:58:35 compute-0 polkitd[6325]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 18:58:35 compute-0 polkitd[6325]: Finished loading, compiling and executing 4 rules
Oct 02 18:58:35 compute-0 polkitd[6325]: Reloading rules
Oct 02 18:58:35 compute-0 polkitd[6325]: Collecting garbage unconditionally...
Oct 02 18:58:35 compute-0 polkitd[6325]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 18:58:35 compute-0 polkitd[6325]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 18:58:35 compute-0 polkitd[6325]: Finished loading, compiling and executing 4 rules
Oct 02 18:58:37 compute-0 groupadd[112619]: group added to /etc/group: name=ceph, GID=167
Oct 02 18:58:37 compute-0 groupadd[112619]: group added to /etc/gshadow: name=ceph
Oct 02 18:58:37 compute-0 groupadd[112619]: new group: name=ceph, GID=167
Oct 02 18:58:37 compute-0 useradd[112625]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 02 18:58:40 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 02 18:58:40 compute-0 sshd[1009]: Received signal 15; terminating.
Oct 02 18:58:40 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 02 18:58:40 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 02 18:58:40 compute-0 systemd[1]: sshd.service: Consumed 1.673s CPU time, read 0B from disk, written 8.0K to disk.
Oct 02 18:58:40 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 02 18:58:40 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 02 18:58:40 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:58:40 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:58:40 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:58:40 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 02 18:58:40 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 02 18:58:40 compute-0 sshd[113124]: Server listening on 0.0.0.0 port 22.
Oct 02 18:58:40 compute-0 sshd[113124]: Server listening on :: port 22.
Oct 02 18:58:40 compute-0 systemd[1]: Started OpenSSH server daemon.
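The sshd stop/start at 18:58:40 lands mid-transaction, so it is most likely an openssh package update restarting the daemon; the key-generation units are skipped because cloud-init.target is enabled and owns host-key creation on this image (per the ConditionPathExists lines above). A quick sanity check afterwards:

    # Confirm sshd came back up after the package-triggered restart.
    systemctl is-active sshd.service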
Oct 02 18:58:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:58:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:58:43 compute-0 systemd[1]: Reloading.
Oct 02 18:58:43 compute-0 systemd-sysv-generator[113381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:43 compute-0 systemd-rc-local-generator[113377]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:58:45 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 18:58:45 compute-0 PackageKit[115205]: daemon start
Oct 02 18:58:45 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 18:58:45 compute-0 sudo[95322]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:46 compute-0 sudo[116521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfmgeibqbeykwcpgsdzwqcudcnykdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431526.02932-336-155379737125174/AnsiballZ_systemd.py'
Oct 02 18:58:46 compute-0 sudo[116521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:47 compute-0 python3.9[116544]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:58:47 compute-0 systemd[1]: Reloading.
Oct 02 18:58:47 compute-0 systemd-rc-local-generator[116934]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:47 compute-0 systemd-sysv-generator[116940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:47 compute-0 sudo[116521]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:47 compute-0 sudo[117660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjigmweeltizybtkpfrlqddhkptcbusr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431527.606573-336-102291742395256/AnsiballZ_systemd.py'
Oct 02 18:58:47 compute-0 sudo[117660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:48 compute-0 python3.9[117681]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:58:48 compute-0 systemd[1]: Reloading.
Oct 02 18:58:48 compute-0 systemd-sysv-generator[118127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:48 compute-0 systemd-rc-local-generator[118122]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:48 compute-0 sudo[117660]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:49 compute-0 sudo[118869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfudbyztgvbztsifemwqeffxkayufkdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431528.7150092-336-3478626084840/AnsiballZ_systemd.py'
Oct 02 18:58:49 compute-0 sudo[118869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:49 compute-0 python3.9[118884]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:58:49 compute-0 systemd[1]: Reloading.
Oct 02 18:58:49 compute-0 systemd-rc-local-generator[119327]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:49 compute-0 systemd-sysv-generator[119331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:49 compute-0 sudo[118869]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:50 compute-0 sudo[120103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocxaxdnsmlroiohqpgolhxejzkzfbttn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431529.9111314-336-78121196567526/AnsiballZ_systemd.py'
Oct 02 18:58:50 compute-0 sudo[120103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:50 compute-0 python3.9[120118]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:58:50 compute-0 systemd[1]: Reloading.
Oct 02 18:58:50 compute-0 systemd-sysv-generator[120577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:50 compute-0 systemd-rc-local-generator[120572]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:50 compute-0 sudo[120103]: pam_unix(sudo:session): session closed for user root
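The four systemd tasks from 18:58:46 to 18:58:50 stop, disable and mask the monolithic libvirtd daemon, its TCP and TLS sockets, and virtproxyd's plain-TCP socket, clearing the way for the modular daemons enabled next. With masked=True on top of enabled=False/state=stopped, each task is roughly:

    # Mask the monolithic daemon and the unwanted sockets.
    for unit in libvirtd.service libvirtd-tcp.socket \
                libvirtd-tls.socket virtproxyd-tcp.socket; do
        systemctl disable --now "$unit"
        systemctl mask "$unit"
    done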
Oct 02 18:58:51 compute-0 sudo[121339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcsbenpagvwiqdpqhgugnthgcqkrnnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431531.1110444-365-266025648823821/AnsiballZ_systemd.py'
Oct 02 18:58:51 compute-0 sudo[121339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:51 compute-0 python3.9[121363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:51 compute-0 systemd[1]: Reloading.
Oct 02 18:58:51 compute-0 systemd-rc-local-generator[121756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:51 compute-0 systemd-sysv-generator[121763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:52 compute-0 sudo[121339]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:52 compute-0 sudo[122503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwerbtuhyzakvgmqlvraivbbnvbtwphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431532.2895172-365-204434862101336/AnsiballZ_systemd.py'
Oct 02 18:58:52 compute-0 sudo[122503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:52 compute-0 python3.9[122522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:52 compute-0 systemd[1]: Reloading.
Oct 02 18:58:53 compute-0 systemd-rc-local-generator[122753]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:53 compute-0 systemd-sysv-generator[122757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:58:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:58:53 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.540s CPU time.
Oct 02 18:58:53 compute-0 systemd[1]: run-r0bbc8d3e61ce46439436fa7798c0e18b.service: Deactivated successfully.
Oct 02 18:58:53 compute-0 sudo[122503]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:53 compute-0 sudo[122916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqkmbumkszecdtdmdshwzizmtphoari ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431533.4875393-365-80199931319271/AnsiballZ_systemd.py'
Oct 02 18:58:53 compute-0 sudo[122916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:54 compute-0 python3.9[122918]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:54 compute-0 systemd[1]: Reloading.
Oct 02 18:58:54 compute-0 systemd-rc-local-generator[122948]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:54 compute-0 systemd-sysv-generator[122951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:54 compute-0 sudo[122916]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:55 compute-0 sudo[123106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyiekmzaqtpabxibobnuiupztwvskcek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431534.8575706-365-171771689916447/AnsiballZ_systemd.py'
Oct 02 18:58:55 compute-0 sudo[123106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:55 compute-0 python3.9[123108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:55 compute-0 sudo[123106]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:56 compute-0 sudo[123261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazoacguqmihfughvxzneyjreivzzjtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431535.860682-365-112764909314433/AnsiballZ_systemd.py'
Oct 02 18:58:56 compute-0 sudo[123261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:56 compute-0 python3.9[123263]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:56 compute-0 systemd[1]: Reloading.
Oct 02 18:58:56 compute-0 systemd-rc-local-generator[123293]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:56 compute-0 systemd-sysv-generator[123296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:57 compute-0 sudo[123261]: pam_unix(sudo:session): session closed for user root
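[annotation] The same enable-only call (state=None in the logged arguments, so no start or stop was requested) then runs for virtnodedevd, virtproxyd, virtqemud and virtsecretd. A loop-form sketch equivalent to those five invocations, assuming the role iterates over unit names rather than writing one task per daemon:

  - name: Enable libvirt modular daemons  # assumed consolidation of five logged calls
    become: true
    ansible.builtin.systemd:
      name: "{{ item }}"
      enabled: true
      masked: false
    loop:
      - virtlogd.service
      - virtnodedevd.service
      - virtproxyd.service
      - virtqemud.service
      - virtsecretd.service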
Oct 02 18:58:57 compute-0 sudo[123451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwzabsygxaimnoopnhkysavxohjbwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431537.2742932-401-104129633695292/AnsiballZ_systemd.py'
Oct 02 18:58:57 compute-0 sudo[123451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:57 compute-0 python3.9[123453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:58:58 compute-0 systemd[1]: Reloading.
Oct 02 18:58:58 compute-0 systemd-sysv-generator[123485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:58:58 compute-0 systemd-rc-local-generator[123481]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:58:58 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 02 18:58:58 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 02 18:58:58 compute-0 sudo[123451]: pam_unix(sudo:session): session closed for user root
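[annotation] virtproxyd-tls.socket is the one unit that is also started in place (state=started), and systemd acknowledges it with the two "Listening on ..." lines for the libvirt proxy daemon socket and its TLS IP socket. Sketch of the task as logged (name assumed):

  - name: Enable and start the virtproxyd TLS socket  # assumed task name
    become: true
    ansible.builtin.systemd:
      name: virtproxyd-tls.socket
      enabled: true
      masked: false
      state: started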
Oct 02 18:58:59 compute-0 sudo[123643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdaorgxhqfdfnzxbjhloyksljahtxprb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431538.6585886-409-169718382228675/AnsiballZ_systemd.py'
Oct 02 18:58:59 compute-0 sudo[123643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:59 compute-0 python3.9[123645]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:58:59 compute-0 sudo[123643]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:00 compute-0 sudo[123809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llkrjvdeucuoxpdetowufypbxclrefvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431539.726818-409-198655730490364/AnsiballZ_systemd.py'
Oct 02 18:59:00 compute-0 sudo[123809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:00 compute-0 podman[123772]: 2025-10-02 18:59:00.273781898 +0000 UTC m=+0.207798102 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct 02 18:59:00 compute-0 python3.9[123817]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:00 compute-0 sudo[123809]: pam_unix(sudo:session): session closed for user root
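[annotation] Interleaved with these tasks, podman logs the periodic health check of the ovn_controller container (health_status=healthy, failing streak 0); the next such entry in this section appears 30 seconds later. The config_data blob is the edpm_ansible-managed container definition; re-rendered as YAML and trimmed to the keys visible in the log line (volumes elided):

  ovn_controller:
    image: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
    net: host
    privileged: true
    restart: always
    user: root
    depends_on:
      - openvswitch.service
    environment:
      KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
    healthcheck:
      mount: /var/lib/openstack/healthchecks/ovn_controller
      test: /openstack/healthcheck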
Oct 02 18:59:01 compute-0 sudo[123977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpoitseatzcamvnbjbxozkqczyqlpnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431540.7787235-409-175616428487026/AnsiballZ_systemd.py'
Oct 02 18:59:01 compute-0 sudo[123977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:01 compute-0 python3.9[123979]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:01 compute-0 sudo[123977]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:02 compute-0 sudo[124132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytlopcisqypbvsscsxggfwwyhdrebbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431541.759881-409-47332261106460/AnsiballZ_systemd.py'
Oct 02 18:59:02 compute-0 sudo[124132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:02 compute-0 python3.9[124134]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:03 compute-0 sudo[124132]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:04 compute-0 sudo[124287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skeuwsgssotpevtutofixrrflathmagk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431543.8328943-409-245027571386686/AnsiballZ_systemd.py'
Oct 02 18:59:04 compute-0 sudo[124287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:04 compute-0 python3.9[124289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:04 compute-0 sudo[124287]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:05 compute-0 sudo[124442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxayuhkllsojbiazqyblnhprazljfydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431544.8299909-409-18956276227832/AnsiballZ_systemd.py'
Oct 02 18:59:05 compute-0 sudo[124442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:05 compute-0 python3.9[124444]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:05 compute-0 sudo[124442]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:06 compute-0 sudo[124597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgsngdwdlqxixqhbwxvqtvaizadxmwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431545.809645-409-230675039214000/AnsiballZ_systemd.py'
Oct 02 18:59:06 compute-0 sudo[124597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:06 compute-0 python3.9[124599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:06 compute-0 sudo[124597]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:07 compute-0 sudo[124752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgdjpctylqyrndwgijnqjpsbbqxmwkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431546.8318474-409-159791293959364/AnsiballZ_systemd.py'
Oct 02 18:59:07 compute-0 sudo[124752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:07 compute-0 python3.9[124754]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:07 compute-0 sudo[124752]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:08 compute-0 sudo[124907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonpxeyprpfagrclfcsxitepjqmqhkvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431547.9159956-409-203369547928794/AnsiballZ_systemd.py'
Oct 02 18:59:08 compute-0 sudo[124907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:08 compute-0 python3.9[124909]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:08 compute-0 sudo[124907]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:09 compute-0 sudo[125062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aghdlxdnnfvjfywghxrnhwjnwiifwkpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431548.952888-409-183171679863239/AnsiballZ_systemd.py'
Oct 02 18:59:09 compute-0 sudo[125062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:09 compute-0 python3.9[125064]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:09 compute-0 sudo[125062]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:10 compute-0 sudo[125217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvocwvopaqczqshnqeohuusvnkxvlvfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431549.9374087-409-15011223610297/AnsiballZ_systemd.py'
Oct 02 18:59:10 compute-0 sudo[125217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:10 compute-0 python3.9[125219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:10 compute-0 sudo[125217]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:11 compute-0 sudo[125372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdiepdoapbemofbeffpllssehwidzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431551.0093062-409-26886247267769/AnsiballZ_systemd.py'
Oct 02 18:59:11 compute-0 sudo[125372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:11 compute-0 python3.9[125374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:11 compute-0 sudo[125372]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:12 compute-0 sudo[125527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddacycmtsvekuwvxxvdoalehlrnsfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431552.080714-409-230487592710353/AnsiballZ_systemd.py'
Oct 02 18:59:12 compute-0 sudo[125527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:12 compute-0 python3.9[125529]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:12 compute-0 sudo[125527]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:13 compute-0 sudo[125682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvjejmzkvgcgjccijhgguoidmobgzqek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431552.9834418-409-102659204215611/AnsiballZ_systemd.py'
Oct 02 18:59:13 compute-0 sudo[125682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:13 compute-0 python3.9[125684]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 18:59:13 compute-0 sudo[125682]: pam_unix(sudo:session): session closed for user root
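[annotation] After virtlogd.socket and virtlogd-admin.socket, the run enables the full socket triad (main, -ro, -admin) for each of the other four daemons, one ansible.builtin.systemd call per unit. A sketch that collapses those twelve calls into a single loop (consolidation assumed; the log shows individual tasks):

  - name: Enable libvirt daemon sockets  # assumed task name
    become: true
    ansible.builtin.systemd:
      name: "virt{{ item.0 }}d{{ item.1 }}.socket"
      enabled: true
      masked: false
    loop: "{{ ['nodedev', 'proxy', 'qemu', 'secret'] | product(['', '-ro', '-admin']) | list }}"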
Oct 02 18:59:14 compute-0 sudo[125837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwaecmeabbczgjaigkcfxnwkzgzlhqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431554.1927023-511-255594735544109/AnsiballZ_file.py'
Oct 02 18:59:14 compute-0 sudo[125837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:14 compute-0 python3.9[125839]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:14 compute-0 sudo[125837]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:15 compute-0 sudo[125989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itlypoaiqdeccrcncuqzuexfsouzzags ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431554.978901-511-211483176432731/AnsiballZ_file.py'
Oct 02 18:59:15 compute-0 sudo[125989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:15 compute-0 python3.9[125991]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:15 compute-0 sudo[125989]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:16 compute-0 sudo[126141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmaszoreuuxafzcaldafhqxqngvlqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431555.7701764-511-16165433041874/AnsiballZ_file.py'
Oct 02 18:59:16 compute-0 sudo[126141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:16 compute-0 python3.9[126143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:16 compute-0 sudo[126141]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:16 compute-0 sudo[126293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkuphzbhwrixylqqifkxltphxrruiljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431556.4616604-511-203284107159749/AnsiballZ_file.py'
Oct 02 18:59:16 compute-0 sudo[126293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:17 compute-0 python3.9[126295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:17 compute-0 sudo[126293]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:17 compute-0 sudo[126445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchnxgysxplxsztaduvhfrybkygdtxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431557.36706-511-8988080069868/AnsiballZ_file.py'
Oct 02 18:59:17 compute-0 sudo[126445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:17 compute-0 python3.9[126447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:17 compute-0 sudo[126445]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:18 compute-0 sudo[126597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfarleevofpvcdsepqorvcavgzbycsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431558.0475583-511-88164150372325/AnsiballZ_file.py'
Oct 02 18:59:18 compute-0 sudo[126597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:18 compute-0 python3.9[126599]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:18 compute-0 sudo[126597]: pam_unix(sudo:session): session closed for user root
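[annotation] The run then switches to ansible.builtin.file, creating the config and PKI directory tree with SELinux type container_file_t so the containerized services can use it; /etc/pki/qemu is the only directory group-owned by qemu, and the PKI directories get an explicit 0755. Loop sketch over the logged arguments (task name assumed):

  - name: Create libvirt config and PKI directories  # assumed task name
    become: true
    ansible.builtin.file:
      path: "{{ item.path }}"
      state: directory
      owner: root
      group: "{{ item.group | default('root') }}"
      mode: "{{ item.mode | default(omit) }}"
      setype: container_file_t
    loop:
      - { path: /etc/tmpfiles.d }
      - { path: /var/lib/edpm-config/firewall }
      - { path: /etc/pki/libvirt, mode: '0755' }
      - { path: /etc/pki/libvirt/private, mode: '0755' }
      - { path: /etc/pki/CA, mode: '0755' }
      - { path: /etc/pki/qemu, group: qemu }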
Oct 02 18:59:19 compute-0 sudo[126749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjmhyxioxrmoelzownlsjooslnkenbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431558.8340023-554-226980757007488/AnsiballZ_stat.py'
Oct 02 18:59:19 compute-0 sudo[126749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:19 compute-0 python3.9[126751]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:19 compute-0 sudo[126749]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:20 compute-0 sudo[126874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezhsxsidvmaxnqqhsugokhynvaquvatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431558.8340023-554-226980757007488/AnsiballZ_copy.py'
Oct 02 18:59:20 compute-0 sudo[126874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:20 compute-0 python3.9[126876]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431558.8340023-554-226980757007488/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:20 compute-0 sudo[126874]: pam_unix(sudo:session): session closed for user root
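[annotation] Each configuration file lands through Ansible's two-phase copy: ansible.legacy.stat checksums the destination first, and ansible.legacy.copy ships the staged .source.conf only when the sha1 differs. content=NOT_LOGGING_PARAMETER is Ansible redacting the file body from the log, not a literal argument. Sketch of the virtlogd.conf task as logged (name assumed):

  - name: Install virtlogd.conf  # assumed task name
    become: true
    ansible.builtin.copy:
      src: virtlogd.conf
      dest: /etc/libvirt/virtlogd.conf
      owner: libvirt
      group: libvirt
      mode: '0640'

Note that virtnodedevd.conf, virtqemud.conf and virtsecretd.conf below all report checksum 7a604468adb2868f1ab6ebd0fd4622286e6373e2, so those three daemons receive byte-identical configuration.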
Oct 02 18:59:21 compute-0 sudo[127026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelstaaugxxrvjnbxeffzowzshekdans ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431560.7317321-554-266482462363709/AnsiballZ_stat.py'
Oct 02 18:59:21 compute-0 sudo[127026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:21 compute-0 python3.9[127028]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:21 compute-0 sudo[127026]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:21 compute-0 sudo[127151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcxbiwsofhrojxjxsxcwuwzvetdetopg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431560.7317321-554-266482462363709/AnsiballZ_copy.py'
Oct 02 18:59:21 compute-0 sudo[127151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:21 compute-0 python3.9[127153]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431560.7317321-554-266482462363709/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:21 compute-0 sudo[127151]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:22 compute-0 sudo[127303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsndrdrvxdquxqjincesnrpylfkivog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431562.1541498-554-65189546550301/AnsiballZ_stat.py'
Oct 02 18:59:22 compute-0 sudo[127303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:22 compute-0 python3.9[127305]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:22 compute-0 sudo[127303]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:23 compute-0 sudo[127428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqiwyeidapwwjkadvksutegpftsisiqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431562.1541498-554-65189546550301/AnsiballZ_copy.py'
Oct 02 18:59:23 compute-0 sudo[127428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:23 compute-0 python3.9[127430]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431562.1541498-554-65189546550301/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:23 compute-0 sudo[127428]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:23 compute-0 sudo[127580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugwpoenxvvcoeabobokiaybngblpcfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431563.5593994-554-175811226543388/AnsiballZ_stat.py'
Oct 02 18:59:23 compute-0 sudo[127580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:24 compute-0 python3.9[127582]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:24 compute-0 sudo[127580]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:24 compute-0 sudo[127705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzimsbywmzorrvtnwjkunuukqxqprtaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431563.5593994-554-175811226543388/AnsiballZ_copy.py'
Oct 02 18:59:24 compute-0 sudo[127705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:24 compute-0 python3.9[127707]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431563.5593994-554-175811226543388/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:24 compute-0 sudo[127705]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:25 compute-0 sudo[127857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvjjezuqzyvxqnyyydaecfvghqnrrnoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431565.024854-554-38995716488632/AnsiballZ_stat.py'
Oct 02 18:59:25 compute-0 sudo[127857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:25 compute-0 python3.9[127859]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:25 compute-0 sudo[127857]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:26 compute-0 sudo[127982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfuedztlcvgrzhncffkseyzqrzssvxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431565.024854-554-38995716488632/AnsiballZ_copy.py'
Oct 02 18:59:26 compute-0 sudo[127982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:26 compute-0 python3.9[127984]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431565.024854-554-38995716488632/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:26 compute-0 sudo[127982]: pam_unix(sudo:session): session closed for user root
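[annotation] qemu.conf differs in one detail: its _original_basename is qemu.conf.j2, meaning the controller rendered a Jinja2 template before shipping the result, so this step is a template task rather than a plain copy. Sketch (template source path assumed):

  - name: Render qemu.conf from template  # assumed task name
    become: true
    ansible.builtin.template:
      src: qemu.conf.j2
      dest: /etc/libvirt/qemu.conf
      owner: libvirt
      group: libvirt
      mode: '0640'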
Oct 02 18:59:27 compute-0 sudo[128134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koeulxyerdqecqgweheiorsuvrhubcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431566.7074273-554-224294381560392/AnsiballZ_stat.py'
Oct 02 18:59:27 compute-0 sudo[128134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:27 compute-0 python3.9[128136]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:27 compute-0 sudo[128134]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:27 compute-0 sudo[128259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahovltetgvjrynimomskxiativwrskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431566.7074273-554-224294381560392/AnsiballZ_copy.py'
Oct 02 18:59:27 compute-0 sudo[128259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:28 compute-0 python3.9[128261]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431566.7074273-554-224294381560392/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:28 compute-0 sudo[128259]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:28 compute-0 sudo[128411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgsxlopyrqnsymkvbldghdanweauvnmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431568.3612537-554-175388488499045/AnsiballZ_stat.py'
Oct 02 18:59:28 compute-0 sudo[128411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:28 compute-0 python3.9[128413]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:28 compute-0 sudo[128411]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:29 compute-0 sudo[128534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knkjlaglehespaargygccpepflyekypm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431568.3612537-554-175388488499045/AnsiballZ_copy.py'
Oct 02 18:59:29 compute-0 sudo[128534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:29 compute-0 python3.9[128536]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431568.3612537-554-175388488499045/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:29 compute-0 sudo[128534]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:30 compute-0 sudo[128697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvmjyulncdtmydcutsyrclcwzkxaxome ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431570.0539393-554-207259236645307/AnsiballZ_stat.py'
Oct 02 18:59:30 compute-0 sudo[128697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:30 compute-0 podman[128660]: 2025-10-02 18:59:30.517421255 +0000 UTC m=+0.137313524 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct 02 18:59:30 compute-0 python3.9[128706]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:30 compute-0 sudo[128697]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:31 compute-0 sudo[128835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszfbqjzpepfmwwivxaaledyheaewunm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431570.0539393-554-207259236645307/AnsiballZ_copy.py'
Oct 02 18:59:31 compute-0 sudo[128835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:31 compute-0 python3.9[128837]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431570.0539393-554-207259236645307/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:31 compute-0 sudo[128835]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:31 compute-0 sudo[128987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfdczcuvaksmqkjzwmwlhmaazdnjdnbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431571.590416-667-195601372591332/AnsiballZ_command.py'
Oct 02 18:59:31 compute-0 sudo[128987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:32 compute-0 python3.9[128989]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 18:59:32 compute-0 sudo[128987]: pam_unix(sudo:session): session closed for user root
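[annotation] With the configs in place, saslpasswd2 registers the SASL user migration (realm openstack, appname libvirt) in /etc/libvirt/passwd.db, matching the sasl2/libvirt.conf installed just above. Note that the module arguments, including stdin=12345678 (the password), are journaled verbatim; no_log: true on the task would keep the secret out of syslog. Sketch as logged, with that hardening added as a suggestion:

  - name: Set libvirt SASL migration credential  # assumed task name
    become: true
    no_log: true  # suggested; the logged run clearly did not set this
    ansible.builtin.command:
      cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
      stdin: "{{ migration_password }}"  # assumed variable; the logged value was 12345678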
Oct 02 18:59:32 compute-0 sudo[129140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdodnypvtjhdrjusbuwqmhqledcjivgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431572.3745325-676-179062510949490/AnsiballZ_file.py'
Oct 02 18:59:32 compute-0 sudo[129140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:32 compute-0 python3.9[129142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:33 compute-0 sudo[129140]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:33 compute-0 sudo[129292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbuhkfqinnyqgcvrjzfsztncpkvellks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431573.1668706-676-42287480994832/AnsiballZ_file.py'
Oct 02 18:59:33 compute-0 sudo[129292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:33 compute-0 python3.9[129294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:33 compute-0 sudo[129292]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:34 compute-0 sudo[129444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oizunltwlaletyuwuxcvimosaxovbkbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431574.130869-676-42383529966560/AnsiballZ_file.py'
Oct 02 18:59:34 compute-0 sudo[129444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:34 compute-0 python3.9[129446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:34 compute-0 sudo[129444]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:35 compute-0 sudo[129596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bchgepjdkydqoclsimrfjohvgdelljsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431574.9593842-676-76449498474639/AnsiballZ_file.py'
Oct 02 18:59:35 compute-0 sudo[129596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:35 compute-0 python3.9[129598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:35 compute-0 sudo[129596]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:36 compute-0 sudo[129748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inpdkyyqbvwdaashotqmqhguqgrstpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431575.801727-676-137904129172735/AnsiballZ_file.py'
Oct 02 18:59:36 compute-0 sudo[129748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:36 compute-0 python3.9[129750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:36 compute-0 sudo[129748]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:36 compute-0 sudo[129900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upeiqmhpiayhzdcaktmjeofqroaczzla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431576.5978444-676-244916979818913/AnsiballZ_file.py'
Oct 02 18:59:36 compute-0 sudo[129900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:37 compute-0 python3.9[129902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:37 compute-0 sudo[129900]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:37 compute-0 sudo[130052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epyrdszgtsdomymerrdloqyshsgmnlhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431577.4159923-676-33477937081277/AnsiballZ_file.py'
Oct 02 18:59:37 compute-0 sudo[130052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:38 compute-0 python3.9[130054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:38 compute-0 sudo[130052]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:38 compute-0 sudo[130204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blnzejufeqvbvsukzrvbxdmrbytejarv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431578.3241506-676-240397837117247/AnsiballZ_file.py'
Oct 02 18:59:38 compute-0 sudo[130204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:38 compute-0 python3.9[130206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:38 compute-0 sudo[130204]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:39 compute-0 sudo[130356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydmpnfqxtldouvmnkaymcihwqpgjcbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431579.0608025-676-61331488828830/AnsiballZ_file.py'
Oct 02 18:59:39 compute-0 sudo[130356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:39 compute-0 python3.9[130358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:39 compute-0 sudo[130356]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:40 compute-0 sudo[130508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmhjklouoypncpdihohldsjekbymszw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431579.8015487-676-49948785240673/AnsiballZ_file.py'
Oct 02 18:59:40 compute-0 sudo[130508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:40 compute-0 python3.9[130510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:40 compute-0 sudo[130508]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:40 compute-0 sudo[130660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teijdulxbgfupjssvddltuhjemgmnngr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431580.6011002-676-237240120892604/AnsiballZ_file.py'
Oct 02 18:59:40 compute-0 sudo[130660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:41 compute-0 python3.9[130662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:41 compute-0 sudo[130660]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:41 compute-0 sudo[130812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzktaohntqsluaoqujwjmqhcedwpxtzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431581.3731737-676-47372443908116/AnsiballZ_file.py'
Oct 02 18:59:41 compute-0 sudo[130812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:41 compute-0 python3.9[130814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:42 compute-0 sudo[130812]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:42 compute-0 sudo[130964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsfcjzdbpeenmasrrpptrbeiwchromnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431582.1920187-676-192767106461073/AnsiballZ_file.py'
Oct 02 18:59:42 compute-0 sudo[130964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:42 compute-0 python3.9[130966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:42 compute-0 sudo[130964]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:43 compute-0 sudo[131116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwnpmrfrpcgrqxkrkfgaaodwduifehep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431582.9390767-676-101486101130832/AnsiballZ_file.py'
Oct 02 18:59:43 compute-0 sudo[131116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:43 compute-0 python3.9[131118]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:43 compute-0 sudo[131116]: pam_unix(sudo:session): session closed for user root
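[annotation] Next, one root-owned 0755 drop-in directory per socket unit is created under /etc/systemd/system, ready to receive systemd override files: virtlogd and virtlogd-admin, then the main/-ro/-admin triad for each of the other four daemons. Loop sketch over the fourteen directories actually created (task name assumed):

  - name: Create socket drop-in directories  # assumed task name
    become: true
    ansible.builtin.file:
      path: "/etc/systemd/system/{{ item }}.d"
      state: directory
      owner: root
      group: root
      mode: '0755'
    loop:
      - virtlogd.socket
      - virtlogd-admin.socket
      - virtnodedevd.socket
      - virtnodedevd-ro.socket
      - virtnodedevd-admin.socket
      - virtproxyd.socket
      - virtproxyd-ro.socket
      - virtproxyd-admin.socket
      - virtqemud.socket
      - virtqemud-ro.socket
      - virtqemud-admin.socket
      - virtsecretd.socket
      - virtsecretd-ro.socket
      - virtsecretd-admin.socket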
Oct 02 18:59:44 compute-0 sudo[131268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnqlzqdmetzlinbmtjzlemwyhbxudlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431583.8301256-775-55514015847781/AnsiballZ_stat.py'
Oct 02 18:59:44 compute-0 sudo[131268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:44 compute-0 python3.9[131270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:44 compute-0 sudo[131268]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:44 compute-0 sudo[131391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heqzzbcqvdkhdnqgqypsjxxevzpbjuoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431583.8301256-775-55514015847781/AnsiballZ_copy.py'
Oct 02 18:59:44 compute-0 sudo[131391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:45 compute-0 python3.9[131393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431583.8301256-775-55514015847781/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:45 compute-0 sudo[131391]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:45 compute-0 sudo[131543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsilwbkguncbyoudfoxqavrqljezwrme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431585.236351-775-190688172334834/AnsiballZ_stat.py'
Oct 02 18:59:45 compute-0 sudo[131543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:45 compute-0 python3.9[131545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:45 compute-0 sudo[131543]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:46 compute-0 sudo[131666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvpwcgqpwxxpmocqqvcqlkxdltdxyxpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431585.236351-775-190688172334834/AnsiballZ_copy.py'
Oct 02 18:59:46 compute-0 sudo[131666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:46 compute-0 python3.9[131668]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431585.236351-775-190688172334834/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:46 compute-0 sudo[131666]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:47 compute-0 sudo[131818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntljlxbtirqcasbgknywypvahidjeqvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431586.738022-775-211839638434201/AnsiballZ_stat.py'
Oct 02 18:59:47 compute-0 sudo[131818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:47 compute-0 python3.9[131820]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:47 compute-0 sudo[131818]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:47 compute-0 sudo[131941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnsnetqytfrvexbupvtwfpwriydkipkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431586.738022-775-211839638434201/AnsiballZ_copy.py'
Oct 02 18:59:47 compute-0 sudo[131941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:48 compute-0 python3.9[131943]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431586.738022-775-211839638434201/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:48 compute-0 sudo[131941]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:48 compute-0 sudo[132093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aafwdrsgeelytgfwqgwqidhvyxpqdvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431588.3231936-775-6070862934257/AnsiballZ_stat.py'
Oct 02 18:59:48 compute-0 sudo[132093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:48 compute-0 python3.9[132095]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:48 compute-0 sudo[132093]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:49 compute-0 sudo[132216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxqaaxxtmzrhoeguocbzggvakbeoitxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431588.3231936-775-6070862934257/AnsiballZ_copy.py'
Oct 02 18:59:49 compute-0 sudo[132216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:49 compute-0 python3.9[132218]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431588.3231936-775-6070862934257/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:49 compute-0 sudo[132216]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:50 compute-0 sudo[132368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfjgwtapbgdpsqqimcuctlzmnzxcfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431589.6908336-775-23966732016884/AnsiballZ_stat.py'
Oct 02 18:59:50 compute-0 sudo[132368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:50 compute-0 python3.9[132370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:50 compute-0 sudo[132368]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:50 compute-0 sudo[132491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqfgtmaahzqvdiwhvpqxtdfirkavxzlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431589.6908336-775-23966732016884/AnsiballZ_copy.py'
Oct 02 18:59:50 compute-0 sudo[132491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:50 compute-0 python3.9[132493]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431589.6908336-775-23966732016884/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:50 compute-0 sudo[132491]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:51 compute-0 sudo[132643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkokbjyyhmwevwffsaiifjbdfuvlhucy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431591.0228372-775-162505111355482/AnsiballZ_stat.py'
Oct 02 18:59:51 compute-0 sudo[132643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:51 compute-0 python3.9[132645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:51 compute-0 sudo[132643]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:51 compute-0 sudo[132766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdnuxctlzhfeathfpnmbgmqdpubfyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431591.0228372-775-162505111355482/AnsiballZ_copy.py'
Oct 02 18:59:51 compute-0 sudo[132766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:52 compute-0 python3.9[132768]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431591.0228372-775-162505111355482/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:52 compute-0 sudo[132766]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:52 compute-0 sudo[132918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavasvnxkwbwvrkvrkwupdmdxhemzgbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431592.3583186-775-19279402509605/AnsiballZ_stat.py'
Oct 02 18:59:52 compute-0 sudo[132918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:52 compute-0 python3.9[132920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:52 compute-0 sudo[132918]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:53 compute-0 sudo[133041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zassghfetqpikirklekgcjuthhcxzwxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431592.3583186-775-19279402509605/AnsiballZ_copy.py'
Oct 02 18:59:53 compute-0 sudo[133041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:53 compute-0 python3.9[133043]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431592.3583186-775-19279402509605/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:53 compute-0 sudo[133041]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:54 compute-0 sudo[133193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xerjeyeqbjmglrjqzpwyhmmwrmkbgpas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431593.8544698-775-11671094748093/AnsiballZ_stat.py'
Oct 02 18:59:54 compute-0 sudo[133193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:54 compute-0 python3.9[133195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:54 compute-0 sudo[133193]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:54 compute-0 sudo[133316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jadijjkrbpwhspbsgtnijuleyiasjree ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431593.8544698-775-11671094748093/AnsiballZ_copy.py'
Oct 02 18:59:54 compute-0 sudo[133316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:55 compute-0 python3.9[133318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431593.8544698-775-11671094748093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:55 compute-0 sudo[133316]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:55 compute-0 sudo[133468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hflhjfatrplgbruxfrfagkeiqyazrjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431595.2050612-775-77907430763965/AnsiballZ_stat.py'
Oct 02 18:59:55 compute-0 sudo[133468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:55 compute-0 python3.9[133470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:55 compute-0 sudo[133468]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:56 compute-0 sudo[133591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duttikyvnubsickegxkzflemqlcvywhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431595.2050612-775-77907430763965/AnsiballZ_copy.py'
Oct 02 18:59:56 compute-0 sudo[133591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:56 compute-0 python3.9[133593]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431595.2050612-775-77907430763965/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:56 compute-0 sudo[133591]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:57 compute-0 sudo[133743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnjaadxffpzesucxgrhbnfzromzrjvye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431596.611822-775-160052167108629/AnsiballZ_stat.py'
Oct 02 18:59:57 compute-0 sudo[133743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:57 compute-0 python3.9[133745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:57 compute-0 sudo[133743]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:57 compute-0 sudo[133866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxfitvqwwavhtcwialgeurasappgihc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431596.611822-775-160052167108629/AnsiballZ_copy.py'
Oct 02 18:59:57 compute-0 sudo[133866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:57 compute-0 python3.9[133868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431596.611822-775-160052167108629/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:58 compute-0 sudo[133866]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:58 compute-0 sudo[134018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojilqiafglypkjaonhrnivwqkdoyvfbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431598.172955-775-229531231681564/AnsiballZ_stat.py'
Oct 02 18:59:58 compute-0 sudo[134018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:58 compute-0 python3.9[134020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:58 compute-0 sudo[134018]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:59 compute-0 sudo[134141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkczerqogjkjialszfqenbyzjgdaetm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431598.172955-775-229531231681564/AnsiballZ_copy.py'
Oct 02 18:59:59 compute-0 sudo[134141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:59 compute-0 python3.9[134143]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431598.172955-775-229531231681564/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:59 compute-0 sudo[134141]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:00 compute-0 sudo[134293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixdmlfedrzczywuupzuvviobatbiqhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431599.7417903-775-149795386832133/AnsiballZ_stat.py'
Oct 02 19:00:00 compute-0 sudo[134293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:00 compute-0 python3.9[134295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:00 compute-0 sudo[134293]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:00 compute-0 podman[134347]: 2025-10-02 19:00:00.708934804 +0000 UTC m=+0.134477161 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
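The podman health_status event above is the scheduled run of the container's configured test (/openstack/healthcheck, mounted from /var/lib/openstack/healthchecks/ovn_controller). The same probe can be triggered on demand; exit status 0 corresponds to the healthy state recorded here:

    # Run the configured healthcheck once; exits 0 when the container is healthy
    podman healthcheck run ovn_controller && echo 'ovn_controller healthy'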
Oct 02 19:00:00 compute-0 sudo[134444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cagryiinqgfgqachzwyolfwsbccobcqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431599.7417903-775-149795386832133/AnsiballZ_copy.py'
Oct 02 19:00:00 compute-0 sudo[134444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:01 compute-0 python3.9[134446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431599.7417903-775-149795386832133/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:01 compute-0 sudo[134444]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:01 compute-0 sudo[134596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijscmjdtjeuzayxhnlzpndkpzcalwjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431601.422016-775-44570933099549/AnsiballZ_stat.py'
Oct 02 19:00:01 compute-0 sudo[134596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:01 compute-0 python3.9[134598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:02 compute-0 sudo[134596]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:02 compute-0 sudo[134719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rumhsukoloicyonhklztfonmcmshsxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431601.422016-775-44570933099549/AnsiballZ_copy.py'
Oct 02 19:00:02 compute-0 sudo[134719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:02 compute-0 python3.9[134721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431601.422016-775-44570933099549/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:02 compute-0 sudo[134719]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:03 compute-0 sudo[134871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozhuyfnllvlucdqqpkdqwjshugyrzwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431602.8440888-775-45314709366894/AnsiballZ_stat.py'
Oct 02 19:00:03 compute-0 sudo[134871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:03 compute-0 python3.9[134873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:03 compute-0 sudo[134871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:03 compute-0 sudo[134994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfavmbnevnpeyekwoazkpdqyzflvsbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431602.8440888-775-45314709366894/AnsiballZ_copy.py'
Oct 02 19:00:03 compute-0 sudo[134994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:04 compute-0 python3.9[134996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431602.8440888-775-45314709366894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:04 compute-0 sudo[134994]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:04 compute-0 python3.9[135146]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
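The pipeline above scans /run/libvirt for files still labeled with a container_*_t SELinux type, presumably to confirm that no contexts linger from a previously containerized libvirt. Run interactively, grep's exit status tells the two cases apart:

    set -o pipefail
    # grep exits non-zero when no container context matches, which is the clean state here
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t' || echo 'no container contexts found'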
Oct 02 19:00:05 compute-0 sudo[135299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbelexaqytisbcpwdpuxwvmaerdvvcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431605.3009393-981-208768849891267/AnsiballZ_seboolean.py'
Oct 02 19:00:05 compute-0 sudo[135299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:06 compute-0 python3.9[135301]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 19:00:07 compute-0 sudo[135299]: pam_unix(sudo:session): session closed for user root
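The seboolean task above is the module form of a persistent setsebool call; os_enable_vtpm is the SELinux boolean that vTPM-backed instances rely on. The command-line equivalent:

    # Persistently enable the boolean, then confirm it took effect
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expected output: os_enable_vtpm --> on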
Oct 02 19:00:07 compute-0 sudo[135455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhfheqgseojndfxqynbdbucasehphnor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431607.5231342-989-43912090293908/AnsiballZ_copy.py'
Oct 02 19:00:07 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 02 19:00:07 compute-0 sudo[135455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:08 compute-0 python3.9[135457]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:08 compute-0 sudo[135455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:08 compute-0 sudo[135607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmscynawagzdnjulhfxwbnhachasbmah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431608.3682067-989-205450456877026/AnsiballZ_copy.py'
Oct 02 19:00:08 compute-0 sudo[135607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:08 compute-0 python3.9[135609]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:08 compute-0 sudo[135607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:09 compute-0 sudo[135759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigljmzmjsoxraczgrgyrhgwzybvxhre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431609.1489873-989-280794856689250/AnsiballZ_copy.py'
Oct 02 19:00:09 compute-0 sudo[135759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:09 compute-0 python3.9[135761]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:09 compute-0 sudo[135759]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:10 compute-0 sudo[135911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvizsgppfjrdgbxuzemzsvrciniiick ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431609.8518963-989-214142636419549/AnsiballZ_copy.py'
Oct 02 19:00:10 compute-0 sudo[135911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:10 compute-0 python3.9[135913]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:10 compute-0 sudo[135911]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:11 compute-0 sudo[136063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslzfiihmkudusysizmkgrhkebqowqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431610.6796737-989-75442207402833/AnsiballZ_copy.py'
Oct 02 19:00:11 compute-0 sudo[136063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:11 compute-0 python3.9[136065]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:11 compute-0 sudo[136063]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:11 compute-0 sudo[136215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvcmyavtqhszassmeskklyhkxbtbnxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431611.4964812-1025-21087922433543/AnsiballZ_copy.py'
Oct 02 19:00:11 compute-0 sudo[136215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:12 compute-0 python3.9[136217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:12 compute-0 sudo[136215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:12 compute-0 sudo[136367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhtlxdackwxuzaafruuzfyyhnvgsbjum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431612.246787-1025-112446141292554/AnsiballZ_copy.py'
Oct 02 19:00:12 compute-0 sudo[136367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:12 compute-0 python3.9[136369]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:12 compute-0 sudo[136367]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:13 compute-0 sudo[136519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uotvzpahdxzahrwzufjkbycjujjwntka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431613.081829-1025-205939318296604/AnsiballZ_copy.py'
Oct 02 19:00:13 compute-0 sudo[136519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:13 compute-0 python3.9[136521]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:13 compute-0 sudo[136519]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:14 compute-0 sudo[136671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljhqsbcrirdrzzixpwygcpsyoxwlaysf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431613.8440964-1025-233521254722447/AnsiballZ_copy.py'
Oct 02 19:00:14 compute-0 sudo[136671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:14 compute-0 python3.9[136673]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:14 compute-0 sudo[136671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:15 compute-0 sudo[136823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvplopjjnwyrqmrzmvpqmsuudbircep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431614.7275386-1025-121048129103344/AnsiballZ_copy.py'
Oct 02 19:00:15 compute-0 sudo[136823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:15 compute-0 python3.9[136825]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:15 compute-0 sudo[136823]: pam_unix(sudo:session): session closed for user root
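The copy tasks above fan a single issued key pair out to the locations libvirt and QEMU expect; note that the same tls.crt is installed as both servercert.pem and clientcert.pem, and likewise for the key. A quick audit of the result (the openssl step is an added check, not part of this run):

    # Certificates should chain to the CA installed at /etc/pki/CA/cacert.pem
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    # Key permissions as deployed: 0600 for libvirt's server key, 0640 root:qemu for QEMU's
    ls -l /etc/pki/libvirt/private/serverkey.pem /etc/pki/qemu/server-key.pem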
Oct 02 19:00:15 compute-0 sudo[136975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbjelwqouyzdunfclicqnaabrddyufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431615.6008356-1061-10953086240897/AnsiballZ_systemd.py'
Oct 02 19:00:15 compute-0 sudo[136975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:16 compute-0 python3.9[136977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:00:16 compute-0 systemd[1]: Reloading.
Oct 02 19:00:16 compute-0 systemd-rc-local-generator[137006]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:16 compute-0 systemd-sysv-generator[137011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:16 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 02 19:00:16 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 02 19:00:16 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 02 19:00:16 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 02 19:00:16 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 02 19:00:16 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 02 19:00:16 compute-0 sudo[136975]: pam_unix(sudo:session): session closed for user root
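Each ansible.builtin.systemd task in this stretch runs with daemon_reload=True, so the drop-ins written earlier are reread before the service restarts, and the unit's sockets come up alongside it. The manual equivalent of the virtlogd task:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    # The socket units started above should now be listening
    systemctl --no-pager status virtlogd.socket virtlogd-admin.socket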
Oct 02 19:00:17 compute-0 sudo[137168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymrpjfawgfhbkdvxvdrbyzlmukzhgygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431617.006982-1061-12217752417434/AnsiballZ_systemd.py'
Oct 02 19:00:17 compute-0 sudo[137168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:17 compute-0 python3.9[137170]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:00:17 compute-0 systemd[1]: Reloading.
Oct 02 19:00:17 compute-0 systemd-rc-local-generator[137199]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:17 compute-0 systemd-sysv-generator[137203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:18 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 02 19:00:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 02 19:00:18 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 02 19:00:18 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 02 19:00:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 02 19:00:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 02 19:00:18 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 19:00:18 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 19:00:18 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 02 19:00:18 compute-0 sudo[137168]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:18 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 02 19:00:18 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 02 19:00:18 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 02 19:00:18 compute-0 sudo[137391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsblfemygporbzdgqjqwwpyarprowxok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431618.2887833-1061-91014320180832/AnsiballZ_systemd.py'
Oct 02 19:00:18 compute-0 sudo[137391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:18 compute-0 python3.9[137393]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:00:19 compute-0 systemd[1]: Reloading.
Oct 02 19:00:19 compute-0 systemd-rc-local-generator[137424]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:19 compute-0 systemd-sysv-generator[137427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:19 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 02 19:00:19 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 02 19:00:19 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 02 19:00:19 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 02 19:00:19 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:00:19 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:00:19 compute-0 sudo[137391]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:19 compute-0 setroubleshoot[137232]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e54c3c4c-a546-4646-8679-28b6ac1dc224
Oct 02 19:00:19 compute-0 setroubleshoot[137232]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
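The setroubleshoot advice above, collected into one sequence. The first two commands gather PATH records to decide whether the denial points at mislabeled or wrongly owned files; the audit2allow/semodule pair is only appropriate if the access is judged legitimate:

    # Turn on full auditing, then reproduce the denial and inspect it
    auditctl -w /etc/shadow -p w
    ausearch -m avc -ts recent
    # Only if the access should be allowed: build and install a local policy module
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp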
Oct 02 19:00:19 compute-0 sudo[137604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjkzoepygeoqdkphrsxnyjndcltxmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431619.5932715-1061-209625588637917/AnsiballZ_systemd.py'
Oct 02 19:00:19 compute-0 sudo[137604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:20 compute-0 python3.9[137606]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:00:20 compute-0 systemd[1]: Reloading.
Oct 02 19:00:20 compute-0 systemd-sysv-generator[137633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:20 compute-0 systemd-rc-local-generator[137629]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:20 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 02 19:00:20 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 02 19:00:20 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 02 19:00:20 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 02 19:00:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 02 19:00:20 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 02 19:00:20 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 02 19:00:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 02 19:00:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 02 19:00:20 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 02 19:00:20 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 19:00:20 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 19:00:20 compute-0 sudo[137604]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:21 compute-0 sudo[137817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfngkxoesasuohkxivtkovvesjjskebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431620.9063623-1061-176904722950455/AnsiballZ_systemd.py'
Oct 02 19:00:21 compute-0 sudo[137817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:21 compute-0 python3.9[137819]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:00:21 compute-0 systemd[1]: Reloading.
Oct 02 19:00:21 compute-0 systemd-rc-local-generator[137848]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:21 compute-0 systemd-sysv-generator[137852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:21 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 02 19:00:21 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 02 19:00:21 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 02 19:00:21 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 02 19:00:21 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 02 19:00:21 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 02 19:00:21 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:00:21 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:00:22 compute-0 sudo[137817]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:22 compute-0 sudo[138027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzemebjakfftgfwodhwpabiuqtlcbwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431622.3152812-1098-25529159141897/AnsiballZ_file.py'
Oct 02 19:00:22 compute-0 sudo[138027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:22 compute-0 python3.9[138029]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:22 compute-0 sudo[138027]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:23 compute-0 sudo[138179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawgtdfxrdmkxgjwyhspdghozozbrign ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431623.0788503-1106-140429436859138/AnsiballZ_find.py'
Oct 02 19:00:23 compute-0 sudo[138179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:23 compute-0 python3.9[138181]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:00:23 compute-0 sudo[138179]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:24 compute-0 sudo[138331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osifrbtqplvmesiwucvisapfhpzbomem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431623.9824667-1120-90458988700889/AnsiballZ_stat.py'
Oct 02 19:00:24 compute-0 sudo[138331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:24 compute-0 python3.9[138333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:24 compute-0 sudo[138331]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:25 compute-0 sudo[138454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzfhghkqejyjeqivjijmqodeofpnaum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431623.9824667-1120-90458988700889/AnsiballZ_copy.py'
Oct 02 19:00:25 compute-0 sudo[138454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:25 compute-0 python3.9[138456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431623.9824667-1120-90458988700889/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:25 compute-0 sudo[138454]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:26 compute-0 sudo[138606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvshbfaysfukeadayaauzjohmwstijna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431625.623587-1136-37682863105387/AnsiballZ_file.py'
Oct 02 19:00:26 compute-0 sudo[138606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:26 compute-0 python3.9[138608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:26 compute-0 sudo[138606]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:26 compute-0 sudo[138758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmwvzedccqtrrdtprqoxkfqlpzeroxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431626.4730124-1144-60877915078894/AnsiballZ_stat.py'
Oct 02 19:00:26 compute-0 sudo[138758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:27 compute-0 python3.9[138760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:27 compute-0 sudo[138758]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:27 compute-0 sudo[138836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjueonixlusdjrssjjqkaupdldnpvgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431626.4730124-1144-60877915078894/AnsiballZ_file.py'
Oct 02 19:00:27 compute-0 sudo[138836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:27 compute-0 python3.9[138838]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:27 compute-0 sudo[138836]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:28 compute-0 sudo[138988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lawvtowzjwiggtlffujkhfpslwrszffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431627.842799-1156-177233650687236/AnsiballZ_stat.py'
Oct 02 19:00:28 compute-0 sudo[138988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:28 compute-0 python3.9[138990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:28 compute-0 sudo[138988]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:28 compute-0 sudo[139066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnrjzyxynumtkcrhqqfjpwctxblnyxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431627.842799-1156-177233650687236/AnsiballZ_file.py'
Oct 02 19:00:28 compute-0 sudo[139066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:28 compute-0 python3.9[139068]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tgg2ji3d recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:28 compute-0 sudo[139066]: pam_unix(sudo:session): session closed for user root
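Three rule files now sit under /var/lib/edpm-config/firewall (libvirt.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml); their rendered contents are not logged. For orientation only, a hypothetical entry in the shape such edpm-ansible rule files commonly take, with every field name and value assumed rather than taken from this log:

    # /var/lib/edpm-config/firewall/libvirt.yaml (hypothetical body)
    cat <<'EOF'
    - rule_name: '200 libvirt tls'   # assumed example rule
      rule:
        proto: tcp
        dport: 16514
    EOF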
Oct 02 19:00:29 compute-0 sudo[139218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regoygqxoogosgwbkvxqmggkfrhrgpnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431629.1313195-1168-137892808266537/AnsiballZ_stat.py'
Oct 02 19:00:29 compute-0 sudo[139218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:29 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 02 19:00:29 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 02 19:00:29 compute-0 python3.9[139220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:29 compute-0 sudo[139218]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:29 compute-0 sudo[139296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywvmrnglinledwguodjuqybfiyfbbhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431629.1313195-1168-137892808266537/AnsiballZ_file.py'
Oct 02 19:00:29 compute-0 sudo[139296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:30 compute-0 python3.9[139298]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:30 compute-0 sudo[139296]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:30 compute-0 sudo[139459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtpdosvdevynzzhucombjrihbcamqzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431630.4281073-1181-40238893979500/AnsiballZ_command.py'
Oct 02 19:00:30 compute-0 sudo[139459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:30 compute-0 podman[139422]: 2025-10-02 19:00:30.904862113 +0000 UTC m=+0.121913638 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:00:31 compute-0 python3.9[139465]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:31 compute-0 sudo[139459]: pam_unix(sudo:session): session closed for user root
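[editor's note] The podman health_status entry interleaved with the sudo session above is unrelated to the play: podman's healthcheck timer periodically runs the 'test' command from config_data ('/openstack/healthcheck', bind-mounted read-only from /var/lib/openstack/healthchecks/ovn_controller) and journals the verdict and failing streak. The same probe can be run by hand, assuming the container name shown in the log:

    $ podman healthcheck run ovn_controller && echo healthy   # exit status 0 means the check passed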
Oct 02 19:00:31 compute-0 sudo[139625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqecodclvjujrzakwcpoexndzdqausil ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431631.2892127-1189-228121551547041/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:00:31 compute-0 sudo[139625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:32 compute-0 python3[139627]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:00:32 compute-0 sudo[139625]: pam_unix(sudo:session): session closed for user root
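[editor's note] edpm_nftables_from_files is a custom module shipped with the edpm_ansible collection; it gathers the YAML snippets under /var/lib/edpm-config/firewall (including the edpm-nftables-user-rules.yaml staged at 19:00:28) and renders them into the /etc/nftables/edpm-*.nft files written by the following tasks. A user rule file has roughly this shape; the rule below is purely illustrative and field names may vary between collection versions:

    $ cat > /var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml <<'EOF'
    - rule_name: 100 allow example service
      rule:
        proto: tcp
        dport: 8080
    EOF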
Oct 02 19:00:32 compute-0 sudo[139777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enactwvohnbtizqmbfoxflvfxqutowpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431632.33934-1197-265145580352059/AnsiballZ_stat.py'
Oct 02 19:00:32 compute-0 sudo[139777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:32 compute-0 python3.9[139779]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:32 compute-0 sudo[139777]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:33 compute-0 sudo[139855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqgmckalnpdbewckktivhjjemjrpomu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431632.33934-1197-265145580352059/AnsiballZ_file.py'
Oct 02 19:00:33 compute-0 sudo[139855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:33 compute-0 python3.9[139857]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:33 compute-0 sudo[139855]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:34 compute-0 sudo[140007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztosmwyhhaegufdelxjrudxsyovvxsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431633.6736069-1209-12892384485980/AnsiballZ_stat.py'
Oct 02 19:00:34 compute-0 sudo[140007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:34 compute-0 python3.9[140009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:34 compute-0 sudo[140007]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:34 compute-0 sudo[140085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enfzlymutbnezlihjwpwyoibwahheqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431633.6736069-1209-12892384485980/AnsiballZ_file.py'
Oct 02 19:00:34 compute-0 sudo[140085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:34 compute-0 python3.9[140087]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:34 compute-0 sudo[140085]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:35 compute-0 sudo[140237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhexggaphszpsivzuvbmbgqtjhuqkuft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431635.107244-1221-54469359501432/AnsiballZ_stat.py'
Oct 02 19:00:35 compute-0 sudo[140237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:35 compute-0 python3.9[140239]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:35 compute-0 sudo[140237]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:36 compute-0 sudo[140315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soieesfrmiofixkygvhmosbybrdbfiwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431635.107244-1221-54469359501432/AnsiballZ_file.py'
Oct 02 19:00:36 compute-0 sudo[140315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:36 compute-0 python3.9[140317]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:36 compute-0 sudo[140315]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:36 compute-0 sudo[140467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhforvzwvdisuxoipmraywaurrengaop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431636.465943-1233-270373677747258/AnsiballZ_stat.py'
Oct 02 19:00:36 compute-0 sudo[140467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:37 compute-0 python3.9[140469]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:37 compute-0 sudo[140467]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:37 compute-0 sudo[140545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfopiipidtcxcjvkzdqvndkqvomogzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431636.465943-1233-270373677747258/AnsiballZ_file.py'
Oct 02 19:00:37 compute-0 sudo[140545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:37 compute-0 python3.9[140547]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:37 compute-0 sudo[140545]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:38 compute-0 sudo[140697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaymrvodpxecufvyfxjbwysbvhpznjqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431637.8430803-1245-265853709554682/AnsiballZ_stat.py'
Oct 02 19:00:38 compute-0 sudo[140697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:38 compute-0 python3.9[140699]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:38 compute-0 sudo[140697]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:39 compute-0 sudo[140822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifhdiesxruhekkxvvmvgngrvslcpyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431637.8430803-1245-265853709554682/AnsiballZ_copy.py'
Oct 02 19:00:39 compute-0 sudo[140822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:39 compute-0 python3.9[140824]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431637.8430803-1245-265853709554682/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:39 compute-0 sudo[140822]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:39 compute-0 sudo[140974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjwenvhpqbetyyuapvvucagjuixexev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431639.461628-1260-183972810866753/AnsiballZ_file.py'
Oct 02 19:00:39 compute-0 sudo[140974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:40 compute-0 python3.9[140976]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:40 compute-0 sudo[140974]: pam_unix(sudo:session): session closed for user root
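[editor's note] Touching edpm-rules.nft.changed is a sentinel-file handshake: it marks that the rules content actually changed, so the tasks at 19:00:43-19:00:45 below can stat it, reload the live ruleset only when it exists, and remove it afterwards. Condensed into shell:

    $ test -e /etc/nftables/edpm-rules.nft.changed && \
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f - && \
        rm -f /etc/nftables/edpm-rules.nft.changed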
Oct 02 19:00:40 compute-0 sudo[141126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clxfllmrwqmgeonzhqksivisbntbbjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431640.2921677-1268-226967312591632/AnsiballZ_command.py'
Oct 02 19:00:40 compute-0 sudo[141126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:40 compute-0 python3.9[141128]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:40 compute-0 sudo[141126]: pam_unix(sudo:session): session closed for user root
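[editor's note] The command above is a pure dry run: nft's -c flag parses and validates the concatenated files without changing the live ruleset, and the concatenation order (chains, flushes, rules, update-jumps, jumps) is the same dependency order used when the files are applied later. Reproduced with line breaks for readability:

    $ cat /etc/nftables/edpm-chains.nft \
          /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, apply nothing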
Oct 02 19:00:41 compute-0 sudo[141281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjckftnwvqqllgjyvegobnsfxjbgamsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431641.1381006-1276-98519856931642/AnsiballZ_blockinfile.py'
Oct 02 19:00:41 compute-0 sudo[141281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:41 compute-0 python3.9[141283]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:41 compute-0 sudo[141281]: pam_unix(sudo:session): session closed for user root
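[editor's note] blockinfile is what makes the ruleset boot-persistent: it maintains a marker-delimited block inside /etc/sysconfig/nftables.conf, the file nftables.service loads at boot on EL9, and because validate='nft -c -f %s' is set, Ansible checks the edited copy in a temporary file before swapping it into place. Given the marker and block parameters logged above, the managed region reads:

    $ grep -A5 'BEGIN ANSIBLE MANAGED BLOCK' /etc/sysconfig/nftables.conf
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK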
Oct 02 19:00:42 compute-0 sudo[141433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epeqfbipoodagtwuosnlbfhehrutgyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431642.1706495-1285-159350597503446/AnsiballZ_command.py'
Oct 02 19:00:42 compute-0 sudo[141433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:42 compute-0 python3.9[141435]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:42 compute-0 sudo[141433]: pam_unix(sudo:session): session closed for user root
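[editor's note] The chains file is applied on its own, unconditionally, so the chain skeleton exists before any conditional rule reload. Assuming the file consists of nft add-table/add-chain statements (the usual pattern for such a file), re-running it is harmless, since nft's 'add' verbs do not error on objects that already exist:

    $ nft -f /etc/nftables/edpm-chains.nft   # idempotent: creates tables/chains only if missing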
Oct 02 19:00:43 compute-0 sudo[141586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqnyldytlvpyhbewfvbdgajsyywoqegq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431643.0262852-1293-168277953380120/AnsiballZ_stat.py'
Oct 02 19:00:43 compute-0 sudo[141586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:43 compute-0 python3.9[141588]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:00:43 compute-0 sudo[141586]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:44 compute-0 sudo[141740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqxoqnqfvxiwikewfseqsgvbcobjuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431643.9437742-1301-193898078587702/AnsiballZ_command.py'
Oct 02 19:00:44 compute-0 sudo[141740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:44 compute-0 python3.9[141742]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:44 compute-0 sudo[141740]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:45 compute-0 sudo[141895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqrslpamnfyohoajmrskjwhpyxeulor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431644.886909-1309-52839871871688/AnsiballZ_file.py'
Oct 02 19:00:45 compute-0 sudo[141895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:45 compute-0 python3.9[141897]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:45 compute-0 sudo[141895]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:46 compute-0 sudo[142047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcxlemgopvdkfkqdqchcsxfwronspzec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431645.6947913-1317-260564114118264/AnsiballZ_stat.py'
Oct 02 19:00:46 compute-0 sudo[142047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:46 compute-0 python3.9[142049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:46 compute-0 sudo[142047]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:46 compute-0 sudo[142170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buadrssxcicfxhpiuicmfpbagptbqqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431645.6947913-1317-260564114118264/AnsiballZ_copy.py'
Oct 02 19:00:46 compute-0 sudo[142170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:47 compute-0 python3.9[142172]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431645.6947913-1317-260564114118264/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:47 compute-0 sudo[142170]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:47 compute-0 sudo[142322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlgcrmbkhyvyvowhdgcjesbbxepokjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431647.2387009-1332-22525711130787/AnsiballZ_stat.py'
Oct 02 19:00:47 compute-0 sudo[142322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:47 compute-0 python3.9[142324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:47 compute-0 sudo[142322]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:48 compute-0 sudo[142445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tspzzfzdmyfbqddrlvdrrritzbtaxaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431647.2387009-1332-22525711130787/AnsiballZ_copy.py'
Oct 02 19:00:48 compute-0 sudo[142445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:48 compute-0 python3.9[142447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431647.2387009-1332-22525711130787/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:48 compute-0 sudo[142445]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:49 compute-0 sudo[142597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrdcfofzophldywlyhsmkwzaeiglhhls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431648.7967145-1347-41197612568864/AnsiballZ_stat.py'
Oct 02 19:00:49 compute-0 sudo[142597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:49 compute-0 python3.9[142599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:49 compute-0 sudo[142597]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:49 compute-0 sudo[142720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htukmngszpgnenjyvmyxcucoougpbjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431648.7967145-1347-41197612568864/AnsiballZ_copy.py'
Oct 02 19:00:49 compute-0 sudo[142720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:50 compute-0 python3.9[142722]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431648.7967145-1347-41197612568864/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:50 compute-0 sudo[142720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:50 compute-0 sudo[142872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqzwvfybvingpaiwfcazhgiwihpbdnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431650.4005985-1362-199693427179854/AnsiballZ_systemd.py'
Oct 02 19:00:50 compute-0 sudo[142872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:51 compute-0 python3.9[142874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:00:51 compute-0 systemd[1]: Reloading.
Oct 02 19:00:51 compute-0 systemd-rc-local-generator[142898]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:51 compute-0 systemd-sysv-generator[142901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:51 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 02 19:00:51 compute-0 sudo[142872]: pam_unix(sudo:session): session closed for user root
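[editor's note] The single systemd module call above (daemon_reload=True, enabled=True, state=restarted) expands to the familiar three-step sequence, which lines up with the 'Reloading.' and 'Reached target' entries that follow it:

    $ systemctl daemon-reload
    $ systemctl enable edpm_libvirt.target
    $ systemctl restart edpm_libvirt.target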
Oct 02 19:00:52 compute-0 sudo[143064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijtmvvfosojdcfvbtxaxazefxjolyydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431651.7090683-1370-216464967913353/AnsiballZ_systemd.py'
Oct 02 19:00:52 compute-0 sudo[143064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:52 compute-0 python3.9[143066]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 19:00:52 compute-0 systemd[1]: Reloading.
Oct 02 19:00:52 compute-0 systemd-rc-local-generator[143091]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:52 compute-0 systemd-sysv-generator[143094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:52 compute-0 systemd[1]: Reloading.
Oct 02 19:00:52 compute-0 systemd-rc-local-generator[143128]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:52 compute-0 systemd-sysv-generator[143131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:53 compute-0 sudo[143064]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:53 compute-0 sshd-session[89039]: Connection closed by 192.168.122.30 port 40076
Oct 02 19:00:53 compute-0 sshd-session[89036]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:00:53 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Oct 02 19:00:53 compute-0 systemd[1]: session-21.scope: Consumed 4min 734ms CPU time.
Oct 02 19:00:53 compute-0 systemd-logind[793]: Session 21 logged out. Waiting for processes to exit.
Oct 02 19:00:53 compute-0 systemd-logind[793]: Removed session 21.
Oct 02 19:00:59 compute-0 sshd-session[143163]: Accepted publickey for zuul from 192.168.122.30 port 39726 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:00:59 compute-0 systemd-logind[793]: New session 22 of user zuul.
Oct 02 19:00:59 compute-0 systemd[1]: Started Session 22 of User zuul.
Oct 02 19:00:59 compute-0 sshd-session[143163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:01:00 compute-0 python3.9[143316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:01:01 compute-0 podman[143420]: 2025-10-02 19:01:01.69376369 +0000 UTC m=+0.120026737 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:01:01 compute-0 sudo[143496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhtczuxbizocdlwugbgukabaoitgcioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431661.057923-36-17306628411714/AnsiballZ_systemd_service.py'
Oct 02 19:01:01 compute-0 sudo[143496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:01 compute-0 CROND[143501]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 19:01:01 compute-0 run-parts[143504]: (/etc/cron.hourly) starting 0anacron
Oct 02 19:01:01 compute-0 anacron[143512]: Anacron started on 2025-10-02
Oct 02 19:01:01 compute-0 anacron[143512]: Will run job `cron.daily' in 25 min.
Oct 02 19:01:01 compute-0 anacron[143512]: Will run job `cron.weekly' in 45 min.
Oct 02 19:01:01 compute-0 anacron[143512]: Will run job `cron.monthly' in 65 min.
Oct 02 19:01:01 compute-0 anacron[143512]: Jobs will be executed sequentially
Oct 02 19:01:01 compute-0 run-parts[143514]: (/etc/cron.hourly) finished 0anacron
Oct 02 19:01:01 compute-0 CROND[143500]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 19:01:02 compute-0 python3.9[143498]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:01:02 compute-0 systemd[1]: Reloading.
Oct 02 19:01:02 compute-0 systemd-rc-local-generator[143539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:02 compute-0 systemd-sysv-generator[143545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:02 compute-0 sudo[143496]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:03 compute-0 python3.9[143699]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:01:03 compute-0 network[143716]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:01:03 compute-0 network[143717]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:01:03 compute-0 network[143718]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:01:09 compute-0 sudo[143989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psdanizrfbxvrrrsvbvqcgenauozejzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431668.771118-55-108160596274329/AnsiballZ_systemd_service.py'
Oct 02 19:01:09 compute-0 sudo[143989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:09 compute-0 python3.9[143991]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:01:09 compute-0 sudo[143989]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:10 compute-0 sudo[144142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbquwalqzpzdfekyszszbhxvqcfkwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431669.893732-65-244462293274753/AnsiballZ_file.py'
Oct 02 19:01:10 compute-0 sudo[144142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:10 compute-0 python3.9[144144]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:10 compute-0 sudo[144142]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:11 compute-0 sudo[144294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxscphfnoinegvhqhhdbvrqpsjxeheue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431670.9512267-73-113115458202464/AnsiballZ_file.py'
Oct 02 19:01:11 compute-0 sudo[144294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:11 compute-0 python3.9[144296]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:11 compute-0 sudo[144294]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:12 compute-0 sudo[144446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomyadinwyvrpiiyfjxrqaauuhzxzrbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431671.8404422-82-121810246190491/AnsiballZ_command.py'
Oct 02 19:01:12 compute-0 sudo[144446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:12 compute-0 python3.9[144448]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:01:12 compute-0 sudo[144446]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:13 compute-0 python3.9[144600]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:01:14 compute-0 sudo[144750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwekyvuafapqqvixffwkfdbcrdnecci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431673.9533675-100-57473248775480/AnsiballZ_systemd_service.py'
Oct 02 19:01:14 compute-0 sudo[144750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:14 compute-0 python3.9[144752]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:01:14 compute-0 systemd[1]: Reloading.
Oct 02 19:01:14 compute-0 systemd-rc-local-generator[144775]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:14 compute-0 systemd-sysv-generator[144782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:15 compute-0 sudo[144750]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:15 compute-0 sudo[144937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrtrvvphrgsihcfesbcuefffvikstws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431675.2473736-108-31004404849801/AnsiballZ_command.py'
Oct 02 19:01:15 compute-0 sudo[144937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:15 compute-0 python3.9[144939]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:01:15 compute-0 sudo[144937]: pam_unix(sudo:session): session closed for user root
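[editor's note] The 19:01:09-19:01:15 tasks are the standard removal recipe for a legacy TripleO unit: stop and disable the service, delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system, reload systemd, and clear any residual failed state so it no longer shows in systemctl --failed. As a one-off shell session:

    $ systemctl disable --now tripleo_ceilometer_agent_compute.service
    $ rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service \
            /etc/systemd/system/tripleo_ceilometer_agent_compute.service
    $ systemctl daemon-reload
    $ systemctl reset-failed tripleo_ceilometer_agent_compute.service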
Oct 02 19:01:16 compute-0 sudo[145090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxugqvrnodyqsnonximsjyzprpscobhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431676.2384284-117-23692529175900/AnsiballZ_file.py'
Oct 02 19:01:16 compute-0 sudo[145090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:16 compute-0 python3.9[145092]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:16 compute-0 sudo[145090]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:17 compute-0 python3.9[145242]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:18 compute-0 python3.9[145394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:19 compute-0 python3.9[145515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431678.1496317-133-200753176419667/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:20 compute-0 sudo[145665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmppypexqzcvgkfryiwyymsfikmkxaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431679.8219256-148-162981110533415/AnsiballZ_group.py'
Oct 02 19:01:20 compute-0 sudo[145665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:20 compute-0 python3.9[145667]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 02 19:01:20 compute-0 sudo[145665]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:21 compute-0 sudo[145817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjplhhqcgqgcremgserrbcchyuziyeaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431680.8821104-159-89569174974493/AnsiballZ_getent.py'
Oct 02 19:01:21 compute-0 sudo[145817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:21 compute-0 python3.9[145819]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:01:21 compute-0 sudo[145817]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:22 compute-0 sudo[145970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yawfesbqazigdcnsvbvrwhprtnfadabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431681.7650867-167-274700669912991/AnsiballZ_group.py'
Oct 02 19:01:22 compute-0 sudo[145970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:22 compute-0 python3.9[145972]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 19:01:22 compute-0 groupadd[145973]: group added to /etc/group: name=ceilometer, GID=42405
Oct 02 19:01:22 compute-0 groupadd[145973]: group added to /etc/gshadow: name=ceilometer
Oct 02 19:01:22 compute-0 groupadd[145973]: new group: name=ceilometer, GID=42405
Oct 02 19:01:22 compute-0 sudo[145970]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:23 compute-0 sudo[146128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnmloklliwzvjjhuipxlqyqfjmtutfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431682.6536815-175-104707504955353/AnsiballZ_user.py'
Oct 02 19:01:23 compute-0 sudo[146128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:23 compute-0 python3.9[146130]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 19:01:23 compute-0 useradd[146132]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 19:01:23 compute-0 useradd[146132]: add 'ceilometer' to group 'libvirt'
Oct 02 19:01:23 compute-0 useradd[146132]: add 'ceilometer' to shadow group 'libvirt'
Oct 02 19:01:23 compute-0 sudo[146128]: pam_unix(sudo:session): session closed for user root
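[editor's note] The ceilometer account is created with a pinned UID/GID of 42405 (the 424xx range is the convention used by Kolla-built images, which keeps file ownership consistent between host and container), a nologin shell, and supplementary membership in libvirt so the compute-polling agent can reach the libvirt socket. The equivalent direct commands, matching the groupadd/useradd entries above:

    $ groupadd --gid 42405 ceilometer
    $ useradd --uid 42405 --gid ceilometer --groups libvirt \
              --comment 'ceilometer user' --shell /sbin/nologin ceilometer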
Oct 02 19:01:24 compute-0 python3.9[146288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:25 compute-0 python3.9[146409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431684.3588648-201-235331792602191/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:26 compute-0 python3.9[146559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:27 compute-0 python3.9[146680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431685.830842-201-220061104064979/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:27 compute-0 python3.9[146830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:28 compute-0 python3.9[146951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431687.2942789-201-178353658183438/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:29 compute-0 python3.9[147101]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:30 compute-0 python3.9[147253]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:30 compute-0 python3.9[147405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:31 compute-0 python3.9[147526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431690.3700826-260-53223505788103/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
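[editor's note] mode=420 in this and the following copy tasks is not a mistake: the playbook passed the mode as an unquoted YAML integer, Ansible logs integers in decimal, and 420 decimal equals 0o644 (rw-r--r--), the same permissions the quoted tasks request as '0644'. Quick check:

    $ python3 -c 'print(oct(420))'   # prints 0o644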
Oct 02 19:01:32 compute-0 podman[147650]: 2025-10-02 19:01:32.235180432 +0000 UTC m=+0.124520284 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:01:32 compute-0 python3.9[147686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:32 compute-0 python3.9[147778]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:33 compute-0 python3.9[147928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:34 compute-0 python3.9[148049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431693.0864902-260-35737664288978/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:35 compute-0 python3.9[148199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:35 compute-0 python3.9[148320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431694.533513-260-78688287137960/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:36 compute-0 python3.9[148470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:37 compute-0 python3.9[148591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431696.023573-260-178759446712251/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:38 compute-0 python3.9[148741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:38 compute-0 python3.9[148862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431697.445851-260-237206279319269/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:39 compute-0 python3.9[149012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:40 compute-0 python3.9[149133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431698.874978-260-232669516969650/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:40 compute-0 python3.9[149283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:41 compute-0 python3.9[149404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431700.3547926-260-87581529290669/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:42 compute-0 python3.9[149554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:42 compute-0 python3.9[149675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431701.6899617-260-279787280658549/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:43 compute-0 python3.9[149825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:44 compute-0 python3.9[149946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431702.9764552-260-206171683551843/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:44 compute-0 python3.9[150096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:45 compute-0 python3.9[150217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431704.3761265-260-133441487454237/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
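[annotation] In the copy tasks above, Ansible logs mode as a decimal integer, so mode=420 is octal 0644. Each file is deployed via the usual stat-then-copy pair: stat fetches the sha1 of the existing file, and copy rewrites it only on checksum mismatch. A minimal sketch for spot-checking the result on the host, using a path and checksum taken from the log:

    # 420 decimal == 0644 octal; confirm mode and ownership of a deployed file
    stat -c '%a %U:%G %n' /var/lib/openstack/config/telemetry/node_exporter.yaml
    # should print the checksum Ansible logged above
    # (81d906d3e1e8c4f8367276f5d3a67b80ca7e989e)
    sha1sum /var/lib/openstack/config/telemetry/node_exporter.yaml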
Oct 02 19:01:46 compute-0 python3.9[150367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:46 compute-0 python3.9[150443]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:47 compute-0 python3.9[150593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:47 compute-0 python3.9[150669]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:48 compute-0 python3.9[150819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:49 compute-0 python3.9[150896]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:49 compute-0 sudo[151046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyxxmrhgloottqjeeydmytptwdnbsufq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431709.5676155-449-38041960042417/AnsiballZ_file.py'
Oct 02 19:01:49 compute-0 sudo[151046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:50 compute-0 python3.9[151048]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:50 compute-0 sudo[151046]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:50 compute-0 sudo[151198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punxtmsauhqqrfiqiszcsaibjxchdrsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431710.454589-457-271441937920556/AnsiballZ_file.py'
Oct 02 19:01:50 compute-0 sudo[151198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:50 compute-0 python3.9[151200]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:51 compute-0 sudo[151198]: pam_unix(sudo:session): session closed for user root
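[annotation] The two ansible.builtin.file tasks above only adjust ownership and mode on existing certificate files; the directory holding them is later mounted into the container at /etc/ceilometer/tls (see the container create below). The shell equivalent, with paths and values from the log:

    chown ceilometer:ceilometer /var/lib/openstack/certs/telemetry/default/tls.crt
    chown ceilometer:ceilometer /var/lib/openstack/certs/telemetry/default/tls.key
    # note: 0644 leaves tls.key world-readable on the host
    chmod 0644 /var/lib/openstack/certs/telemetry/default/tls.crt \
               /var/lib/openstack/certs/telemetry/default/tls.key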
Oct 02 19:01:51 compute-0 sudo[151350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpkfqkeigfroiwuhiegputppweynrrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431711.235072-465-128693608874897/AnsiballZ_file.py'
Oct 02 19:01:51 compute-0 sudo[151350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:51 compute-0 python3.9[151352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:51 compute-0 sudo[151350]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:52 compute-0 sudo[151502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdphiraskrltunvjqzocmixeiscimrgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431712.0798922-473-17074841051576/AnsiballZ_systemd_service.py'
Oct 02 19:01:52 compute-0 sudo[151502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:52 compute-0 python3.9[151504]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:01:52 compute-0 systemd[1]: Reloading.
Oct 02 19:01:52 compute-0 systemd-sysv-generator[151537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:52 compute-0 systemd-rc-local-generator[151534]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:53 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 02 19:01:53 compute-0 sudo[151502]: pam_unix(sudo:session): session closed for user root
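[annotation] The systemd_service task above is equivalent to enabling and starting the Podman API socket by hand. Once "Listening on Podman API Socket" appears, the API can be probed over the unix path; a quick sketch (the _ping endpoint is the Docker-compatible liveness check that podman's API service also answers):

    systemctl enable --now podman.socket
    systemctl is-active podman.socket
    # root socket path; 'd' is a dummy hostname required by curl's URL syntax
    curl --unix-socket /run/podman/podman.sock http://d/_ping   # prints: OK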
Oct 02 19:01:53 compute-0 sudo[151693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcdobjosndjgbekyiiclphsasgqdnukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/AnsiballZ_stat.py'
Oct 02 19:01:53 compute-0 sudo[151693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:54 compute-0 python3.9[151695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:54 compute-0 sudo[151693]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:54 compute-0 sudo[151816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtarxodtkvnzuosgwihfywnifjdcmzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/AnsiballZ_copy.py'
Oct 02 19:01:54 compute-0 sudo[151816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:54 compute-0 python3.9[151818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:54 compute-0 sudo[151816]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:55 compute-0 sudo[151892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylhygsdsmipapypflxighcjlccuaektu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/AnsiballZ_stat.py'
Oct 02 19:01:55 compute-0 sudo[151892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:55 compute-0 python3.9[151894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:55 compute-0 sudo[151892]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:55 compute-0 sudo[152015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmvdxultsjgqsqznvujowpvbsbivedq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/AnsiballZ_copy.py'
Oct 02 19:01:56 compute-0 sudo[152015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:56 compute-0 python3.9[152017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431713.5527165-482-244244743319987/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:56 compute-0 sudo[152015]: pam_unix(sudo:session): session closed for user root
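[annotation] Both healthcheck scripts land in /var/lib/openstack/healthchecks/ceilometer_agent_compute with mode 0700 and SELinux type container_file_t; that directory is later bind-mounted read-only into the container at /openstack, where the healthcheck command runs it. To verify mode and label on the host:

    # -Z shows the SELinux context; expect container_file_t and mode 700
    ls -lZ /var/lib/openstack/healthchecks/ceilometer_agent_compute/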
Oct 02 19:01:57 compute-0 sudo[152167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojiompdmlotbgaugiyxtohzjvivblldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431716.600886-510-103474391102782/AnsiballZ_container_config_data.py'
Oct 02 19:01:57 compute-0 sudo[152167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:57 compute-0 python3.9[152169]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct 02 19:01:57 compute-0 sudo[152167]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:58 compute-0 sudo[152319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bearwwhkcdfjbfirrxnkwejkfgvrcqiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431717.7347271-519-182976083081067/AnsiballZ_container_config_hash.py'
Oct 02 19:01:58 compute-0 sudo[152319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:58 compute-0 python3.9[152321]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:01:58 compute-0 sudo[152319]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:59 compute-0 sudo[152471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwdiffctlpbxpjssnkrdopfbgzpltuki ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431718.9708889-529-166825893278449/AnsiballZ_edpm_container_manage.py'
Oct 02 19:01:59 compute-0 sudo[152471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:59 compute-0 python3[152473]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:02:02 compute-0 podman[152523]: 2025-10-02 19:02:02.674665225 +0000 UTC m=+0.103518230 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:02:15 compute-0 podman[152485]: 2025-10-02 19:02:15.606307938 +0000 UTC m=+15.650741898 image pull af55c482fa6ac3c7068a40d60290d5ada8b2ec948be38389742c3fe61801742f quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct 02 19:02:15 compute-0 podman[152655]: 2025-10-02 19:02:15.832006679 +0000 UTC m=+0.077975231 container create b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930)
Oct 02 19:02:15 compute-0 podman[152655]: 2025-10-02 19:02:15.791709666 +0000 UTC m=+0.037678268 image pull af55c482fa6ac3c7068a40d60290d5ada8b2ec948be38389742c3fe61801742f quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct 02 19:02:15 compute-0 python3[152473]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
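[annotation] The PODMAN-CONTAINER-DEBUG line above is the module's unquoted rendering of the create call; run by hand, the multi-word arguments need shell quoting. A trimmed, shell-safe equivalent (most --volume flags elided, see the full list in the log line above; podman's documented --security-opt syntax uses '=' where the module prints ':'):

    podman create --name ceilometer_agent_compute \
      --conmon-pidfile /run/ceilometer_agent_compute.pid \
      --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
      --env OS_ENDPOINT_TYPE=internal \
      --healthcheck-command '/openstack/healthcheck compute' \
      --log-driver journald --log-level info \
      --network host \
      --security-opt label=type:ceilometer_polling_t \
      --user ceilometer \
      --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z \
      --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z \
      quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested \
      kolla_start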
Oct 02 19:02:16 compute-0 sudo[152471]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:16 compute-0 sudo[152842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxcnkerecxzpaxxrjnodltibjfubkvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431736.2143302-537-127578288139669/AnsiballZ_stat.py'
Oct 02 19:02:16 compute-0 sudo[152842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:16 compute-0 python3.9[152844]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:02:16 compute-0 sudo[152842]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:17 compute-0 sudo[152996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roowixuqpndxzkkknzqmfrickxzrgffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.0783246-546-60383872325321/AnsiballZ_file.py'
Oct 02 19:02:17 compute-0 sudo[152996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:17 compute-0 python3.9[152998]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:17 compute-0 sudo[152996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:18 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 02 19:02:18 compute-0 sudo[153148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mubetglxmtkiuwommkhmhvbnuijogqus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.6912513-546-180032078578071/AnsiballZ_copy.py'
Oct 02 19:02:18 compute-0 sudo[153148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:18 compute-0 python3.9[153150]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431737.6912513-546-180032078578071/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:18 compute-0 sudo[153148]: pam_unix(sudo:session): session closed for user root
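[annotation] The unit body itself is not logged (content=NOT_LOGGING_PARAMETER). A representative sketch of what such a podman wrapper unit typically contains — an assumption, not the deployed file; the PIDFile matches the --conmon-pidfile passed to podman create above:

    cat > /etc/systemd/system/edpm_ceilometer_agent_compute.service <<'EOF'
    [Unit]
    Description=ceilometer_agent_compute container
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/ceilometer_agent_compute.pid
    ExecStart=/usr/bin/podman start ceilometer_agent_compute
    ExecStop=/usr/bin/podman stop -t 10 ceilometer_agent_compute

    [Install]
    WantedBy=multi-user.target
    EOF

The daemon_reload and restart tasks that follow in the log correspond to systemctl daemon-reload and systemctl restart edpm_ceilometer_agent_compute.service.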
Oct 02 19:02:19 compute-0 sudo[153224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvisqopeuyqbblhkswphxckltspfqhxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.6912513-546-180032078578071/AnsiballZ_systemd.py'
Oct 02 19:02:19 compute-0 sudo[153224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:19 compute-0 python3.9[153226]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:02:19 compute-0 systemd[1]: Reloading.
Oct 02 19:02:19 compute-0 systemd-rc-local-generator[153257]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:19 compute-0 systemd-sysv-generator[153261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:02:19 compute-0 sudo[153224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:20 compute-0 sudo[153338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tglvtpguinwkcedmweyjrrlssgjelkjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.6912513-546-180032078578071/AnsiballZ_systemd.py'
Oct 02 19:02:20 compute-0 sudo[153338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:20 compute-0 python3.9[153340]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:20 compute-0 systemd[1]: Reloading.
Oct 02 19:02:20 compute-0 systemd-sysv-generator[153375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:20 compute-0 systemd-rc-local-generator[153371]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:20 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct 02 19:02:20 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 02 19:02:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.
Oct 02 19:02:20 compute-0 podman[153380]: 2025-10-02 19:02:20.937463748 +0000 UTC m=+0.181313475 container init b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:02:20 compute-0 ceilometer_agent_compute[153396]: + sudo -E kolla_set_configs
Oct 02 19:02:20 compute-0 sudo[153402]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:02:20 compute-0 ceilometer_agent_compute[153396]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:02:20 compute-0 sudo[153402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:02:20 compute-0 podman[153380]: 2025-10-02 19:02:20.98200741 +0000 UTC m=+0.225857057 container start b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:02:20 compute-0 podman[153380]: ceilometer_agent_compute
Oct 02 19:02:20 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Validating config file
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Copying service configuration files
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: INFO:__main__:Writing out command to execute
Oct 02 19:02:21 compute-0 sudo[153402]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: ++ cat /run_command
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + ARGS=
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + sudo kolla_copy_cacerts
Oct 02 19:02:21 compute-0 sudo[153338]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:21 compute-0 sudo[153418]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:02:21 compute-0 sudo[153418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:02:21 compute-0 sudo[153418]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + [[ ! -n '' ]]
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + . kolla_extend_start
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + umask 0022
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
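[annotation] kolla_set_configs is driven by /var/lib/kolla/config_files/config.json, bind-mounted from ceilometer-agent-compute.json on the host. From the copy operations and the /run_command contents logged above, its shape is plausibly the following — a reconstruction using kolla's standard schema, with the owner/perm values assumed:

    cat /var/lib/kolla/config_files/config.json
    # plausible content:
    # {
    #   "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
    #   "config_files": [
    #     {"source": "/var/lib/openstack/config/ceilometer.conf",
    #      "dest": "/etc/ceilometer/ceilometer.conf",
    #      "owner": "ceilometer", "perm": "0600"},
    #     {"source": "/var/lib/openstack/config/polling.yaml",
    #      "dest": "/etc/ceilometer/polling.yaml",
    #      "owner": "ceilometer", "perm": "0600"},
    #     {"source": "/var/lib/openstack/config/custom.conf",
    #      "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
    #      "owner": "ceilometer", "perm": "0600"},
    #     {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
    #      "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
    #      "owner": "ceilometer", "perm": "0600"}
    #   ]
    # }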
Oct 02 19:02:21 compute-0 podman[153403]: 2025-10-02 19:02:21.075436132 +0000 UTC m=+0.072623975 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:02:21 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-76593471a0b2f4cf.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:02:21 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-76593471a0b2f4cf.service: Failed with result 'exit-code'.
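[annotation] The failed transient unit b21eaff6...-76593471a0b2f4cf.service is the first timer-driven run of the container healthcheck; exit status 1 at this point matches health_status=starting with health_failing_streak=1 in the podman event above — the polling agent was still initializing when the check fired. To re-run the check and inspect its state by hand:

    podman healthcheck run ceilometer_agent_compute; echo "exit=$?"
    podman inspect \
      --format '{{.State.Health.Status}} failing_streak={{.State.Health.FailingStreak}}' \
      ceilometer_agent_compute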
Oct 02 19:02:21 compute-0 sudo[153576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncghxiaydbmoguiwohoktmtrcuqyzwpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431741.2579577-570-128731877056946/AnsiballZ_systemd.py'
Oct 02 19:02:21 compute-0 sudo[153576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.821 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.822 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.823 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.824 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.825 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.826 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.827 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.828 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.829 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.832 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.833 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.833 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.833 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.853 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.854 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.854 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.854 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.854 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.854 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.855 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.856 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.857 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.858 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.859 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.860 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.861 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.865 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.867 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.868 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct 02 19:02:21 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:21.869 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
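
[editor's note] "AgentHeartBeatManager(0) [12]" is a cotyledon service: the master process (pid 2 in this log) forks one child per configured worker and supervises it. A sketch of the pattern with cotyledon's public API; the heartbeat internals (child process, update thread, reporting thread) are ceilometer-specific and only gestured at here:

import time
import cotyledon

class HeartBeat(cotyledon.Service):
    """Stand-in for ceilometer's AgentHeartBeatManager (illustrative only)."""

    def run(self):
        # the real service reads a status queue and reports liveness here
        while True:
            time.sleep(10)

    def terminate(self):
        # invoked on SIGTERM -> "Caught SIGTERM signal, graceful exiting ..."
        raise SystemExit(0)

sm = cotyledon.ServiceManager()
sm.add(HeartBeat, workers=1)   # logged as "Run service HeartBeat(0) [pid]"
sm.run()
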
Oct 02 19:02:21 compute-0 python3.9[153578]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
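
[editor's note] The python3.9 line is Ansible's module-invocation record for the restart task (state=restarted, scope=system, daemon_reload=False); it is what triggers the Stopping/Starting sequence below. Assuming the unit name maps straight through, the effect is equivalent to the following subprocess sketch:

import subprocess

# Equivalent of the ansible.builtin.systemd call logged above
# (state=restarted, scope=system); daemon_reload=False means no prior
# "systemctl daemon-reload" is issued.
subprocess.run(
    ['systemctl', '--system', 'restart', 'edpm_ceilometer_agent_compute.service'],
    check=True,
)
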
Oct 02 19:02:21 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.016 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.057 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct 02 19:02:22 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 19:02:22 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:02:22 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.117 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
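
[editor's note] The three pollster messages document the discovery pass: each directory in pollsters_definitions_dirs is scanned for YAML definitions, and since /etc/ceilometer/pollsters.d is empty here, only the built-in compute pollsters will run. A rough illustration of such a scan, assuming PyYAML is available; ceilometer's actual loader is more involved:

import glob
import os
import yaml  # PyYAML, assumed installed

POLLSTER_DIRS = ['/etc/ceilometer/pollsters.d']

definitions = []
for d in POLLSTER_DIRS:
    for path in sorted(glob.glob(os.path.join(d, '*.yaml'))):
        with open(path) as f:
            definitions.extend(yaml.safe_load(f) or [])
if not definitions:
    print('No dynamic pollsters file found in dirs', POLLSTER_DIRS)
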
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.244 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.245 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.246 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.247 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.248 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.249 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.250 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.251 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.252 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.253 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.254 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.255 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.256 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.257 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.258 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
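
[editor's note] Worker 14 was forked mid-shutdown, which is why it finishes its own configuration dump before honoring the SIGTERM the master already fanned out at 19:02:22.117. The fan-out itself ("Killing services with signal SIGTERM", then "Waiting services to terminate") reduces to this sketch, valid because the workers are direct children of the master:

import os
import signal

def shutdown(worker_pids):
    # "Killing services with signal SIGTERM"
    for pid in worker_pids:
        os.kill(pid, signal.SIGTERM)
    # "Waiting services to terminate" (waitpid works: these are child pids)
    for pid in worker_pids:
        os.waitpid(pid, 0)
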
Oct 02 19:02:22 compute-0 virtqemud[153606]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 19:02:22 compute-0 virtqemud[153606]: hostname: compute-0
Oct 02 19:02:22 compute-0 virtqemud[153606]: End of file while reading data: Input/output error
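
[editor's note] virtqemud reports this EOF/I-O error when a client drops its UNIX socket without a clean close; here the dying ceilometer worker had just opened qemu:///system (19:02:22.057) as it was being terminated. For contrast, a well-behaved round trip with libvirt-python (assumed installed) closes the connection explicitly:

import libvirt

# What the agent was doing when it was killed: open the system QEMU URI,
# poll, and close.  Exiting without conn.close() leaves the daemon to log
# the "End of file while reading data" seen above.
conn = libvirt.open('qemu:///system')
try:
    print(conn.listAllDomains())
finally:
    conn.close()
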
Oct 02 19:02:22 compute-0 ceilometer_agent_compute[153396]: 2025-10-02 19:02:22.265 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Oct 02 19:02:22 compute-0 systemd[1]: libpod-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:02:22 compute-0 systemd[1]: libpod-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Consumed 1.497s CPU time.
Oct 02 19:02:22 compute-0 podman[153590]: 2025-10-02 19:02:22.482797413 +0000 UTC m=+0.513351908 container died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:02:22 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-76593471a0b2f4cf.timer: Deactivated successfully.
Oct 02 19:02:22 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.
Oct 02 19:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-userdata-shm.mount: Deactivated successfully.
Oct 02 19:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de-merged.mount: Deactivated successfully.
Oct 02 19:02:23 compute-0 podman[153590]: 2025-10-02 19:02:23.032536625 +0000 UTC m=+1.063091130 container cleanup b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:02:23 compute-0 podman[153590]: ceilometer_agent_compute
Oct 02 19:02:23 compute-0 podman[153645]: ceilometer_agent_compute
Oct 02 19:02:23 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct 02 19:02:23 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Oct 02 19:02:23 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct 02 19:02:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.
Oct 02 19:02:23 compute-0 podman[153658]: 2025-10-02 19:02:23.316922537 +0000 UTC m=+0.165368281 container init b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + sudo -E kolla_set_configs
Oct 02 19:02:23 compute-0 sudo[153680]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:02:23 compute-0 podman[153658]: 2025-10-02 19:02:23.346754935 +0000 UTC m=+0.195200679 container start b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2)
Oct 02 19:02:23 compute-0 sudo[153680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:02:23 compute-0 podman[153658]: ceilometer_agent_compute
Oct 02 19:02:23 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct 02 19:02:23 compute-0 sudo[153576]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Validating config file
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Copying service configuration files
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: INFO:__main__:Writing out command to execute
Oct 02 19:02:23 compute-0 sudo[153680]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: ++ cat /run_command
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + ARGS=
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + sudo kolla_copy_cacerts
Oct 02 19:02:23 compute-0 podman[153681]: 2025-10-02 19:02:23.442923627 +0000 UTC m=+0.078969317 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Oct 02 19:02:23 compute-0 sudo[153706]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:02:23 compute-0 sudo[153706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:02:23 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:02:23 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: Failed with result 'exit-code'.
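Note: the transient b21eaff…-929a170e456c76d.service unit systemd reports failing here is, in all likelihood, the per-container healthcheck unit podman creates; exit status 1 just means the first probe fired while the freshly started agent was still coming up (the podman event above shows health_status=starting, health_failing_streak=1). A minimal sketch of firing the same probe by hand, assuming the standard podman CLI:

    import subprocess

    # Run the container's configured healthcheck once, the same check the
    # transient systemd timer unit drives; exit status 0 = healthy, 1 = unhealthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True,
        text=True,
    )
    print("healthcheck exit status:", result.returncode)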
Oct 02 19:02:23 compute-0 sudo[153706]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + [[ ! -n '' ]]
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + . kolla_extend_start
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + umask 0022
Oct 02 19:02:23 compute-0 ceilometer_agent_compute[153674]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
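Note: the shell trace above is the standard kolla_start sequence: kolla_set_configs loads /var/lib/kolla/config_files/config.json, applies the COPY_ALWAYS strategy to each config_files entry (delete the destination, copy the source over it, reset ownership and mode), writes the service command to /run_command, and the entrypoint then execs it. A rough sketch of the copy step in Python (illustrative only, not the actual kolla set_configs.py; source/dest/owner/perm are the usual kolla config.json fields):

    import json
    import os
    import shutil

    def copy_service_configs(path="/var/lib/kolla/config_files/config.json"):
        """Approximate the COPY_ALWAYS behaviour seen in the log above."""
        with open(path) as f:
            config = json.load(f)
        for entry in config.get("config_files", []):
            src, dest = entry["source"], entry["dest"]
            if os.path.exists(dest):
                os.remove(dest)                          # "Deleting /etc/ceilometer/..."
            shutil.copy(src, dest)                       # "Copying ... to ..."
            if "owner" in entry:
                shutil.chown(dest, user=entry["owner"])  # "Setting permission for ..."
            if "perm" in entry:
                os.chmod(dest, int(entry["perm"], 8))    # modes like "0600" are octal strings
        return config.get("command")                     # later written out to /run_command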
Oct 02 19:02:23 compute-0 sudo[153855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctqrfbszmaxrmxqxzdommsnqivrfivww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431743.6026475-578-53124034474924/AnsiballZ_stat.py'
Oct 02 19:02:23 compute-0 sudo[153855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:24 compute-0 python3.9[153857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:02:24 compute-0 sudo[153855]: pam_unix(sudo:session): session closed for user root
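Note: the AnsiballZ_stat call above is Ansible's stat module (run here by the Zuul job) checking the node_exporter healthcheck script. With follow=False, get_checksum=True and checksum_algorithm=sha1 it boils down to roughly the following (illustrative Python, not the module's actual source):

    import hashlib
    import os

    path = "/var/lib/openstack/healthchecks/node_exporter/healthcheck"
    st = os.lstat(path)  # follow=False: stat the link itself, do not dereference
    checksum = None
    if os.path.isfile(path):
        with open(path, "rb") as f:
            checksum = hashlib.sha1(f.read()).hexdigest()  # checksum_algorithm=sha1
    print(oct(st.st_mode), st.st_uid, st.st_gid, checksum)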
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.209 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.210 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.211 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.212 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.213 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.214 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.215 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.216 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.217 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.218 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.219 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.220 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.221 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.222 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.223 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.224 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
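Note: the banner-delimited block above is oslo.config's log_opt_values() dump, emitted because debug = True. Options registered with secret=True (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw_admin_credentials keys) are masked as ****. The dump then repeats below because cotyledon logs the full option set once for the service manager (pid 2) and again for each child it spawns (here the polling worker, pid 12). A minimal sketch of the oslo.config calls involved, using a hypothetical option for illustration:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Hypothetical option; secret=True is what makes log_opt_values()
    # print "****" in place of the real value.
    opts = [cfg.StrOpt("backend_url", secret=True, help="Coordination backend URL")]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group="coordination")
    conf(args=[], project="example")

    logging.basicConfig(level=logging.DEBUG)
    conf.log_opt_values(LOG, logging.DEBUG)  # prints the ****-bannered dump format seen above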
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.244 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.245 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.246 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.247 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.248 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.249 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.250 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.251 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.252 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.253 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.254 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.255 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.255 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.255 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
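
The row of asterisks above closes the per-worker configuration dump: cotyledon's oslo_config_glue calls oslo.config's ConfigOpts.log_opt_values() once per forked worker, which is why the same option list appears again below for worker 17. Options registered with secret=True (the coordination backend_url, notification messaging_urls, telemetry_secret and rgw keys above) are masked as ****. A minimal sketch of that mechanism, with hypothetical option names; only log_opt_values() and secret=True are real oslo.config API:

    # Sketch of the dump mechanism seen above (oslo.config's log_opt_values).
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.StrOpt('telemetry_secret', secret=True),  # rendered as **** in the dump
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF([])                                 # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # prints the banner and option lines
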
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.255 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.256 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.258 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.259 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
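
AgentHeartBeatManager(0) [12] is a cotyledon Service subclass: the ServiceManager in the parent process forks one child per service (the [12] suffix is the child's PID) and then sits in wait_forever, restarting workers that die. A minimal sketch of that model with a hypothetical service; the cotyledon calls themselves are the library's documented API:

    # Hypothetical heartbeat service run under cotyledon's process supervisor.
    import time
    import cotyledon

    class HeartBeat(cotyledon.Service):
        def run(self):                # executes in the forked child process
            while True:
                print('heartbeat from worker %d' % self.worker_id)
                time.sleep(10)

    sm = cotyledon.ServiceManager()
    sm.add(HeartBeat, workers=1)      # logged as "Run service HeartBeat(0) [<pid>]"
    sm.run()                          # parent loops in wait_forever
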
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.271 17 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
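
With compute.instance_discovery_method = libvirt_metadata (see the dumps above and below), worker 17 discovers instances straight from the local libvirt daemon rather than querying the Nova API. The connection it opens here is to qemu:///system; a read-only sketch of the underlying call using the libvirt-python binding:

    # Read-only variant of the connection opened above (libvirt-python API);
    # the agent itself goes through ceilometer's wrapper around this call.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        print(dom.UUIDString(), dom.name(), dom.isActive())
    conn.close()
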
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.277 17 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.278 17 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.278 17 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
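
The three lines above mean /etc/ceilometer/pollsters.d exists but holds no definition files, so only the built-in compute pollsters are loaded. Dynamic pollsters are YAML files dropped into that directory; a sketch of the expected shape, adapted from the upstream Ceilometer documentation (the file name, meter name, and endpoint here are illustrative):

    # /etc/ceilometer/pollsters.d/vpn_connections.yaml (hypothetical file)
    ---
    - name: "dynamic.network.services.vpn.connection"
      sample_type: "gauge"
      unit: "ipsec_site_connection"
      value_attribute: "status"
      url_path: "v2.0/vpn/ipsec-site-connections"
      endpoint_type: "network"
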
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.393 17 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.393 17 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.394 17 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.395 17 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.396 17 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.397 17 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.398 17 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.399 17 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.400 17 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.401 17 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.402 17 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.403 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.404 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.405 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.406 17 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.407 17 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
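
The block ending above is oslo.config's standard option dump, emitted once at service start by cotyledon's config glue; any option registered with secret=True (passwords, access keys) is masked as '****'. A minimal sketch of how such a dump is produced, using an illustrative subset of option names rather than ceilometer's real option set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    OPTS = [
        cfg.StrOpt('username', default='ceilometer'),
        cfg.StrOpt('password', secret=True),   # secret=True is why the dump shows '****'
        cfg.StrOpt('interface', default='internalURL'),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS, group='service_credentials')
    CONF([], project='ceilometer')

    # cotyledon.oslo_config_glue does the equivalent of this at service start,
    # producing the "option = value" lines and the closing row of asterisks.
    CONF.log_opt_values(LOG, logging.DEBUG)
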
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.407 17 DEBUG cotyledon._service [-] Run service AgentManager(0) [17] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.410 17 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
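
The dict logged above is the parsed form of the agent's polling definition (conventionally polling.yaml). A sketch of the same structure round-tripped through YAML, assuming PyYAML is available; the resulting dict matches the one in the log line:

    import yaml  # assumes PyYAML is available

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    config = yaml.safe_load(POLLING_YAML)
    assert config['sources'][0]['interval'] == 120  # polling period in seconds
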
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.432 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.433 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
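
The two lines above describe the execution model: the pollsters of a source are fanned out over a small thread pool, one worker here, which is why the manager warns when a source defines more pollsters than workers. A minimal, illustrative sketch of that fan-out, not ceilometer's actual internals:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # each pollster would run discovery and then sample its meter
        return '%s: polled' % name

    pollsters = ['power.state', 'cpu', 'memory.usage']
    # max_workers=1 mirrors "Processing pollsters ... with [1] threads" above
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)
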
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.433 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.434 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.434 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.435 17 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
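
The connection above is how the compute agent reaches the local hypervisor. A minimal sketch of the same connection with the libvirt-python binding, assuming read access to qemu:///system; instance discovery enumerates local domains in roughly this way:

    import libvirt  # libvirt-python binding

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom in conn.listAllDomains():
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()
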
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec337de50>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.465 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.466 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:02:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:02:24.466 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
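
Taken together, the cycle above shows the local_instances discovery returning an empty list (the discovery cache stays {'local_instances': []}), so every pollster is skipped and then reported finished. That is the expected outcome on a compute node with no instances scheduled yet. A hedged illustration of the short-circuit:

    discovery_cache = {'local_instances': []}  # as logged throughout the cycle above

    for meter in ('cpu', 'memory.usage', 'disk.device.read.bytes'):
        if not discovery_cache['local_instances']:
            print('Skip pollster %s, no resources found this cycle' % meter)
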
Oct 02 19:02:24 compute-0 sudo[153991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sboqgywteyejxszbmkjfddbmjrqxxkxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431743.6026475-578-53124034474924/AnsiballZ_copy.py'
Oct 02 19:02:24 compute-0 sudo[153991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:24 compute-0 python3.9[153993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431743.6026475-578-53124034474924/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:02:24 compute-0 sudo[153991]: pam_unix(sudo:session): session closed for user root
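
The ansible.legacy.copy invocation above reports checksum=e380c11c..., a SHA-1 digest of the copied file's contents; given dest=/var/lib/openstack/healthchecks/node_exporter/ and _original_basename=healthcheck, the file should land at /var/lib/openstack/healthchecks/node_exporter/healthcheck (path inferred from those two fields). A sketch of verifying the deployed file out-of-band:

    import hashlib

    def ansible_style_checksum(path):
        """SHA-1 over file contents, as reported by Ansible's copy module."""
        digest = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                digest.update(chunk)
        return digest.hexdigest()

    # expected value taken from the log line above
    expected = 'e380c11c36804bfc65a818f2960cfa663daacfe5'
    path = '/var/lib/openstack/healthchecks/node_exporter/healthcheck'
    print(ansible_style_checksum(path) == expected)
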
Oct 02 19:02:25 compute-0 sudo[154143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyocnxzdavwbdkxeoihgoaczvtpgdgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431745.410341-595-249686083880233/AnsiballZ_container_config_data.py'
Oct 02 19:02:25 compute-0 sudo[154143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:26 compute-0 python3.9[154145]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct 02 19:02:26 compute-0 sudo[154143]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:26 compute-0 sudo[154295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtprmutvgboovoezvxznlecegkmvfnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431746.3391519-604-141856690784424/AnsiballZ_container_config_hash.py'
Oct 02 19:02:26 compute-0 sudo[154295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:26 compute-0 python3.9[154297]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:02:27 compute-0 sudo[154295]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:27 compute-0 sudo[154447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnhxvknrzucwxouuazenxhzcxaxwyjc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431747.3346531-614-94526391782419/AnsiballZ_edpm_container_manage.py'
Oct 02 19:02:27 compute-0 sudo[154447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:28 compute-0 python3[154449]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:02:29 compute-0 podman[154461]: 2025-10-02 19:02:29.408194 +0000 UTC m=+1.317964431 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 02 19:02:29 compute-0 podman[154559]: 2025-10-02 19:02:29.597733004 +0000 UTC m=+0.067108585 container create d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Oct 02 19:02:29 compute-0 podman[154559]: 2025-10-02 19:02:29.557732848 +0000 UTC m=+0.027108449 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 02 19:02:29 compute-0 python3[154449]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Oct 02 19:02:29 compute-0 sudo[154447]: pam_unix(sudo:session): session closed for user root
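The PODMAN-CONTAINER-DEBUG line above records how edpm_container_manage renders the config_data dict from node_exporter.json into a podman create invocation. A simplified sketch of that mapping, covering only the keys visible in the log (the real module also handles the labels, the conmon pidfile, the healthcheck command, and more):

    def config_to_podman_create(name, cfg):
        """Render a config_data dict as `podman create` argv (simplified)."""
        argv = ["podman", "create", "--name", name,
                "--log-driver", "journald", "--log-level", "info"]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if cfg.get("net"):
            argv += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            argv.append("--privileged=True")
        for port in cfg.get("ports", []):
            argv += ["--publish", port]
        if cfg.get("user"):
            argv += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])          # image comes last before its args
        argv += cfg.get("command", [])     # container command/flags
        return argv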
Oct 02 19:02:30 compute-0 sudo[154749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkypironsmcdulnxhxumigwjczymmyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431749.952502-622-103922929020477/AnsiballZ_stat.py'
Oct 02 19:02:30 compute-0 sudo[154749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:30 compute-0 python3.9[154751]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:02:30 compute-0 sudo[154749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:31 compute-0 sudo[154903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvaeygempernpzuoodrvwvkjuxotuaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431750.921705-631-65138865543900/AnsiballZ_file.py'
Oct 02 19:02:31 compute-0 sudo[154903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:31 compute-0 python3.9[154905]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:31 compute-0 sudo[154903]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:32 compute-0 sudo[155054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qegpdevatculjxtxsnvvhnujlxyyxlyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431751.6756034-631-196987120715735/AnsiballZ_copy.py'
Oct 02 19:02:32 compute-0 sudo[155054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:32 compute-0 python3.9[155056]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431751.6756034-631-196987120715735/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:32 compute-0 sudo[155054]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:32 compute-0 sudo[155143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkbrnzreybqpfolgbfsjurbmqogklade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431751.6756034-631-196987120715735/AnsiballZ_systemd.py'
Oct 02 19:02:32 compute-0 sudo[155143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:32 compute-0 podman[155104]: 2025-10-02 19:02:32.937325596 +0000 UTC m=+0.128336750 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:02:33 compute-0 python3.9[155152]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:02:33 compute-0 systemd[1]: Reloading.
Oct 02 19:02:33 compute-0 systemd-rc-local-generator[155181]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:33 compute-0 systemd-sysv-generator[155184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:33 compute-0 sudo[155143]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:33 compute-0 sudo[155268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afeldwroeqhcbkeimtkjsxwolzvyzlua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431751.6756034-631-196987120715735/AnsiballZ_systemd.py'
Oct 02 19:02:33 compute-0 sudo[155268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:34 compute-0 python3.9[155270]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:34 compute-0 systemd[1]: Reloading.
Oct 02 19:02:34 compute-0 systemd-sysv-generator[155303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:34 compute-0 systemd-rc-local-generator[155297]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:34 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:02:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.
Oct 02 19:02:34 compute-0 podman[155310]: 2025-10-02 19:02:34.82554937 +0000 UTC m=+0.189210176 container init d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.844Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.844Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.844Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.844Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.844Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
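The systemd collector only reports units whose names match the unit-include pattern and escape the unit-exclude pattern shown in the two lines above. node_exporter anchors these patterns, so Python's fullmatch is a close stand-in for checking which units on this host will actually be scraped:

    import re

    # Patterns exactly as parsed in the two log lines above.
    INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    EXCLUDE = re.compile(r".+\.(automount|device|mount|scope|slice)")

    def collected(unit):
        """True if the systemd collector would export metrics for this unit."""
        return bool(INCLUDE.fullmatch(unit)) and not EXCLUDE.fullmatch(unit)

    for unit in ("edpm_node_exporter.service", "openvswitch.service",
                 "sshd.service", "proc-sys-fs-binfmt_misc.mount"):
        print(unit, collected(unit))   # only the first two are collected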
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.845Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.846Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:02:34 compute-0 node_exporter[155326]: ts=2025-10-02T19:02:34.847Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
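With --web.config.file pointing at node_exporter.yaml and the certs volume mounted at /etc/node_exporter/tls, the scrape endpoint on :9100 is HTTPS. A hedged local probe; the ca.crt filename is an assumption based on the bind mount of /var/lib/openstack/certs/telemetry/default, and if the web config also demands a client certificate, load one with ctx.load_cert_chain(...):

    import ssl
    import urllib.request

    # CA path assumed from the certs bind mount shown in config_data above.
    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")
    ctx.check_hostname = False  # local probe; cert may not name "localhost"

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx, timeout=5) as resp:
        print(resp.status)
        print(resp.read(300).decode())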
Oct 02 19:02:34 compute-0 podman[155310]: 2025-10-02 19:02:34.858663401 +0000 UTC m=+0.222324217 container start d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:02:34 compute-0 podman[155310]: node_exporter
Oct 02 19:02:34 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:02:34 compute-0 sudo[155268]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:34 compute-0 podman[155335]: 2025-10-02 19:02:34.967840654 +0000 UTC m=+0.090152131 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
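The health_status=healthy events come from podman's healthcheck machinery: --healthcheck-command installs /openstack/healthcheck node_exporter, and podman schedules it through a transient systemd timer (the d074...-c530380796f3195.timer deactivated further down is that timer). The same check can be run on demand; note the .State.Health field may appear as .State.Healthcheck on older podman releases:

    import json
    import subprocess

    CONTAINER = "node_exporter"

    # Run the container's configured healthcheck once (exit 0 = healthy).
    subprocess.run(["podman", "healthcheck", "run", CONTAINER], check=True)

    # Read back the stored health status and failing streak.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", CONTAINER],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out))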
Oct 02 19:02:35 compute-0 sudo[155508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azditktnxvwjedzuvuhkzdtjvanvipmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431755.151638-655-239040841771734/AnsiballZ_systemd.py'
Oct 02 19:02:35 compute-0 sudo[155508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:35 compute-0 python3.9[155510]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:02:35 compute-0 systemd[1]: Stopping node_exporter container...
Oct 02 19:02:36 compute-0 systemd[1]: libpod-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:02:36 compute-0 podman[155514]: 2025-10-02 19:02:36.031514884 +0000 UTC m=+0.070682570 container died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:02:36 compute-0 systemd[1]: d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-c530380796f3195.timer: Deactivated successfully.
Oct 02 19:02:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.
Oct 02 19:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-userdata-shm.mount: Deactivated successfully.
Oct 02 19:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b-merged.mount: Deactivated successfully.
Oct 02 19:02:36 compute-0 podman[155514]: 2025-10-02 19:02:36.199647227 +0000 UTC m=+0.238814863 container cleanup d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:02:36 compute-0 podman[155514]: node_exporter
Oct 02 19:02:36 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:02:36 compute-0 podman[155539]: node_exporter
Oct 02 19:02:36 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct 02 19:02:36 compute-0 systemd[1]: Stopped node_exporter container.
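Note the stop path above: the requested restart first stops the unit, podman reports the container died, and the unit's main process exits with status 2, so systemd records the unit as failed ("Failed with result 'exit-code'") before the restart brings the container back. The relevant unit properties can be read back afterwards:

    import subprocess

    UNIT = "edpm_node_exporter.service"

    # ExecMainStatus is the number behind "status=2/INVALIDARGUMENT" above;
    # Result and NRestarts show how systemd classified and recovered the unit.
    out = subprocess.run(
        ["systemctl", "show", UNIT,
         "--property", "ExecMainStatus,Result,NRestarts"],
        capture_output=True, text=True, check=True).stdout
    print(out)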
Oct 02 19:02:36 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:02:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.
Oct 02 19:02:36 compute-0 podman[155552]: 2025-10-02 19:02:36.484278486 +0000 UTC m=+0.158750469 container init d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.503Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.503Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.503Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.504Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.504Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.504Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.504Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.505Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.505Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.506Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.507Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:02:36 compute-0 node_exporter[155567]: ts=2025-10-02T19:02:36.508Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct 02 19:02:36 compute-0 podman[155552]: 2025-10-02 19:02:36.519467733 +0000 UTC m=+0.193939646 container start d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:02:36 compute-0 podman[155552]: node_exporter
Oct 02 19:02:36 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:02:36 compute-0 sudo[155508]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:36 compute-0 podman[155576]: 2025-10-02 19:02:36.589832965 +0000 UTC m=+0.058678299 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:02:37 compute-0 sudo[155749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxoajyepjgxaltqhzqrehigisuwjupb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431756.8014445-663-232405447822544/AnsiballZ_stat.py'
Oct 02 19:02:37 compute-0 sudo[155749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:37 compute-0 python3.9[155751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:02:37 compute-0 sudo[155749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:37 compute-0 sudo[155872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievvkpvmrctdwdmluujuktbteemsyjed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431756.8014445-663-232405447822544/AnsiballZ_copy.py'
Oct 02 19:02:37 compute-0 sudo[155872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:38 compute-0 python3.9[155874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431756.8014445-663-232405447822544/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:02:38 compute-0 sudo[155872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:38 compute-0 sudo[156024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfoxljwriftgwevbglgvjjrxywzzvdtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431758.5324292-680-243372248972068/AnsiballZ_container_config_data.py'
Oct 02 19:02:38 compute-0 sudo[156024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:39 compute-0 python3.9[156026]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 02 19:02:39 compute-0 sudo[156024]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:39 compute-0 sudo[156176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkpcytprfnxbfgncajgqvzbntasqwvfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431759.349066-689-89539351647295/AnsiballZ_container_config_hash.py'
Oct 02 19:02:39 compute-0 sudo[156176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:39 compute-0 python3.9[156178]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:02:39 compute-0 sudo[156176]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:40 compute-0 sudo[156328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kguwpkshlppbissvkorcjxzuekbbapla ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431760.3479834-699-144343734392807/AnsiballZ_edpm_container_manage.py'
Oct 02 19:02:40 compute-0 sudo[156328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:41 compute-0 python3[156330]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:02:42 compute-0 podman[156343]: 2025-10-02 19:02:42.438032044 +0000 UTC m=+1.303815260 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:02:42 compute-0 podman[156441]: 2025-10-02 19:02:42.623581706 +0000 UTC m=+0.064866124 container create ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:02:42 compute-0 podman[156441]: 2025-10-02 19:02:42.590326951 +0000 UTC m=+0.031611379 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:02:42 compute-0 python3[156330]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 02 19:02:42 compute-0 sudo[156328]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:43 compute-0 sudo[156626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khporggslvbhwnimxvxkkcgacansjypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431763.0305154-707-12923797529625/AnsiballZ_stat.py'
Oct 02 19:02:43 compute-0 sudo[156626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:43 compute-0 python3.9[156628]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:02:43 compute-0 sudo[156626]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:44 compute-0 sudo[156780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axyjncpsdtpdsvlmnkavoxfbbahsgcyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431763.9865432-716-20136201480644/AnsiballZ_file.py'
Oct 02 19:02:44 compute-0 sudo[156780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:44 compute-0 python3.9[156782]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:44 compute-0 sudo[156780]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:45 compute-0 sudo[156931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uajdywxzxnxqysjzuixoankusjrwnujv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431764.6250062-716-240711281101748/AnsiballZ_copy.py'
Oct 02 19:02:45 compute-0 sudo[156931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:45 compute-0 python3.9[156933]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431764.6250062-716-240711281101748/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:45 compute-0 sudo[156931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:45 compute-0 sudo[157007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzazphjeogbrldlrentltgjbwbnmigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431764.6250062-716-240711281101748/AnsiballZ_systemd.py'
Oct 02 19:02:45 compute-0 sudo[157007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:45 compute-0 python3.9[157009]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:02:45 compute-0 systemd[1]: Reloading.
Oct 02 19:02:46 compute-0 systemd-sysv-generator[157038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:46 compute-0 systemd-rc-local-generator[157034]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:46 compute-0 sudo[157007]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:46 compute-0 sudo[157118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtnzyyxbrirtevzfcprueiuvucqvgpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431764.6250062-716-240711281101748/AnsiballZ_systemd.py'
Oct 02 19:02:46 compute-0 sudo[157118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:46 compute-0 python3.9[157120]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:47 compute-0 systemd[1]: Reloading.
Oct 02 19:02:47 compute-0 systemd-rc-local-generator[157150]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:47 compute-0 systemd-sysv-generator[157154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:47 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:02:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:47 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.
Oct 02 19:02:47 compute-0 podman[157160]: 2025-10-02 19:02:47.54551064 +0000 UTC m=+0.155409876 container init ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.570Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.570Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.570Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.570Z caller=handler.go:105 level=info collector=container
Oct 02 19:02:47 compute-0 podman[157160]: 2025-10-02 19:02:47.582977296 +0000 UTC m=+0.192876552 container start ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:02:47 compute-0 podman[157160]: podman_exporter
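The config_data label in the init event above fully determines the container. A reconstruction of the equivalent podman create call, modeled on the PODMAN-CONTAINER-DEBUG line that edpm_container_manage logs later for openstack_network_exporter (flag order and the pidfile path are assumptions; all values are taken from config_data):

    podman create --name podman_exporter \
      --conmon-pidfile /run/podman_exporter.pid \
      --env OS_ENDPOINT_TYPE=internal \
      --env CONTAINER_HOST=unix:///run/podman/podman.sock \
      --healthcheck-command '/openstack/healthcheck podman_exporter' \
      --label config_id=edpm --label container_name=podman_exporter \
      --label managed_by=edpm_ansible \
      --log-driver journald --log-level info \
      --network host --privileged=True --publish 9882:9882 --user root \
      --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z \
      --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z \
      --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z \
      --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z \
      quay.io/navidys/prometheus-podman-exporter:v1.10.1 \
      --web.config.file=/etc/podman_exporter/podman_exporter.yaml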
Oct 02 19:02:47 compute-0 systemd[1]: Starting Podman API Service...
Oct 02 19:02:47 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:02:47 compute-0 systemd[1]: Started Podman API Service.
Oct 02 19:02:47 compute-0 sudo[157118]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="Setting parallel job count to 25"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="Using sqlite as database backend"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 02 19:02:47 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:47 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:02:47 compute-0 podman[157186]: time="2025-10-02T19:02:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:02:47 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9685 "" "Go-http-client/1.1"
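The exporter reaches Podman through CONTAINER_HOST=unix:///run/podman/podman.sock, and the GET /_ping above is what socket-activated the Podman API Service. The same path can be exercised from the host (curl's --unix-socket flag; the /v4.9.3 prefix is taken from the request lines above):

    curl --unix-socket /run/podman/podman.sock http://d/v4.9.3/libpod/_ping
    systemctl status podman.socket podman.service --no-pager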
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.687Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.687Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:02:47 compute-0 podman_exporter[157175]: ts=2025-10-02T19:02:47.688Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
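With "TLS is enabled." the metrics endpoint must be scraped over HTTPS on :9882 (net=host, ports 9882:9882 per config_data). A smoke test, with -k skipping verification of the internally issued certificate mounted under /etc/podman_exporter/tls:

    curl -sk https://localhost:9882/metrics | head -n 5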
Oct 02 19:02:47 compute-0 podman[157184]: 2025-10-02 19:02:47.68875593 +0000 UTC m=+0.084844829 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:02:47 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-683a42c27f726d4b.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:02:47 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-683a42c27f726d4b.service: Failed with result 'exit-code'.
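The ce6c93c0...-683a42c27f726d4b.service above is the transient unit systemd spawns for "/usr/bin/podman healthcheck run ..."; its exit 1 lines up with the health_status=starting, health_failing_streak=1 event just before it, i.e. the first probe ran before the exporter was listening. The probe can be re-run by hand:

    podman healthcheck run podman_exporter && echo healthy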
Oct 02 19:02:48 compute-0 sudo[157371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taycecmpaxwpcwxjmoxlpbgifzvfwlbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431767.8492646-740-124430880173593/AnsiballZ_systemd.py'
Oct 02 19:02:48 compute-0 sudo[157371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:48 compute-0 python3.9[157373]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:02:48 compute-0 systemd[1]: Stopping podman_exporter container...
Oct 02 19:02:48 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:47 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Oct 02 19:02:48 compute-0 systemd[1]: libpod-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:02:48 compute-0 podman[157377]: 2025-10-02 19:02:48.743113935 +0000 UTC m=+0.067230695 container died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:02:48 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-683a42c27f726d4b.timer: Deactivated successfully.
Oct 02 19:02:48 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.
Oct 02 19:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-userdata-shm.mount: Deactivated successfully.
Oct 02 19:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90-merged.mount: Deactivated successfully.
Oct 02 19:02:48 compute-0 podman[157377]: 2025-10-02 19:02:48.984024872 +0000 UTC m=+0.308141642 container cleanup ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:02:48 compute-0 podman[157377]: podman_exporter
Oct 02 19:02:48 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:02:49 compute-0 podman[157405]: podman_exporter
Oct 02 19:02:49 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 02 19:02:49 compute-0 systemd[1]: Stopped podman_exporter container.
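The status=2/INVALIDARGUMENT above is the exit code of the unit's main podman process as systemd stops the service for the requested restart; systemd records the stop as an 'exit-code' failure and then immediately starts the unit again, so the failure is transient. The end state after the play can be confirmed with standard commands:

    systemctl is-active edpm_podman_exporter.service
    journalctl -u edpm_podman_exporter.service -n 20 --no-pager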
Oct 02 19:02:49 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:02:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:02:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.
Oct 02 19:02:49 compute-0 podman[157418]: 2025-10-02 19:02:49.265008663 +0000 UTC m=+0.147356681 container init ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.293Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.293Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.293Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.293Z caller=handler.go:105 level=info collector=container
Oct 02 19:02:49 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:49 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:02:49 compute-0 podman[157186]: time="2025-10-02T19:02:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:02:49 compute-0 podman[157418]: 2025-10-02 19:02:49.30344043 +0000 UTC m=+0.185788388 container start ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:02:49 compute-0 podman[157418]: podman_exporter
Oct 02 19:02:49 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9687 "" "Go-http-client/1.1"
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.318Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:02:49 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.319Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:02:49 compute-0 podman_exporter[157434]: ts=2025-10-02T19:02:49.320Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 02 19:02:49 compute-0 sudo[157371]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:49 compute-0 podman[157443]: 2025-10-02 19:02:49.401624368 +0000 UTC m=+0.081107807 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
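This time the first probe after the restart reports health_status=healthy with health_failing_streak=0. The same state is visible from the CLI, since podman ps appends the health state to the status column:

    podman ps --filter name=podman_exporter --format '{{.Names}}: {{.Status}}'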
Oct 02 19:02:49 compute-0 sudo[157618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aearcldthzogvecrrvxbeggvfgajvtju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431769.5994494-748-168828184547190/AnsiballZ_stat.py'
Oct 02 19:02:49 compute-0 sudo[157618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:50 compute-0 python3.9[157620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:02:50 compute-0 sudo[157618]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:50 compute-0 sudo[157741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyprnpoqprtxicoejnezvgxkfyfstfwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431769.5994494-748-168828184547190/AnsiballZ_copy.py'
Oct 02 19:02:50 compute-0 sudo[157741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:50 compute-0 python3.9[157743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431769.5994494-748-168828184547190/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:02:50 compute-0 sudo[157741]: pam_unix(sudo:session): session closed for user root
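The healthcheck script is staged with setype=container_file_t so it can be bind-mounted into the container: per the config_data patterns above, /var/lib/openstack/healthchecks/openstack_network_exporter is mounted read-only at /openstack and invoked as the --healthcheck-command. Once the container exists (it is created below), the wiring can be checked from both sides:

    ls -lZ /var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck
    podman exec openstack_network_exporter ls -l /openstack/healthcheck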
Oct 02 19:02:51 compute-0 sudo[157893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrdlalyrodjufusqqapbdfejikqpppz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431771.2236917-765-214234788771225/AnsiballZ_container_config_data.py'
Oct 02 19:02:51 compute-0 sudo[157893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:51 compute-0 python3.9[157895]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 02 19:02:51 compute-0 sudo[157893]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:52 compute-0 sudo[158045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskssjbdnsvhbfykxiaapuzhcsyvloul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431772.131213-774-47126133871036/AnsiballZ_container_config_hash.py'
Oct 02 19:02:52 compute-0 sudo[158045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:52 compute-0 python3.9[158047]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:02:52 compute-0 sudo[158045]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:53 compute-0 sudo[158197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwlpwmbttexmweegkketjfzixmozhzpk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431773.0148027-784-233700002166623/AnsiballZ_edpm_container_manage.py'
Oct 02 19:02:53 compute-0 sudo[158197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:53 compute-0 podman[158200]: 2025-10-02 19:02:53.64297352 +0000 UTC m=+0.073828709 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct 02 19:02:53 compute-0 python3[158199]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:02:53 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:02:53 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: Failed with result 'exit-code'.
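Interleaved with the exporter deployment, the ceilometer_agent_compute healthcheck is still in its start period (health_status=starting, health_failing_streak=2), and its transient probe unit fails the same way the podman_exporter one did above. It can be probed directly:

    podman healthcheck run ceilometer_agent_compute; echo "rc=$?"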
Oct 02 19:02:56 compute-0 podman[158231]: 2025-10-02 19:02:56.175331097 +0000 UTC m=+2.431833918 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:02:56 compute-0 podman[158328]: 2025-10-02 19:02:56.381843325 +0000 UTC m=+0.076392435 container create 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, release=1755695350, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_id=edpm)
Oct 02 19:02:56 compute-0 podman[158328]: 2025-10-02 19:02:56.342186842 +0000 UTC m=+0.036735992 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:02:56 compute-0 python3[158199]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:02:56 compute-0 sudo[158197]: pam_unix(sudo:session): session closed for user root
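The PODMAN-CONTAINER-DEBUG line above is the literal CLI that edpm_container_manage assembled from openstack_network_exporter.json; note that the whole config_data dict is serialized into a --label, which is why every podman event in this journal carries the full container definition. The labels can be read back from the created container:

    podman inspect openstack_network_exporter \
      --format '{{index .Config.Labels "config_id"}} {{index .Config.Labels "managed_by"}}'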
Oct 02 19:02:57 compute-0 sudo[158516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grkepfgoniwanvllrjtarkinsexhagbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431776.7855568-792-240244657362308/AnsiballZ_stat.py'
Oct 02 19:02:57 compute-0 sudo[158516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:57 compute-0 python3.9[158518]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:02:57 compute-0 sudo[158516]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:58 compute-0 sudo[158670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikubudlzktnojbujfflgiyrlzqdrdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431777.680311-801-21340247727630/AnsiballZ_file.py'
Oct 02 19:02:58 compute-0 sudo[158670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:58 compute-0 python3.9[158672]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:58 compute-0 sudo[158670]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:58 compute-0 sudo[158821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqytjckttcmpauvcpnweodyygvsozsdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431778.3521564-801-81740479000924/AnsiballZ_copy.py'
Oct 02 19:02:58 compute-0 sudo[158821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:59 compute-0 python3.9[158823]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431778.3521564-801-81740479000924/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:59 compute-0 sudo[158821]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:59 compute-0 sudo[158897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrxlrydqfzboujggjvikxqtgthnioasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431778.3521564-801-81740479000924/AnsiballZ_systemd.py'
Oct 02 19:02:59 compute-0 sudo[158897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:59 compute-0 python3.9[158899]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:02:59 compute-0 systemd[1]: Reloading.
Oct 02 19:02:59 compute-0 systemd-rc-local-generator[158926]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:59 compute-0 systemd-sysv-generator[158931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:03:00 compute-0 sudo[158897]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:00 compute-0 sudo[159008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwmmwgcmgpwhyyixwcublyyawypkjkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431778.3521564-801-81740479000924/AnsiballZ_systemd.py'
Oct 02 19:03:00 compute-0 sudo[159008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:00 compute-0 python3.9[159010]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:03:00 compute-0 systemd[1]: Reloading.
Oct 02 19:03:00 compute-0 systemd-sysv-generator[159044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:03:00 compute-0 systemd-rc-local-generator[159040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:03:01 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:03:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.
Oct 02 19:03:01 compute-0 podman[159049]: 2025-10-02 19:03:01.367283353 +0000 UTC m=+0.185113083 container init 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *bridge.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *coverage.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *datapath.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *iface.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *memory.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *ovn.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *pmd_perf.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: INFO    19:03:01 main.go:48: registering *vswitch.Collector
Oct 02 19:03:01 compute-0 openstack_network_exporter[159065]: NOTICE  19:03:01 main.go:76: listening on https://:9105/metrics
Oct 02 19:03:01 compute-0 podman[159049]: 2025-10-02 19:03:01.40750845 +0000 UTC m=+0.225338110 container start 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, config_id=edpm)
Oct 02 19:03:01 compute-0 podman[159049]: openstack_network_exporter
Oct 02 19:03:01 compute-0 systemd[1]: Started openstack_network_exporter container.
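As with podman_exporter, the collectors register and the exporter then serves TLS on :9105. A matching smoke test (-k again because the certificate under /etc/openstack_network_exporter/tls is internally issued):

    curl -sk https://localhost:9105/metrics | head -n 5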
Oct 02 19:03:01 compute-0 sudo[159008]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:01 compute-0 podman[159075]: 2025-10-02 19:03:01.514114802 +0000 UTC m=+0.095942441 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Oct 02 19:03:02 compute-0 sudo[159247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxyyfbnlmenifvtoykzspwgoxfwuabrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431781.6900384-825-44169081473488/AnsiballZ_systemd.py'
Oct 02 19:03:02 compute-0 sudo[159247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:02 compute-0 python3.9[159249]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:03:02 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct 02 19:03:02 compute-0 systemd[1]: libpod-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:03:02 compute-0 podman[159253]: 2025-10-02 19:03:02.55023424 +0000 UTC m=+0.068662207 container died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:03:02 compute-0 systemd[1]: 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-545c2774645e8b8d.timer: Deactivated successfully.
Oct 02 19:03:02 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.
Oct 02 19:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-userdata-shm.mount: Deactivated successfully.
Oct 02 19:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1-merged.mount: Deactivated successfully.
Oct 02 19:03:03 compute-0 podman[159253]: 2025-10-02 19:03:03.208694272 +0000 UTC m=+0.727122209 container cleanup 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Oct 02 19:03:03 compute-0 podman[159253]: openstack_network_exporter
Oct 02 19:03:03 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:03:03 compute-0 podman[159286]: openstack_network_exporter
Oct 02 19:03:03 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 02 19:03:03 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 02 19:03:03 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:03:03 compute-0 podman[159285]: 2025-10-02 19:03:03.378617983 +0000 UTC m=+0.136038704 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:03:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:03:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.
Oct 02 19:03:03 compute-0 podman[159315]: 2025-10-02 19:03:03.441126295 +0000 UTC m=+0.127370326 container init 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *bridge.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *coverage.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *datapath.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *iface.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *memory.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *ovn.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *pmd_perf.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: INFO    19:03:03 main.go:48: registering *vswitch.Collector
Oct 02 19:03:03 compute-0 openstack_network_exporter[159337]: NOTICE  19:03:03 main.go:76: listening on https://:9105/metrics
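The NOTICE line above shows the exporter serving TLS metrics on port 9105 after registering its collectors. A minimal stdlib probe of that endpoint, assuming the telemetry CA bundle path seen in the container volume mounts (both the exact CA path and the certificate's hostname coverage are assumptions here, not facts from the log):

    import ssl
    import urllib.request

    # Assumed CA location, inferred from the tls-ca-bundle.pem mounts visible
    # in the container labels; substitute the real host-side CA path.
    CA_BUNDLE = "/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem"

    ctx = ssl.create_default_context(cafile=CA_BUNDLE)
    # Verification fails if the exporter certificate does not cover
    # "localhost"; point the URL at a name the certificate's SANs do cover.
    with urllib.request.urlopen("https://localhost:9105/metrics", context=ctx) as resp:
        body = resp.read().decode()

    # Show only the collector-fed metric families; the ovs/ovn prefixes are
    # an assumption based on the collector names registered above.
    for line in body.splitlines():
        if line.startswith(("ovs", "ovn")):
            print(line)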
Oct 02 19:03:03 compute-0 podman[159315]: 2025-10-02 19:03:03.489231792 +0000 UTC m=+0.175475743 container start 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:03:03 compute-0 podman[159315]: openstack_network_exporter
Oct 02 19:03:03 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 02 19:03:03 compute-0 sudo[159247]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:03 compute-0 podman[159347]: 2025-10-02 19:03:03.568132441 +0000 UTC m=+0.076535068 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350)
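The health_status=healthy events in this section come from the transient "/usr/bin/podman healthcheck run <id>" units that systemd starts on the healthcheck interval, as seen a few lines above. The same state can be read on demand; a small sketch shelling out to the podman CLI (container name taken from the log):

    import subprocess

    def health_status(name: str) -> str:
        # podman inspect exposes the most recent healthcheck result under
        # .State.Health; Status reads "healthy" while the failing streak is 0.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    print(health_status("openstack_network_exporter"))  # expected: healthy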
Oct 02 19:03:04 compute-0 sudo[159517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llaakubqaeauvcxgicmmmgnpqhdhsqis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431783.8923857-833-134532014448002/AnsiballZ_find.py'
Oct 02 19:03:04 compute-0 sudo[159517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:04 compute-0 python3.9[159519]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:03:04 compute-0 sudo[159517]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:05 compute-0 sudo[159669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvsgkjrnoqmgwemhgrvdxudgizsojnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431785.0312614-843-190849068598670/AnsiballZ_podman_container_info.py'
Oct 02 19:03:05 compute-0 sudo[159669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:05 compute-0 python3.9[159671]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:03:05 compute-0 sudo[159669]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:06 compute-0 sudo[159845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqfhaivnjvzsbcugacsyyetkkiosphuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431786.1320713-851-164981272800876/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:06 compute-0 sudo[159845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:06 compute-0 podman[159808]: 2025-10-02 19:03:06.748344579 +0000 UTC m=+0.070894865 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:03:06 compute-0 python3.9[159851]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:07 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:03:07 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:03:07 compute-0 podman[159861]: 2025-10-02 19:03:07.050946891 +0000 UTC m=+0.110804385 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:03:07 compute-0 podman[159861]: 2025-10-02 19:03:07.090282587 +0000 UTC m=+0.150140041 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:03:07 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:03:07 compute-0 sudo[159845]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:07 compute-0 sudo[160043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krvmwghceoddailgkbbytcbyonpnluje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431787.3803735-859-92719293535357/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:07 compute-0 sudo[160043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:08 compute-0 python3.9[160045]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:08 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:03:08 compute-0 podman[160046]: 2025-10-02 19:03:08.149191582 +0000 UTC m=+0.098645340 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 19:03:08 compute-0 podman[160046]: 2025-10-02 19:03:08.154559739 +0000 UTC m=+0.104013477 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:03:08 compute-0 sudo[160043]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:08 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:03:08 compute-0 sudo[160227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxcgttkzlgbgafbaqvtkeymthsryted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431788.3966165-867-49619789570504/AnsiballZ_file.py'
Oct 02 19:03:08 compute-0 sudo[160227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:08 compute-0 python3.9[160229]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:09 compute-0 sudo[160227]: pam_unix(sudo:session): session closed for user root
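The three tasks above form the pattern this whole section repeats per service: podman_container_exec runs "id -u" and then "id -g" inside the container, and ansible.builtin.file hands the host-side healthcheck directory to that account with mode 0700 (owner 0 here for ovn_controller; 42405 for ceilometer_agent_compute below). A standalone stdlib sketch of the same sequence, assuming root privileges on the host; this illustrates what the logged tasks do and is not the playbook source:

    import os
    import subprocess

    def container_id(name: str, flag: str) -> int:
        # Equivalent of the logged "podman exec <name> id -u" / "id -g" calls.
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            check=True, capture_output=True, text=True,
        )
        return int(out.stdout.strip())

    name = "ovn_controller"
    uid = container_id(name, "-u")
    gid = container_id(name, "-g")

    # Recursive chown/chmod, mirroring the file task's recurse=True mode=0700.
    path = f"/var/lib/openstack/healthchecks/{name}"
    for root, dirs, files in os.walk(path):
        for entry in [root, *(os.path.join(root, f) for f in files)]:
            os.chown(entry, uid, gid)
            os.chmod(entry, 0o700)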
Oct 02 19:03:09 compute-0 sudo[160379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bghujahugmrfqbrmzbdyoplzxypezpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431789.3609715-876-46617075755899/AnsiballZ_podman_container_info.py'
Oct 02 19:03:09 compute-0 sudo[160379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:09 compute-0 python3.9[160381]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:03:10 compute-0 sudo[160379]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:10 compute-0 sudo[160544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbeiewwyaghdgxjxyyshpjdlvuknacwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431790.3045638-884-243393018174063/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:10 compute-0 sudo[160544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:10 compute-0 python3.9[160546]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:11 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:03:11 compute-0 podman[160547]: 2025-10-02 19:03:11.066102815 +0000 UTC m=+0.110495627 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:03:11 compute-0 podman[160547]: 2025-10-02 19:03:11.104835669 +0000 UTC m=+0.149228421 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:03:11 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:03:11 compute-0 sudo[160544]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:11 compute-0 sudo[160728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakqcamtztnzdeuvaawgwjeobrveipjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431791.392562-892-145961550458251/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:11 compute-0 sudo[160728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:11 compute-0 python3.9[160730]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:12 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:03:12 compute-0 podman[160731]: 2025-10-02 19:03:12.123426314 +0000 UTC m=+0.102613086 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:03:12 compute-0 podman[160731]: 2025-10-02 19:03:12.159903679 +0000 UTC m=+0.139090381 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Oct 02 19:03:12 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:03:12 compute-0 sudo[160728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:12 compute-0 sudo[160911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsvrcppmknwlfziasivtzprakwvknyim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431792.4674032-900-120351585123221/AnsiballZ_file.py'
Oct 02 19:03:12 compute-0 sudo[160911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:13 compute-0 python3.9[160913]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:13 compute-0 sudo[160911]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:13 compute-0 sudo[161063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weuotbybiwvbivghnqvwycxobjxjrwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431793.4343805-909-17364964147635/AnsiballZ_podman_container_info.py'
Oct 02 19:03:13 compute-0 sudo[161063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:14 compute-0 python3.9[161065]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:03:14 compute-0 sudo[161063]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:14 compute-0 sudo[161228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjbfxjrpiypymtcykseohcufbxvvwefu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431794.4133365-917-44977160442757/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:14 compute-0 sudo[161228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:15 compute-0 python3.9[161230]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:15 compute-0 systemd[1]: Started libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope.
Oct 02 19:03:15 compute-0 podman[161231]: 2025-10-02 19:03:15.272968166 +0000 UTC m=+0.083374347 container exec d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:03:15 compute-0 podman[161231]: 2025-10-02 19:03:15.307939587 +0000 UTC m=+0.118345698 container exec_died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:03:15 compute-0 systemd[1]: libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:03:15 compute-0 sudo[161228]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:16 compute-0 sudo[161410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdofmxnzyqyzavqrginrsfagcyssuhfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431795.5483377-925-223101953145021/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:16 compute-0 sudo[161410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:16 compute-0 python3.9[161412]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:16 compute-0 systemd[1]: Started libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope.
Oct 02 19:03:16 compute-0 podman[161413]: 2025-10-02 19:03:16.390881695 +0000 UTC m=+0.102297260 container exec d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:03:16 compute-0 podman[161413]: 2025-10-02 19:03:16.424905066 +0000 UTC m=+0.136320641 container exec_died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:03:16 compute-0 systemd[1]: libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:03:16 compute-0 sudo[161410]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:17 compute-0 sudo[161595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdvfgpbspwcehjcwutpjjqskvbthttcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431796.7391577-933-11940305826624/AnsiballZ_file.py'
Oct 02 19:03:17 compute-0 sudo[161595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:17 compute-0 python3.9[161597]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:17 compute-0 sudo[161595]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:17 compute-0 sudo[161747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzkjjnnglmrjemskvylezikkqnasevh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431797.6266165-942-190970311928558/AnsiballZ_podman_container_info.py'
Oct 02 19:03:17 compute-0 sudo[161747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:18 compute-0 python3.9[161749]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:03:18 compute-0 sudo[161747]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:18 compute-0 sudo[161912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izjktaqzouulbtsdsguzapgmcqszeoux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431798.4625099-950-10015179285147/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:18 compute-0 sudo[161912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:19 compute-0 python3.9[161914]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:19 compute-0 systemd[1]: Started libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope.
Oct 02 19:03:19 compute-0 podman[161915]: 2025-10-02 19:03:19.108059357 +0000 UTC m=+0.061433799 container exec ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:03:19 compute-0 podman[161935]: 2025-10-02 19:03:19.16558575 +0000 UTC m=+0.047092506 container exec_died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:03:19 compute-0 podman[161915]: 2025-10-02 19:03:19.173085184 +0000 UTC m=+0.126459616 container exec_died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:03:19 compute-0 systemd[1]: libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:03:19 compute-0 sudo[161912]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:19 compute-0 podman[162041]: 2025-10-02 19:03:19.657064895 +0000 UTC m=+0.080145466 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:03:19 compute-0 sudo[162121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euvmepofvidvzmfqytbvkikxtysisrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431799.3818789-958-268678649293990/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:19 compute-0 sudo[162121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:20 compute-0 python3.9[162123]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:20 compute-0 systemd[1]: Started libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope.
Oct 02 19:03:20 compute-0 podman[162124]: 2025-10-02 19:03:20.130088679 +0000 UTC m=+0.083635693 container exec ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:03:20 compute-0 podman[162124]: 2025-10-02 19:03:20.163797043 +0000 UTC m=+0.117344057 container exec_died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:03:20 compute-0 systemd[1]: libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:03:20 compute-0 sudo[162121]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:20 compute-0 sudo[162306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxphsxmqegsbqzpimdrdpbznmjwvksi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431800.4347997-966-163693419802442/AnsiballZ_file.py'
Oct 02 19:03:20 compute-0 sudo[162306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:21 compute-0 python3.9[162308]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:21 compute-0 sudo[162306]: pam_unix(sudo:session): session closed for user root
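podman_exporter's config_data sets CONTAINER_HOST=unix:///run/podman/podman.sock and bind-mounts that socket into the container, so the exporter scrapes the Podman API over a unix socket rather than TCP. The stdlib http.client can dial the same socket for a quick liveness probe; a sketch assuming the host's podman system service answers the compat /_ping endpoint and that the socket is root-readable:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of host:port."""

        def __init__(self, socket_path: str):
            super().__init__("localhost")  # host value is unused over a socket
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().status)  # 200 when the API socket is serving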
Oct 02 19:03:21 compute-0 sudo[162458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzyxrcfbpspdjamheaatszlfwydqsmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431801.3540642-975-258241137227104/AnsiballZ_podman_container_info.py'
Oct 02 19:03:21 compute-0 sudo[162458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:21 compute-0 python3.9[162460]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:03:22 compute-0 sudo[162458]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:22 compute-0 sudo[162622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domqqlixwxzrdxdvbhjuijjtptdzwwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431802.3104672-983-138146502036403/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:22 compute-0 sudo[162622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:22 compute-0 python3.9[162624]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:23 compute-0 systemd[1]: Started libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope.
Oct 02 19:03:23 compute-0 podman[162625]: 2025-10-02 19:03:23.060612789 +0000 UTC m=+0.110282984 container exec 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:03:23 compute-0 podman[162625]: 2025-10-02 19:03:23.099024705 +0000 UTC m=+0.148694830 container exec_died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 02 19:03:23 compute-0 systemd[1]: libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:03:23 compute-0 sudo[162622]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:23 compute-0 sudo[162805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kydzpzkwdmjkbtbspzcwdrmgahwfwbla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431803.3244774-991-59853913909073/AnsiballZ_podman_container_exec.py'
Oct 02 19:03:23 compute-0 sudo[162805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:23 compute-0 podman[162807]: 2025-10-02 19:03:23.798313757 +0000 UTC m=+0.105572111 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Oct 02 19:03:23 compute-0 python3.9[162808]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:03:23 compute-0 systemd[1]: Started libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope.
Oct 02 19:03:24 compute-0 podman[162828]: 2025-10-02 19:03:24.015507338 +0000 UTC m=+0.095408700 container exec 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:03:24 compute-0 podman[162828]: 2025-10-02 19:03:24.05003966 +0000 UTC m=+0.129941022 container exec_died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 02 19:03:24 compute-0 systemd[1]: libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:03:24 compute-0 sudo[162805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:24 compute-0 sudo[163009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrzougycamndhubzvddraaaawxunkvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431804.295166-999-279346207903754/AnsiballZ_file.py'
Oct 02 19:03:24 compute-0 sudo[163009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:24 compute-0 python3.9[163011]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:24 compute-0 sudo[163009]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:25 compute-0 sudo[163161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmrmrubpxwqryqvepxtfhfkxzcaaork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431805.1174796-1008-150991877909358/AnsiballZ_file.py'
Oct 02 19:03:25 compute-0 sudo[163161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:25 compute-0 python3.9[163163]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:25 compute-0 sudo[163161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:26 compute-0 sudo[163313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbmknwlxfonausyxknqxcuhgxvpuuobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431805.9353623-1016-222209631944684/AnsiballZ_stat.py'
Oct 02 19:03:26 compute-0 sudo[163313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:26 compute-0 python3.9[163315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:26 compute-0 sudo[163313]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:27 compute-0 sudo[163436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrajnbafiabalizblzjyrzahenakblz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431805.9353623-1016-222209631944684/AnsiballZ_copy.py'
Oct 02 19:03:27 compute-0 sudo[163436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:27 compute-0 python3.9[163438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431805.9353623-1016-222209631944684/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:27 compute-0 sudo[163436]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:28 compute-0 sudo[163588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwuuytvbjfahoklwwabrpkrickpjcvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431807.7344477-1032-232981958062184/AnsiballZ_file.py'
Oct 02 19:03:28 compute-0 sudo[163588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:28 compute-0 python3.9[163590]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:28 compute-0 sudo[163588]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:28 compute-0 sudo[163740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmubccoyvabexuerruwoirrdqdbemhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431808.6158352-1040-51177576596755/AnsiballZ_stat.py'
Oct 02 19:03:29 compute-0 sudo[163740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:29 compute-0 python3.9[163742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:29 compute-0 sudo[163740]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:29 compute-0 sudo[163818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshmpdtakmwihbhqrafgsulfvmtucmaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431808.6158352-1040-51177576596755/AnsiballZ_file.py'
Oct 02 19:03:29 compute-0 sudo[163818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:29 compute-0 python3.9[163820]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:29 compute-0 sudo[163818]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:30 compute-0 sudo[163970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrqiaobrqckdcmrrlfjgtxglofriajcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431810.0156012-1052-191319329061067/AnsiballZ_stat.py'
Oct 02 19:03:30 compute-0 sudo[163970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:30 compute-0 python3.9[163972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:30 compute-0 sudo[163970]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:30 compute-0 sudo[164048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfnqestorbqselwlrjdgrfxcjmdfusqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431810.0156012-1052-191319329061067/AnsiballZ_file.py'
Oct 02 19:03:30 compute-0 sudo[164048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:31 compute-0 python3.9[164050]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cmudiijx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:31 compute-0 sudo[164048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:31 compute-0 sudo[164200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfajcpzptoimyobdmjldrxlnntfeeduf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431811.556918-1064-115482974719291/AnsiballZ_stat.py'
Oct 02 19:03:31 compute-0 sudo[164200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:32 compute-0 python3.9[164202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:32 compute-0 sudo[164200]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:32 compute-0 sudo[164278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzngkbfarlekeuhtvzlmeahkyinzbppw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431811.556918-1064-115482974719291/AnsiballZ_file.py'
Oct 02 19:03:32 compute-0 sudo[164278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:32 compute-0 python3.9[164280]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:32 compute-0 sudo[164278]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:33 compute-0 sudo[164430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncrnlloyofqhkiuafkuhqscthgiabci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431812.9927657-1077-161555179680156/AnsiballZ_command.py'
Oct 02 19:03:33 compute-0 sudo[164430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:33 compute-0 python3.9[164432]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:03:33 compute-0 sudo[164430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:33 compute-0 podman[164433]: 2025-10-02 19:03:33.696076779 +0000 UTC m=+0.119590526 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:03:33 compute-0 podman[164484]: 2025-10-02 19:03:33.823451864 +0000 UTC m=+0.091933374 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 02 19:03:34 compute-0 sudo[164631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peirlclxdlylymfbtbaqbhmrfrffarpm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431813.7902496-1085-117039326534425/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:03:34 compute-0 sudo[164631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:34 compute-0 python3[164633]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:03:34 compute-0 sudo[164631]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:35 compute-0 sudo[164783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfmbkgwdpewbvntflodgwclyfqyyqni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431814.8482575-1093-66593535440933/AnsiballZ_stat.py'
Oct 02 19:03:35 compute-0 sudo[164783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:35 compute-0 python3.9[164785]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:35 compute-0 sudo[164783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:35 compute-0 sudo[164861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blqbfvgwdxgtltkptwttemmetdyxdrlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431814.8482575-1093-66593535440933/AnsiballZ_file.py'
Oct 02 19:03:35 compute-0 sudo[164861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:36 compute-0 python3.9[164863]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:36 compute-0 sudo[164861]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:36 compute-0 sudo[165013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blsulkehwwvvamchjunuhlvwkhmbvejr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431816.3510833-1105-163193313606880/AnsiballZ_stat.py'
Oct 02 19:03:36 compute-0 sudo[165013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:36 compute-0 podman[165015]: 2025-10-02 19:03:36.938689667 +0000 UTC m=+0.072262005 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:03:37 compute-0 python3.9[165016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:37 compute-0 sudo[165013]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:37 compute-0 sudo[165115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgxdwqynioomwsljzsednnhhsroesuwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431816.3510833-1105-163193313606880/AnsiballZ_file.py'
Oct 02 19:03:37 compute-0 sudo[165115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:37 compute-0 python3.9[165117]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:37 compute-0 sudo[165115]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:38 compute-0 sudo[165267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azawwjvztdxrsezeomkxtzrxkcqqtlhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431817.8761454-1117-202584393351416/AnsiballZ_stat.py'
Oct 02 19:03:38 compute-0 sudo[165267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:38 compute-0 python3.9[165269]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:38 compute-0 sudo[165267]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:38 compute-0 sudo[165345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhlrhltzfsdngdznxqjsqiklyiebtgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431817.8761454-1117-202584393351416/AnsiballZ_file.py'
Oct 02 19:03:38 compute-0 sudo[165345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:39 compute-0 python3.9[165347]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:39 compute-0 sudo[165345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:39 compute-0 sudo[165497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpcyivkqwktvvybildlhzydryglicuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431819.2690995-1129-261528985378969/AnsiballZ_stat.py'
Oct 02 19:03:39 compute-0 sudo[165497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:39 compute-0 python3.9[165499]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:40 compute-0 sudo[165497]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:40 compute-0 sudo[165575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njyqbppsgzniyotidazhqtbpuqakygsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431819.2690995-1129-261528985378969/AnsiballZ_file.py'
Oct 02 19:03:40 compute-0 sudo[165575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:40 compute-0 python3.9[165577]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:40 compute-0 sudo[165575]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:41 compute-0 sudo[165727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdybscxwtnctcugcuzqxyxuzllsxfnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431820.784473-1141-133998041403821/AnsiballZ_stat.py'
Oct 02 19:03:41 compute-0 sudo[165727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:41 compute-0 python3.9[165729]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:03:41 compute-0 sudo[165727]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:42 compute-0 sudo[165852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwfyirnojrjkeuuuqpdljojlyespftpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431820.784473-1141-133998041403821/AnsiballZ_copy.py'
Oct 02 19:03:42 compute-0 sudo[165852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:42 compute-0 python3.9[165854]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431820.784473-1141-133998041403821/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:42 compute-0 sudo[165852]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:43 compute-0 sudo[166004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkuseorupzrlddfzvdytgspkgjottxoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431822.4986742-1156-246592211735264/AnsiballZ_file.py'
Oct 02 19:03:43 compute-0 sudo[166004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:43 compute-0 python3.9[166006]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:43 compute-0 sudo[166004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:43 compute-0 sudo[166156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnrlumnywmywbfbehfvftgqpetmrndsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431823.5481691-1164-45954387384767/AnsiballZ_command.py'
Oct 02 19:03:43 compute-0 sudo[166156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:44 compute-0 python3.9[166158]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:03:44 compute-0 sudo[166156]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:45 compute-0 sudo[166311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjxfblbjbygpgljcknxhgahtvqmwecs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431824.4821317-1172-21514375867357/AnsiballZ_blockinfile.py'
Oct 02 19:03:45 compute-0 sudo[166311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:45 compute-0 python3.9[166313]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:45 compute-0 sudo[166311]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:45 compute-0 sudo[166463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huohjkouapbbovreylzirpoionkdrgdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431825.5861034-1181-29707022066641/AnsiballZ_command.py'
Oct 02 19:03:45 compute-0 sudo[166463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:46 compute-0 python3.9[166465]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:03:46 compute-0 sudo[166463]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:46 compute-0 sudo[166616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stnykjenacgydbpsuhzxfqldkxugnuvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431826.421441-1189-257284583039794/AnsiballZ_stat.py'
Oct 02 19:03:46 compute-0 sudo[166616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:47 compute-0 python3.9[166618]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:03:47 compute-0 sudo[166616]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:47 compute-0 sudo[166770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpedjoxqcfgtxlrkqweffvxryuwcyeia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431827.4390182-1197-216579604231267/AnsiballZ_command.py'
Oct 02 19:03:47 compute-0 sudo[166770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:48 compute-0 python3.9[166772]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:03:48 compute-0 sudo[166770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:48 compute-0 sudo[166925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srntljunxltddyjcakbzsrplyvesccvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431828.3590214-1205-170255316419002/AnsiballZ_file.py'
Oct 02 19:03:48 compute-0 sudo[166925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:48 compute-0 python3.9[166927]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:03:48 compute-0 sudo[166925]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:49 compute-0 sshd-session[143166]: Connection closed by 192.168.122.30 port 39726
Oct 02 19:03:49 compute-0 sshd-session[143163]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:03:49 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct 02 19:03:49 compute-0 systemd[1]: session-22.scope: Consumed 2min 29.718s CPU time.
Oct 02 19:03:49 compute-0 systemd-logind[793]: Session 22 logged out. Waiting for processes to exit.
Oct 02 19:03:49 compute-0 systemd-logind[793]: Removed session 22.
Oct 02 19:03:50 compute-0 PackageKit[115205]: daemon quit
Oct 02 19:03:50 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 19:03:50 compute-0 podman[166953]: 2025-10-02 19:03:50.479233718 +0000 UTC m=+0.071215886 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:03:54 compute-0 podman[166979]: 2025-10-02 19:03:54.675692406 +0000 UTC m=+0.097672183 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:03:55 compute-0 sshd-session[167000]: Accepted publickey for zuul from 192.168.122.30 port 41768 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:03:55 compute-0 systemd-logind[793]: New session 23 of user zuul.
Oct 02 19:03:55 compute-0 systemd[1]: Started Session 23 of User zuul.
Oct 02 19:03:55 compute-0 sshd-session[167000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:03:56 compute-0 sudo[167153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjwotwglyidlijboamlhrfjgewazuwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431835.5551531-24-158487002532199/AnsiballZ_systemd_service.py'
Oct 02 19:03:56 compute-0 sudo[167153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:03:56 compute-0 python3.9[167155]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:03:56 compute-0 systemd[1]: Reloading.
Oct 02 19:03:56 compute-0 systemd-rc-local-generator[167180]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:03:56 compute-0 systemd-sysv-generator[167184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:03:57 compute-0 sudo[167153]: pam_unix(sudo:session): session closed for user root
Oct 02 19:03:57 compute-0 python3.9[167340]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:03:58 compute-0 network[167357]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:03:58 compute-0 network[167358]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:03:58 compute-0 network[167359]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:03:59 compute-0 podman[157186]: time="2025-10-02T19:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:03:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12783 "" "Go-http-client/1.1"
Oct 02 19:03:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2142 "" "Go-http-client/1.1"
Oct 02 19:04:01 compute-0 openstack_network_exporter[159337]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:04:01 compute-0 openstack_network_exporter[159337]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:04:01 compute-0 openstack_network_exporter[159337]: ERROR   19:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:04:01 compute-0 openstack_network_exporter[159337]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:04:01 compute-0 openstack_network_exporter[159337]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:04:03 compute-0 sudo[167639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toydkrzyiteectbfszyywwapxrmwqchr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431842.6254694-47-108798639214188/AnsiballZ_systemd_service.py'
Oct 02 19:04:03 compute-0 sudo[167639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:03 compute-0 python3.9[167641]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:04:03 compute-0 sudo[167639]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:04 compute-0 sudo[167823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kemfrifmkznnenmxbehvhuhqyofhnvky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431843.713844-57-197807601213074/AnsiballZ_file.py'
Oct 02 19:04:04 compute-0 sudo[167823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:04 compute-0 podman[167766]: 2025-10-02 19:04:04.350665612 +0000 UTC m=+0.081978976 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 02 19:04:04 compute-0 podman[167767]: 2025-10-02 19:04:04.389261782 +0000 UTC m=+0.114987281 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:04:04 compute-0 python3.9[167833]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:04 compute-0 sudo[167823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:05 compute-0 sudo[167988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgmedofhuqwqcjewizxbasxvzhodvqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431844.8698068-65-161985919994534/AnsiballZ_file.py'
Oct 02 19:04:05 compute-0 sudo[167988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:05 compute-0 python3.9[167990]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:05 compute-0 sudo[167988]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:06 compute-0 sudo[168140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrerxifxcuixpkmnftgllgpqrcouswlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431845.8478699-74-216047953684075/AnsiballZ_command.py'
Oct 02 19:04:06 compute-0 sudo[168140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:06 compute-0 python3.9[168142]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:04:06 compute-0 sudo[168140]: pam_unix(sudo:session): session closed for user root
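The shell snippet journaled above disables certmonger only when it is currently active, then masks it unless a local unit file already exists. A rough Python equivalent of that guard logic, as a sketch (assumes root privileges; 'systemctl is-active' returns exit code 0 only for active units):

    import os
    import subprocess

    def disable_and_mask(unit: str = "certmonger.service") -> None:
        # Mirror of `if systemctl is-active ...` from the logged command.
        if subprocess.run(["systemctl", "is-active", unit]).returncode == 0:
            subprocess.run(["systemctl", "disable", "--now", unit], check=True)
            # Mask only when no unit file exists under /etc/systemd/system,
            # matching the `test -f ... || systemctl mask` fallback above.
            if not os.path.exists(os.path.join("/etc/systemd/system", unit)):
                subprocess.run(["systemctl", "mask", unit], check=True)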
Oct 02 19:04:07 compute-0 podman[168268]: 2025-10-02 19:04:07.459812722 +0000 UTC m=+0.085704382 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:04:07 compute-0 python3.9[168305]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:04:08 compute-0 sudo[168466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ircyzcclztuwknjrnokidtaczpicrkec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431847.8807068-92-12863136028105/AnsiballZ_systemd_service.py'
Oct 02 19:04:08 compute-0 sudo[168466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:08 compute-0 python3.9[168468]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:04:08 compute-0 systemd[1]: Reloading.
Oct 02 19:04:08 compute-0 systemd-rc-local-generator[168497]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:08 compute-0 systemd-sysv-generator[168501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:08 compute-0 sudo[168466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:09 compute-0 sudo[168653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyyhialobltoncidmgmklfagacaklqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431849.1463132-100-96122922291995/AnsiballZ_command.py'
Oct 02 19:04:09 compute-0 sudo[168653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:09 compute-0 python3.9[168655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:04:09 compute-0 sudo[168653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:10 compute-0 sudo[168806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazxdcjlxonliotwhrlbmjxwfkpmimov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431850.0031989-109-176089127670277/AnsiballZ_file.py'
Oct 02 19:04:10 compute-0 sudo[168806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:10 compute-0 python3.9[168808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:10 compute-0 sudo[168806]: pam_unix(sudo:session): session closed for user root
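The ansible.builtin.file task above creates the telemetry-power-monitoring config directory with mode 0750, zuul ownership, and the container_file_t SELinux type. A rough stand-alone equivalent, assuming root on an SELinux-enabled host (chcon is the command-line counterpart of the module's setype handling; the module itself goes through libselinux bindings):

    import os
    import shutil
    import subprocess

    path = "/var/lib/openstack/config/telemetry-power-monitoring"
    os.makedirs(path, mode=0o750, exist_ok=True)
    shutil.chown(path, user="zuul", group="zuul")
    # Label the tree so the containers that mount it are allowed to read it.
    subprocess.run(["chcon", "-R", "-t", "container_file_t", path], check=True)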
Oct 02 19:04:11 compute-0 python3.9[168959]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:04:11 compute-0 sshd-session[168935]: Connection closed by 39.162.46.234 port 27017
Oct 02 19:04:12 compute-0 python3.9[169111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:13 compute-0 python3.9[169232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431851.9267712-125-105454586232323/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
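The copy entries in this section each carry a checksum= field: ansible compares the SHA-1 of the rendered source with the existing destination and only rewrites the file on a mismatch. A minimal sketch of that idempotent-copy pattern (an illustrative helper, not ansible's actual implementation):

    import hashlib
    import pathlib
    import shutil

    def copy_if_changed(src: str, dest: str) -> bool:
        """Copy src over dest only when the SHA-1 checksums differ."""
        def sha1(p: str) -> str:
            return hashlib.sha1(pathlib.Path(p).read_bytes()).hexdigest()
        if pathlib.Path(dest).exists() and sha1(src) == sha1(dest):
            return False  # contents identical, leave dest untouched
        shutil.copy2(src, dest)
        return True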
Oct 02 19:04:14 compute-0 sudo[169382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhvsfqbngypnnckdxldnbvujfqisjgmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431853.847923-143-164933606409807/AnsiballZ_getent.py'
Oct 02 19:04:14 compute-0 sudo[169382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:14 compute-0 python3.9[169384]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:04:14 compute-0 sudo[169382]: pam_unix(sudo:session): session closed for user root
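The getent invocation above looks up the ceilometer account in the passwd database before its config files are written. The same lookup from Python, for reference (pwd.getpwnam raises KeyError when the user is missing, matching the module's fail_key=True behaviour):

    import pwd

    entry = pwd.getpwnam("ceilometer")
    print(entry.pw_uid, entry.pw_gid, entry.pw_dir)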
Oct 02 19:04:16 compute-0 python3.9[169535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:16 compute-0 python3.9[169656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431855.5101113-171-275344858045309/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:17 compute-0 python3.9[169806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:18 compute-0 python3.9[169927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431856.9186382-171-133539243454223/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:18 compute-0 python3.9[170077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:19 compute-0 python3.9[170198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431858.3483474-171-250378650775283/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:20 compute-0 python3.9[170349]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:04:20 compute-0 podman[170376]: 2025-10-02 19:04:20.617604844 +0000 UTC m=+0.057191973 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:04:21 compute-0 python3.9[170525]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:04:21 compute-0 python3.9[170677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:22 compute-0 python3.9[170798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431861.3123438-230-114979068330144/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:23 compute-0 python3.9[170948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:23 compute-0 python3.9[171024]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.432 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.433 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.434 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.435 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.435 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:04:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:04:24 compute-0 python3.9[171174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:25 compute-0 podman[171270]: 2025-10-02 19:04:25.140683795 +0000 UTC m=+0.063821895 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:04:25 compute-0 python3.9[171309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431863.9878674-230-248488700495407/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:26 compute-0 python3.9[171466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:26 compute-0 python3.9[171587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431865.5034964-230-208415114014803/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:27 compute-0 python3.9[171737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:28 compute-0 python3.9[171858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431866.803002-230-201755243970699/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:28 compute-0 python3.9[172008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:29 compute-0 python3.9[172129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431868.3365085-230-272811757221706/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:29 compute-0 podman[157186]: time="2025-10-02T19:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:04:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12783 "" "Go-http-client/1.1"
Oct 02 19:04:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2146 "" "Go-http-client/1.1"
Oct 02 19:04:30 compute-0 python3.9[172279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:30 compute-0 python3.9[172355]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:31 compute-0 sudo[172505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvqjfaemoiiczrwjuxmpmyrxqwnhgggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431871.0131252-325-83582676311055/AnsiballZ_file.py'
Oct 02 19:04:31 compute-0 sudo[172505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:31 compute-0 openstack_network_exporter[159337]: ERROR   19:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:04:31 compute-0 openstack_network_exporter[159337]: ERROR   19:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:04:31 compute-0 openstack_network_exporter[159337]: ERROR   19:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:04:31 compute-0 openstack_network_exporter[159337]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:04:31 compute-0 openstack_network_exporter[159337]: ERROR   19:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:04:31 compute-0 python3.9[172507]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:31 compute-0 sudo[172505]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:32 compute-0 sudo[172657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbhrvefqwabujyvccspfaqdawzbfnch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431871.9612577-333-40075892698804/AnsiballZ_file.py'
Oct 02 19:04:32 compute-0 sudo[172657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:32 compute-0 python3.9[172659]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:32 compute-0 sudo[172657]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:33 compute-0 sudo[172809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yetkqwykfyiirobmxvflsgqhdivhnkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431872.7430024-341-167808074194618/AnsiballZ_file.py'
Oct 02 19:04:33 compute-0 sudo[172809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:33 compute-0 python3.9[172811]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:33 compute-0 sudo[172809]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:33 compute-0 sudo[172961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grolasskwqsapozzyqwsqfblgvzzjtxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/AnsiballZ_stat.py'
Oct 02 19:04:33 compute-0 sudo[172961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:34 compute-0 python3.9[172963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:34 compute-0 sudo[172961]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:34 compute-0 podman[173034]: 2025-10-02 19:04:34.654200188 +0000 UTC m=+0.088161406 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Oct 02 19:04:34 compute-0 podman[173038]: 2025-10-02 19:04:34.708994448 +0000 UTC m=+0.130064082 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:04:34 compute-0 sudo[173131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaontyofdxafxzceflsxnltmizchrdik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/AnsiballZ_copy.py'
Oct 02 19:04:34 compute-0 sudo[173131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:34 compute-0 python3.9[173133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:34 compute-0 sudo[173131]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:35 compute-0 sudo[173207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khqzlpuonrhvgrchrraxrflrslzttdhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/AnsiballZ_stat.py'
Oct 02 19:04:35 compute-0 sudo[173207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:35 compute-0 python3.9[173209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:35 compute-0 sudo[173207]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:35 compute-0 sudo[173330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmwetrinvrycuscsreeccdjvyaeazgdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/AnsiballZ_copy.py'
Oct 02 19:04:35 compute-0 sudo[173330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:36 compute-0 python3.9[173332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431873.5620093-349-225009577398480/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:36 compute-0 sudo[173330]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:36 compute-0 sudo[173482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzlvhirjqkyebjkoxjicwmhzfzrhcdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431876.366185-349-232032453631895/AnsiballZ_stat.py'
Oct 02 19:04:36 compute-0 sudo[173482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:36 compute-0 python3.9[173484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:36 compute-0 sudo[173482]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:37 compute-0 sudo[173605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizlgcdoqgdklzwkrznjqsfrnaqxzxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431876.366185-349-232032453631895/AnsiballZ_copy.py'
Oct 02 19:04:37 compute-0 sudo[173605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:37 compute-0 python3.9[173607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431876.366185-349-232032453631895/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:37 compute-0 podman[173608]: 2025-10-02 19:04:37.659564008 +0000 UTC m=+0.084888109 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:04:37 compute-0 sudo[173605]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:38 compute-0 sudo[173781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydbprdfaigzeurtkhoikuctmchigvcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431878.004825-391-184444583537751/AnsiballZ_container_config_data.py'
Oct 02 19:04:38 compute-0 sudo[173781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:38 compute-0 python3.9[173783]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Oct 02 19:04:38 compute-0 sudo[173781]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:39 compute-0 sudo[173933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcspcnrgahruaqdjwscniaihggxmlnyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431879.075694-400-196509519283271/AnsiballZ_container_config_hash.py'
Oct 02 19:04:39 compute-0 sudo[173933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:39 compute-0 python3.9[173935]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:04:39 compute-0 sudo[173933]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:40 compute-0 sudo[174085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqwopbxvdaazwlqmrnihxruhmhhvkes ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431880.1584485-410-131638884416063/AnsiballZ_edpm_container_manage.py'
Oct 02 19:04:40 compute-0 sudo[174085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:41 compute-0 python3[174087]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:04:46 compute-0 podman[174100]: 2025-10-02 19:04:46.842964352 +0000 UTC m=+5.663245612 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct 02 19:04:47 compute-0 podman[174198]: 2025-10-02 19:04:46.99453222 +0000 UTC m=+0.038575947 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct 02 19:04:47 compute-0 podman[174198]: 2025-10-02 19:04:47.096567743 +0000 UTC m=+0.140611420 container create 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 02 19:04:47 compute-0 python3[174087]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Oct 02 19:04:47 compute-0 sudo[174085]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:47 compute-0 sudo[174386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuvdtjfdilqojlythmlyjyrtihliizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431887.5189805-418-106857201085902/AnsiballZ_stat.py'
Oct 02 19:04:47 compute-0 sudo[174386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:48 compute-0 python3.9[174388]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:04:48 compute-0 sudo[174386]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:48 compute-0 sudo[174540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwljeyfsuzmaoypfcfelkqewcmpzdlxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431888.475344-427-104496012084814/AnsiballZ_file.py'
Oct 02 19:04:48 compute-0 sudo[174540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:49 compute-0 python3.9[174542]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:49 compute-0 sudo[174540]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:49 compute-0 sudo[174692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmhlobbrpsxuknsslkfvrratgicgqfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431889.1925817-427-155924483018792/AnsiballZ_copy.py'
Oct 02 19:04:49 compute-0 sudo[174692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:50 compute-0 python3.9[174694]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431889.1925817-427-155924483018792/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:50 compute-0 sudo[174692]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:50 compute-0 sudo[174781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxnokczbqdlrgcrjxmlvkihhluswwtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431889.1925817-427-155924483018792/AnsiballZ_systemd.py'
Oct 02 19:04:50 compute-0 sudo[174781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:50 compute-0 podman[174742]: 2025-10-02 19:04:50.793796037 +0000 UTC m=+0.095705063 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:04:51 compute-0 python3.9[174794]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:04:51 compute-0 systemd[1]: Reloading.
Oct 02 19:04:51 compute-0 systemd-sysv-generator[174823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:51 compute-0 systemd-rc-local-generator[174820]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:51 compute-0 sudo[174781]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:51 compute-0 sudo[174903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyiaiksydqnrwxjpzaweejgaxbhxfdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431889.1925817-427-155924483018792/AnsiballZ_systemd.py'
Oct 02 19:04:51 compute-0 sudo[174903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:52 compute-0 python3.9[174905]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:04:52 compute-0 systemd[1]: Reloading.
Oct 02 19:04:52 compute-0 systemd-rc-local-generator[174934]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:52 compute-0 systemd-sysv-generator[174938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:52 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct 02 19:04:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:04:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.
Oct 02 19:04:52 compute-0 podman[174945]: 2025-10-02 19:04:52.782162286 +0000 UTC m=+0.163106996 container init 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + sudo -E kolla_set_configs
Oct 02 19:04:52 compute-0 podman[174945]: 2025-10-02 19:04:52.819449051 +0000 UTC m=+0.200393751 container start 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:04:52 compute-0 sudo[174967]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:04:52 compute-0 sudo[174967]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:04:52 compute-0 sudo[174967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:04:52 compute-0 podman[174945]: ceilometer_agent_ipmi
Oct 02 19:04:52 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Validating config file
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Copying service configuration files
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:04:52 compute-0 sudo[174903]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: INFO:__main__:Writing out command to execute
Oct 02 19:04:52 compute-0 podman[174968]: 2025-10-02 19:04:52.913305774 +0000 UTC m=+0.073796506 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:04:52 compute-0 sudo[174967]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:52 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-69338d0d64dc119b.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:04:52 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-69338d0d64dc119b.service: Failed with result 'exit-code'.
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: ++ cat /run_command
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + ARGS=
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + sudo kolla_copy_cacerts
Oct 02 19:04:52 compute-0 sudo[174988]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:04:52 compute-0 sudo[174988]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:04:52 compute-0 sudo[174988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:04:52 compute-0 sudo[174988]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + [[ ! -n '' ]]
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + . kolla_extend_start
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + umask 0022
Oct 02 19:04:52 compute-0 ceilometer_agent_ipmi[174960]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Oct 02 19:04:53 compute-0 sudo[175140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhylwkjelcacziumkahkwdkkxxbobqed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431893.2568963-453-231871651949850/AnsiballZ_container_config_data.py'
Oct 02 19:04:53 compute-0 sudo[175140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.770 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.770 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.770 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.770 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.771 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.772 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.773 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.774 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.775 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.776 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.777 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.778 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.779 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.780 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.781 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.782 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.783 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.784 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.785 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.785 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.785 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.807 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.809 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.811 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:04:53 compute-0 python3.9[175142]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Oct 02 19:04:53 compute-0 sudo[175140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:53 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:53.925 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpodgj2jcx/privsep.sock']
Oct 02 19:04:53 compute-0 sudo[175147]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpodgj2jcx/privsep.sock
Oct 02 19:04:53 compute-0 sudo[175147]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:04:53 compute-0 sudo[175147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:04:54 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 02 19:04:54 compute-0 sudo[175301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rksdknyerrywowkrzykbhsrlasxxxjiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431894.1892693-462-66308235233597/AnsiballZ_container_config_hash.py'
Oct 02 19:04:54 compute-0 sudo[175301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:54 compute-0 sudo[175147]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.606 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.607 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpodgj2jcx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.481 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.487 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.489 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.490 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.728 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.728 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.729 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.729 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.730 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.731 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.731 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.731 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Oct 02 19:04:54 compute-0 python3.9[175303]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.734 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.735 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.736 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.737 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.738 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.739 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.740 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.741 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.745 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.746 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.747 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.748 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.749 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.750 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 sudo[175301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.751 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.752 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.753 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.754 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 02 19:04:54 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:04:54.756 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
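[editor's note] The long DEBUG block above is oslo.config's standard startup dump: cotyledon glues the agent's ConfigOpts into its service lifecycle and calls log_opt_values(), which prints every registered option at DEBUG, frames the dump with the asterisk banner, and masks any option registered with secret=True (transport_url, publisher.telemetry_secret, the rgw keys) as ****. A minimal sketch of that mechanism, assuming a tiny illustrative option set rather than ceilometer's real one:

    import logging
    from oslo_config import cfg

    # Illustrative options only; ceilometer registers a much larger set.
    opts = [
        cfg.StrOpt('sample_source', default='openstack'),
        cfg.StrOpt('transport_url', secret=True),  # secret=True renders as ****
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    conf([])  # parse an empty command line

    logging.basicConfig(level=logging.DEBUG)
    conf.log_opt_values(logging.getLogger('cotyledon.oslo_config_glue'),
                        logging.DEBUG)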
Oct 02 19:04:55 compute-0 sudo[175468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjeemocknixvvoymgkvasgjopafuvgzs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431895.0589573-472-268046966655730/AnsiballZ_edpm_container_manage.py'
Oct 02 19:04:55 compute-0 sudo[175468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:55 compute-0 podman[175431]: 2025-10-02 19:04:55.422423046 +0000 UTC m=+0.078627600 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:04:55 compute-0 python3[175474]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:04:59 compute-0 podman[157186]: time="2025-10-02T19:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:04:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 15574 "" "Go-http-client/1.1"
Oct 02 19:04:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2583 "" "Go-http-client/1.1"
Oct 02 19:05:01 compute-0 openstack_network_exporter[159337]: ERROR   19:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:05:01 compute-0 openstack_network_exporter[159337]: ERROR   19:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:05:01 compute-0 openstack_network_exporter[159337]: ERROR   19:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:05:01 compute-0 openstack_network_exporter[159337]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:05:01 compute-0 openstack_network_exporter[159337]: ERROR   19:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
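[editor's note] The exporter errors above are lookup failures rather than a service outage: openstack_network_exporter drives ovsdb-server, ovn-northd and the datapath through their appctl control sockets, and on this compute node the ovn-northd and userspace-datapath sockets simply do not exist (northd likely runs on the controllers, and the node appears to use the kernel datapath, so dpif-netdev/pmd-* calls have nothing to query). A sketch of the kind of check that produces the "no control socket files found" message, assuming the conventional runtime directories:

    import glob

    def find_control_socket(pattern):
        """Return the first matching appctl control socket, or None."""
        hits = glob.glob(pattern)
        return hits[0] if hits else None

    # Conventional locations (an assumption about this host's layout):
    ovsdb = find_control_socket('/var/run/openvswitch/ovsdb-server.*.ctl')
    northd = find_control_socket('/var/run/ovn/ovn-northd.*.ctl')

    if ovsdb is None:
        print('no control socket files found for the ovs db server')
    if northd is None:
        print('no control socket files found for ovn-northd')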
Oct 02 19:05:02 compute-0 podman[175495]: 2025-10-02 19:05:02.441958158 +0000 UTC m=+6.691928241 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct 02 19:05:02 compute-0 podman[175697]: 2025-10-02 19:05:02.593013079 +0000 UTC m=+0.031830552 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct 02 19:05:02 compute-0 podman[175697]: 2025-10-02 19:05:02.688232346 +0000 UTC m=+0.127049729 container create df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=edpm)
Oct 02 19:05:02 compute-0 python3[175474]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
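[editor's note] The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the config_data dict from kepler.json into a podman create command: environment becomes --env, net: host becomes --network host, ports become --publish, volumes become --volume, and the image plus its command close the argument list. A simplified reconstruction of that translation (function name and exact flag ordering are assumptions, not the module's code):

    def to_podman_create(name, data):
        """Flatten an edpm-style config_data dict into podman create argv."""
        args = ['podman', 'create', '--name', name,
                '--conmon-pidfile', f'/run/{name}.pid']
        for key, val in data.get('environment', {}).items():
            args += ['--env', f'{key}={val}']
        if data.get('net') == 'host':
            args += ['--network', 'host']
        if str(data.get('privileged', '')).lower() == 'true':
            args.append('--privileged=True')
        for port in data.get('ports', []):
            args += ['--publish', port]
        for vol in data.get('volumes', []):
            args += ['--volume', vol]
        args.append(data['image'])
        cmd = data.get('command', [])
        args += cmd if isinstance(cmd, list) else [cmd]
        return args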
Oct 02 19:05:02 compute-0 sudo[175468]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:03 compute-0 sudo[175886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkyuvkdecvcguazvgqlphmxpkiergcgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431903.0748367-480-173432397858810/AnsiballZ_stat.py'
Oct 02 19:05:03 compute-0 sudo[175886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:03 compute-0 python3.9[175888]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:05:03 compute-0 sudo[175886]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:04 compute-0 sudo[176040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltjmamebitsxsuvvsksvpjgjumwkeslh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431903.9975228-489-266819025919522/AnsiballZ_file.py'
Oct 02 19:05:04 compute-0 sudo[176040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:04 compute-0 python3.9[176042]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:04 compute-0 sudo[176040]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:05 compute-0 podman[176165]: 2025-10-02 19:05:05.209140401 +0000 UTC m=+0.083547836 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Oct 02 19:05:05 compute-0 sudo[176220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntxdbkirhiycczyncfrseqqkwfeopol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431904.6130204-489-169380215641861/AnsiballZ_copy.py'
Oct 02 19:05:05 compute-0 sudo[176220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:05 compute-0 podman[176166]: 2025-10-02 19:05:05.300442043 +0000 UTC m=+0.168973181 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:05:05 compute-0 python3.9[176232]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431904.6130204-489-169380215641861/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:05 compute-0 sudo[176220]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:05 compute-0 sudo[176312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yscwulrxebsatpcvjpirddeerghlbeqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431904.6130204-489-169380215641861/AnsiballZ_systemd.py'
Oct 02 19:05:05 compute-0 sudo[176312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:06 compute-0 python3.9[176314]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:05:06 compute-0 systemd[1]: Reloading.
Oct 02 19:05:06 compute-0 systemd-rc-local-generator[176343]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:06 compute-0 systemd-sysv-generator[176346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:06 compute-0 sudo[176312]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:06 compute-0 sudo[176424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awlwobnogpoxiioqhzfdyagabrpyvcsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431904.6130204-489-169380215641861/AnsiballZ_systemd.py'
Oct 02 19:05:06 compute-0 sudo[176424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:07 compute-0 python3.9[176426]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:05:07 compute-0 systemd[1]: Reloading.
Oct 02 19:05:07 compute-0 systemd-rc-local-generator[176456]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:07 compute-0 systemd-sysv-generator[176460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
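[editor's note] The two ansible-systemd calls bracketing these reloads are the usual unit rollout pattern: first daemon_reload=True so systemd re-reads the freshly copied edpm_kepler.service, then state=restarted enabled=True to enable and (re)start it; the rc.local and SysV-generator messages are routine noise emitted on every reload. In plain commands that amounts to roughly the following sketch (not ansible's exact implementation):

    import subprocess

    def deploy_unit(unit: str) -> None:
        """Reload systemd, then enable and restart the unit (sketch)."""
        subprocess.run(['systemctl', 'daemon-reload'], check=True)
        subprocess.run(['systemctl', 'enable', unit], check=True)
        subprocess.run(['systemctl', 'restart', unit], check=True)

    deploy_unit('edpm_kepler.service')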
Oct 02 19:05:07 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:05:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:05:07 compute-0 podman[176480]: 2025-10-02 19:05:07.89500516 +0000 UTC m=+0.175665184 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:05:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.
Oct 02 19:05:08 compute-0 podman[176467]: 2025-10-02 19:05:08.151340788 +0000 UTC m=+0.496501903 container init df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc.)
Oct 02 19:05:08 compute-0 kepler[176497]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:05:08 compute-0 podman[176467]: 2025-10-02 19:05:08.187367773 +0000 UTC m=+0.532528888 container start df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.194317       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.194595       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.194634       1 config.go:295] kernel version: 5.14
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.195510       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.195548       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.196293       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.196317       1 power.go:79] using none to obtain power
Oct 02 19:05:08 compute-0 kepler[176497]: E1002 19:05:08.196340       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:05:08 compute-0 kepler[176497]: E1002 19:05:08.196392       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:05:08 compute-0 kepler[176497]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.199611       1 exporter.go:84] Number of CPUs: 8
Oct 02 19:05:08 compute-0 podman[176467]: kepler
Oct 02 19:05:08 compute-0 systemd[1]: Started kepler container.
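[Editor's note] The config_data blob in the container-init entry above fully determines how edpm_ansible launches this container. A minimal sketch of the equivalent invocation, rebuilt from that blob — the image, net mode, port, environment, and volume list are copied verbatim from the log; the flag mapping (--privileged, --net, -p, -e, -v) is an assumption based on standard podman CLI usage, and the restart/healthcheck keys (handled by systemd/edpm_ansible, not podman run) are deliberately omitted:

    # Sketch: rebuild the `podman run` command implied by the logged config_data.
    # Values are taken from the journal entry above; the flag mapping is assumed.
    import shlex

    config = {
        "image": "quay.io/sustainable_computing_io/kepler:release-0.7.12",
        "privileged": "true",
        "ports": ["8888:8888"],
        "net": "host",
        "command": "-v=2",
        "environment": {
            "ENABLE_GPU": "true",
            "EXPOSE_CONTAINER_METRICS": "true",
            "ENABLE_PROCESS_METRICS": "true",
            "EXPOSE_VM_METRICS": "true",
            "EXPOSE_ESTIMATED_IDLE_POWER_METRICS": "false",
            "LIBVIRT_METADATA_URI": "http://openstack.org/xmlns/libvirt/nova/1.1",
        },
        "volumes": [
            "/lib/modules:/lib/modules:ro",
            "/run/libvirt:/run/libvirt:shared,ro",
            "/sys:/sys",
            "/proc:/proc",
            "/var/lib/openstack/healthchecks/kepler:/openstack:ro,z",
        ],
    }

    cmd = ["podman", "run", "--name", "kepler", "--net", config["net"]]
    if config["privileged"] == "true":
        cmd.append("--privileged")
    for p in config["ports"]:
        cmd += ["-p", p]
    for k, v in config["environment"].items():
        cmd += ["-e", f"{k}={v}"]
    for vol in config["volumes"]:
        cmd += ["-v", vol]
    cmd += [config["image"], config["command"]]
    print(" ".join(shlex.quote(c) for c in cmd))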
Oct 02 19:05:08 compute-0 sudo[176424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:08 compute-0 podman[176518]: 2025-10-02 19:05:08.467957691 +0000 UTC m=+0.267650548 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:05:08 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-1b9c399fb87823b0.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:05:08 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-1b9c399fb87823b0.service: Failed with result 'exit-code'.
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.804389       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.804470       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:05:08 compute-0 kepler[176497]: E1002 19:05:08.804520       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
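[Editor's note] The three watcher lines above are expected on a standalone compute node: Kepler first tries the in-cluster Kubernetes config, which, per the message it logs, requires KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT in the environment. A minimal sketch of that precondition — the two variable names come straight from the log line; the helper itself is illustrative:

    # Sketch: the in-cluster config Kepler's watcher attempts is only available
    # when Kubernetes injects these env vars. On a bare EDPM node they are
    # unset, so the watcher is skipped -- matching "k8s APIserver watcher was
    # not enabled" above.
    import os

    def in_cluster_config_available() -> bool:
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")) and \
               bool(os.environ.get("KUBERNETES_SERVICE_PORT"))

    if __name__ == "__main__":
        print("in-cluster k8s config available:", in_cluster_config_available())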
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.810478       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.810520       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.815439       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.815476       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.827143       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.827193       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.827210       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838255       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838299       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838305       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838330       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838340       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838356       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838487       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838517       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838544       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838602       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.838728       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:05:08 compute-0 kepler[176497]: I1002 19:05:08.839134       1 exporter.go:208] Started Kepler in 645.202737ms
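[Editor's note] Once the exporter reports it is listening on 0.0.0.0:8888, the registered Process/Container/VM/Node metrics can be pulled over plain HTTP from the node itself (net=host per the config above). A minimal smoke test — the port comes from the log line; serving Prometheus text format at the /metrics path is an assumption based on Kepler's documented defaults:

    # Sketch: fetch the Kepler exporter endpoint and print the first few
    # kepler_* sample lines. 127.0.0.1 works because the container runs with
    # host networking; /metrics as the path is an assumption.
    import urllib.request

    with urllib.request.urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    kepler_lines = [l for l in body.splitlines() if l.startswith("kepler_")]
    print("\n".join(kepler_lines[:10]) or "no kepler_* samples yet")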
Oct 02 19:05:09 compute-0 sudo[176701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdejvrwucftohfieelesdxtmqasplmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431908.6578362-513-231756306177684/AnsiballZ_systemd.py'
Oct 02 19:05:09 compute-0 sudo[176701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:10 compute-0 python3.9[176703]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:10 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Oct 02 19:05:10 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:05:10.687 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 02 19:05:10 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:05:10.789 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct 02 19:05:10 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:05:10.789 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct 02 19:05:10 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:05:10.790 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct 02 19:05:10 compute-0 ceilometer_agent_ipmi[174960]: 2025-10-02 19:05:10.804 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Oct 02 19:05:10 compute-0 systemd[1]: libpod-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:05:10 compute-0 podman[176707]: 2025-10-02 19:05:10.968703505 +0000 UTC m=+0.474849224 container died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:05:10 compute-0 systemd[1]: libpod-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Consumed 2.239s CPU time.
Oct 02 19:05:10 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-69338d0d64dc119b.timer: Deactivated successfully.
Oct 02 19:05:10 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.
Oct 02 19:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-userdata-shm.mount: Deactivated successfully.
Oct 02 19:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63-merged.mount: Deactivated successfully.
Oct 02 19:05:11 compute-0 podman[176707]: 2025-10-02 19:05:11.888645875 +0000 UTC m=+1.394791624 container cleanup 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 19:05:11 compute-0 podman[176707]: ceilometer_agent_ipmi
Oct 02 19:05:12 compute-0 podman[176734]: ceilometer_agent_ipmi
Oct 02 19:05:12 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct 02 19:05:12 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct 02 19:05:12 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct 02 19:05:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:05:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.
Oct 02 19:05:12 compute-0 podman[176746]: 2025-10-02 19:05:12.283879078 +0000 UTC m=+0.210531743 container init 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + sudo -E kolla_set_configs
Oct 02 19:05:12 compute-0 podman[176746]: 2025-10-02 19:05:12.334023642 +0000 UTC m=+0.260676257 container start 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct 02 19:05:12 compute-0 podman[176746]: ceilometer_agent_ipmi
Oct 02 19:05:12 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct 02 19:05:12 compute-0 sudo[176768]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:05:12 compute-0 sudo[176768]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:05:12 compute-0 sudo[176768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:05:12 compute-0 sudo[176701]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Validating config file
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Copying service configuration files
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: INFO:__main__:Writing out command to execute
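[Editor's note] The kolla_set_configs run above follows kolla's config.json contract: each entry names a source and destination, and with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS the destination is deleted and re-copied on every start — hence the Deleting/Copying/Setting-permission triplets. A minimal sketch of that copy loop; the field names (source, dest, perm) follow kolla's config.json schema but should be treated as an assumption, and this is an illustration rather than the real kolla implementation:

    # Sketch: the delete/copy/chmod cycle that kolla_set_configs logs above.
    import json, os, shutil

    def copy_always(config_path="/var/lib/kolla/config_files/config.json"):
        with open(config_path) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            src, dest = entry["source"], entry["dest"]
            if os.path.exists(dest):
                os.remove(dest)                  # "Deleting <dest>"
            shutil.copy2(src, dest)              # "Copying <src> to <dest>"
            os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission"
        return cfg.get("command")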
Oct 02 19:05:12 compute-0 podman[176769]: 2025-10-02 19:05:12.432058498 +0000 UTC m=+0.076149132 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:05:12 compute-0 sudo[176768]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: ++ cat /run_command
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + ARGS=
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + sudo kolla_copy_cacerts
Oct 02 19:05:12 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:05:12 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: Failed with result 'exit-code'.
Oct 02 19:05:12 compute-0 sudo[176797]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:05:12 compute-0 sudo[176797]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:05:12 compute-0 sudo[176797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:05:12 compute-0 sudo[176797]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + [[ ! -n '' ]]
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + . kolla_extend_start
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + umask 0022
Oct 02 19:05:12 compute-0 ceilometer_agent_ipmi[176762]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
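[Editor's note] The tail of the start script shows kolla's hand-off: the command written out by kolla_set_configs is read back from /run_command (the "++ cat /run_command" line) and exec'd, so ceilometer-polling replaces the shell as the container's main process. The same pattern in Python terms, as a sketch — the /run_command path comes from the trace above:

    # Sketch: read the command kolla_set_configs wrote out and exec it,
    # replacing the current process, mirroring the shell trace above.
    import os, shlex

    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())
    # e.g. ['/usr/bin/ceilometer-polling', '--polling-namespaces', 'ipmi',
    #       '--logfile', '/dev/stdout'] per the log above
    os.execvp(cmd[0], cmd)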
Oct 02 19:05:13 compute-0 sudo[176943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqdpwlzyuyalxkkqxhfycgpdzqalimo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431912.646771-521-49466267088642/AnsiballZ_systemd.py'
Oct 02 19:05:13 compute-0 sudo[176943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.342 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.343 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.344 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.345 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.346 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.347 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.348 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.349 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.350 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.351 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.352 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.353 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.354 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.355 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.356 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.357 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.358 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.358 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.358 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.358 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.358 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
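The long option dump that ends at the row of asterisks above is oslo.config's standard startup report: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values(), which walks every registered option and logs one `group.name = value` line, and any option registered with secret=True (telemetry_secret, rgw access/secret keys, transport_url) is rendered as ****. A minimal sketch of that mechanism, assuming only that oslo.config is installed; the two options below are illustrative stand-ins:

```python
import logging

from oslo_config import cfg

# Register one plain and one secret option, mirroring the dump format above.
CONF = cfg.ConfigOpts()
CONF.register_opts([cfg.IntOpt('batch_size', default=50)], group='polling')
CONF.register_opts([cfg.StrOpt('telemetry_secret', secret=True)],
                   group='publisher')

logging.basicConfig(level=logging.DEBUG)
CONF([], project='ceilometer')  # parse an empty command line
# Emits "polling.batch_size = 50" and "publisher.telemetry_secret = ****".
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
```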
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.380 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.382 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.384 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
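The three INFO lines above are the dynamic-pollster discovery pass: the polling manager scans each directory listed in polling.pollsters_definitions_dirs for YAML definition files and, finding none in /etc/ceilometer/pollsters.d, falls back to the static entry-point pollsters. A rough sketch of that discovery step, assuming a plain glob over *.yaml (the agent's actual matching rules may differ):

```python
import glob
import os

dirs = ['/etc/ceilometer/pollsters.d']  # polling.pollsters_definitions_dirs
files = [f for d in dirs for f in sorted(glob.glob(os.path.join(d, '*.yaml')))]
if not files:
    # Corresponds to the "No dynamic pollsters..." INFO lines above.
    print(f"No dynamic pollsters file found in dirs [{dirs}].")
```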
Oct 02 19:05:13 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:13.411 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp0m1ffsmt/privsep.sock']
Oct 02 19:05:13 compute-0 sudo[176950]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp0m1ffsmt/privsep.sock
Oct 02 19:05:13 compute-0 sudo[176950]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:05:13 compute-0 sudo[176950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:05:13 compute-0 python3.9[176945]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:13 compute-0 systemd[1]: Stopping kepler container...
Oct 02 19:05:13 compute-0 kepler[176497]: I1002 19:05:13.693875       1 exporter.go:218] Received shutdown signal
Oct 02 19:05:13 compute-0 kepler[176497]: I1002 19:05:13.694743       1 exporter.go:226] Exiting...
Oct 02 19:05:13 compute-0 systemd[1]: libpod-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:05:13 compute-0 podman[176956]: 2025-10-02 19:05:13.875564898 +0000 UTC m=+0.259715716 container died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30)
Oct 02 19:05:13 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-1b9c399fb87823b0.timer: Deactivated successfully.
Oct 02 19:05:13 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.
Oct 02 19:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-userdata-shm.mount: Deactivated successfully.
Oct 02 19:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec69d7149010c9ce04b706cc88e5698403e870d64bbbb9ba31e78bedc9b025c-merged.mount: Deactivated successfully.
Oct 02 19:05:13 compute-0 podman[176956]: 2025-10-02 19:05:13.92689619 +0000 UTC m=+0.311047038 container cleanup df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, name=ubi9, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9)
Oct 02 19:05:13 compute-0 podman[176956]: kepler
Oct 02 19:05:14 compute-0 podman[176983]: kepler
Oct 02 19:05:14 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Oct 02 19:05:14 compute-0 systemd[1]: Stopped kepler container.
Oct 02 19:05:14 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:05:14 compute-0 sudo[176950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.146 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.147 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0m1ffsmt/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.021 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.025 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.027 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.027 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
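The sudo and privsep lines above trace one round trip: the unprivileged agent (uid 42405) launches privsep-helper through ceilometer-rootwrap, the helper starts as root, reduces itself to the bounded capability set printed in the INFO line, and the client connects back over the socket in /tmp. The pam_systemd "Failed to connect to system bus" message is common inside containers, where no system D-Bus is available, and does not block the session. A minimal sketch of how such a context is declared with oslo.privsep; the module path and entrypoint are illustrative, while the capability list is copied from the log:

```python
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Matches the eff/prm capability set logged by the daemon above.
sys_admin_pctxt = priv_context.PrivContext(
    'ceilometer',
    cfg_section='privsep',
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                  caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                  caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
)

@sys_admin_pctxt.entrypoint
def read_sysfs(path):
    # The body executes inside the root privsep daemon, not in the caller.
    with open(path) as f:
        return f.read()
```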
Oct 02 19:05:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:05:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.
Oct 02 19:05:14 compute-0 podman[176997]: 2025-10-02 19:05:14.207946833 +0000 UTC m=+0.159435629 container init df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, io.openshift.expose-services=)
Oct 02 19:05:14 compute-0 podman[176997]: 2025-10-02 19:05:14.232675489 +0000 UTC m=+0.184164285 container start df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm)
Oct 02 19:05:14 compute-0 podman[176997]: kepler
Oct 02 19:05:14 compute-0 kepler[177012]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
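This kepler warning is benign on most hosts: cpu0 usually cannot be hot-unplugged, so the kernel does not create /sys/devices/system/cpu/cpu0/online at all, and a reader that treats the missing file as "online" avoids the noise. An illustrative check (a hypothetical helper, not kepler's actual Go code):

```python
from pathlib import Path

def cpu_is_online(n: int) -> bool:
    p = Path(f"/sys/devices/system/cpu/cpu{n}/online")
    try:
        return p.read_text().strip() == "1"
    except FileNotFoundError:
        return True  # cpu0 typically lacks the attribute and is always online
```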
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.244 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.245 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 systemd[1]: Started kepler container.
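Restarting edpm_kepler.service also recreates the container's transient healthcheck timer (the Stopped/Started `<container-id>-....timer` pairs above); on each tick the timer runs the healthcheck test from config_data against the running container. Roughly what that timer executes, wrapped in Python here purely for illustration:

```python
import subprocess

CID = "df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c"

# Equivalent of the transient unit's command; exits non-zero (and podman
# marks the container unhealthy) when '/openstack/healthcheck kepler' fails.
rc = subprocess.run(["podman", "healthcheck", "run", CID]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
```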
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.247 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.248 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.249 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.250 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.251 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.252 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
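The run of "Skip loading extension" DEBUG lines explains the WARNING above: every pollster in the ipmi namespace either needs a working ipmitool (absent on this KVM guest) or fails in its Intel Node Manager constructor, so the stevedore load-failure callback drops each one and the agent is left with zero pollsters. A condensed sketch of that loading pattern, assuming the ceilometer.poll.ipmi entry-point namespace:

```python
from stevedore import extension

def _catch_extension_load_error(mgr, ep, exc):
    # Mirrors the "Skip loading extension for <name>: <error>" lines above.
    print(f"Skip loading extension for {ep.name}: {exc}")

mgr = extension.ExtensionManager(
    namespace='ceilometer.poll.ipmi',
    invoke_on_load=True,
    on_load_failure_callback=_catch_extension_load_error,
)
if not list(mgr):
    print("No valid pollsters can be loaded from ['ipmi'] namespaces")
```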
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.256 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.256 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.256 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.256 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.256 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
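From this point the dump repeats under process tag 12: cotyledon runs the polling agent as a separate worker process, and its oslo_config_glue hook (_load_service_options, visible in the source reference above) re-logs the full CONF, sources first and values after, as each worker starts, which is why the same options appear twice in this journal. A skeletal sketch of that wiring; the service class below is hypothetical, not ceilometer's actual one:

```python
import cotyledon
from cotyledon import oslo_config_glue
from oslo_config import cfg

CONF = cfg.ConfigOpts()

class IpmiPollingService(cotyledon.Service):
    def run(self):
        pass  # the polling loop would live here

sm = cotyledon.ServiceManager()
oslo_config_glue.setup(sm, CONF)  # hooks the per-worker CONF dump seen above
sm.add(IpmiPollingService, workers=1)
sm.run()
```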
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.257 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.258 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.258225       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.258512       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.258 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.258 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.258654       1 config.go:295] kernel version: 5.14
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.258 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.258 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.259484       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.259526       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.259 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.260197       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.260215       1 power.go:79] using none to obtain power
Oct 02 19:05:14 compute-0 kepler[177012]: E1002 19:05:14.260238       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:05:14 compute-0 kepler[177012]: E1002 19:05:14.260269       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.260 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.261 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.262 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.263707       1 exporter.go:84] Number of CPUs: 8
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.263 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.264 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.265 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.266 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.267 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.268 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.269 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.270 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.271 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.272 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.273 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.274 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.275 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.276 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.277 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.278 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.279 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.280 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.281 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.282 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.282 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.282 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.282 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.282 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
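
The long block above is oslo.config's standard startup dump: at DEBUG level, cotyledon.oslo_config_glue calls log_opt_values(), which walks every registered option group and prints "****" for any option declared with secret=True (telemetry_secret, transport_url, access_key, and so on). A minimal Python sketch of that mechanism, using only the public oslo.config API; the single option registered here is a stand-in for illustration, not ceilometer's real registration code, and the oslo.config package is assumed to be installed:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # A stand-in mirroring publisher.telemetry_secret above; secret=True is
    # what makes log_opt_values() print "****" instead of the real value.
    cfg.CONF.register_opts(
        [cfg.StrOpt('telemetry_secret', secret=True)], group='publisher')

    cfg.CONF(args=[], project='ceilometer')      # parse (empty) CLI/config files
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)  # emits a dump like the one above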
Oct 02 19:05:14 compute-0 sudo[176943]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:14 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:05:14.288 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
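
The dict logged by ceilometer.agent load_config corresponds to a small polling.yaml. A sketch reconstructing it from that one log line (PyYAML assumed installed; the exact file layout on the node is inferred, not copied):

    import yaml

    # Hypothetical reconstruction of the polling.yaml that parses to the
    # dict shown in the ceilometer.agent log line above.
    POLLING_YAML = """
    sources:
        - name: pollsters
          interval: 120        # poll every 120 seconds
          meters:
              - hardware.*     # all IPMI/hardware meters
    """

    assert yaml.safe_load(POLLING_YAML) == {
        'sources': [{'name': 'pollsters',
                     'interval': 120,
                     'meters': ['hardware.*']}]
    }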
Oct 02 19:05:14 compute-0 podman[177020]: 2025-10-02 19:05:14.360508652 +0000 UTC m=+0.115519793 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:05:14 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:05:14 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: Failed with result 'exit-code'.
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.801897       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.801974       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:05:14 compute-0 kepler[177012]: E1002 19:05:14.802061       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
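
The three kepler watcher lines above are expected on a standalone compute node: in-cluster Kubernetes configuration is only available when the service-account environment variables are injected into a pod, so the k8s API-server watcher stays disabled and kepler continues without it. A minimal sketch of the environment test involved (the function name is hypothetical; the Kubernetes client performs an equivalent check internally):

    import os

    def in_cluster() -> bool:
        # In-cluster config requires both variables named in the kepler
        # error message; neither is set on a bare EDPM host.
        return bool(os.environ.get('KUBERNETES_SERVICE_HOST')
                    and os.environ.get('KUBERNETES_SERVICE_PORT'))

    print('k8s watcher enabled:', in_cluster())   # False on this node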
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.806823       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.806887       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.810452       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.810492       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.818008       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.818053       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.818070       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842792       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842836       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842842       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842848       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842858       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842874       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842966       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.842993       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.843018       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.843058       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.843135       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:05:14 compute-0 kepler[177012]: I1002 19:05:14.843562       1 exporter.go:208] Started Kepler in 585.716637ms
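
Once the exporter reports it is listening on 0.0.0.0:8888, the registered node, process, container, and VM power metrics can be scraped over plain HTTP. A quick smoke test, assuming the conventional Prometheus /metrics path and kepler's kepler_node_* metric prefix (both inferred, neither shown verbatim in this log):

    import urllib.request

    # Print only node-level power metrics from the endpoint kepler
    # announced above ("starting to listen on 0.0.0.0:8888").
    with urllib.request.urlopen('http://localhost:8888/metrics', timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith('kepler_node_'):
                print(line)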
Oct 02 19:05:14 compute-0 sudo[177209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akdxdimkgpdajfxxtqytkxowndztfdxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431914.523672-529-80321625124977/AnsiballZ_find.py'
Oct 02 19:05:14 compute-0 sudo[177209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:15 compute-0 python3.9[177211]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:05:15 compute-0 sudo[177209]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:16 compute-0 sudo[177361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctaintsriifsogsvqqakajcpmqvyybvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431915.5669208-539-122040084093421/AnsiballZ_podman_container_info.py'
Oct 02 19:05:16 compute-0 sudo[177361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:16 compute-0 python3.9[177363]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:05:16 compute-0 sudo[177361]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:17 compute-0 sudo[177524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhjmivvmflgqgiayfgvyamymqktuepp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431917.0184605-547-248503271933981/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:17 compute-0 sudo[177524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:18 compute-0 python3.9[177526]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:18 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:05:18 compute-0 podman[177527]: 2025-10-02 19:05:18.242459945 +0000 UTC m=+0.151690762 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:05:18 compute-0 podman[177527]: 2025-10-02 19:05:18.281899138 +0000 UTC m=+0.191129955 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:05:18 compute-0 sudo[177524]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:18 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:05:19 compute-0 sudo[177705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcsuodunbmmtswvtlamqngnodfdvtiij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431918.6185067-555-197058700223477/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:19 compute-0 sudo[177705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:19 compute-0 python3.9[177707]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:19 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:05:19 compute-0 podman[177708]: 2025-10-02 19:05:19.5253415 +0000 UTC m=+0.155927467 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:05:19 compute-0 podman[177708]: 2025-10-02 19:05:19.568692368 +0000 UTC m=+0.199278305 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:05:19 compute-0 sudo[177705]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:20 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:05:20 compute-0 sudo[177890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzamnxnepohakjzzbxiqcvjmxtakksnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431920.287695-563-26623288401889/AnsiballZ_file.py'
Oct 02 19:05:20 compute-0 sudo[177890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:20 compute-0 podman[177892]: 2025-10-02 19:05:20.993222906 +0000 UTC m=+0.108060006 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:05:21 compute-0 python3.9[177893]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:21 compute-0 sudo[177890]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:21 compute-0 sudo[178064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzoegarweqvhpfvuyqtguvyfejzfdiom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431921.4318683-572-151095067668183/AnsiballZ_podman_container_info.py'
Oct 02 19:05:21 compute-0 sudo[178064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:22 compute-0 python3.9[178066]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:05:22 compute-0 sudo[178064]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:23 compute-0 sudo[178228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsfvpussvgujffvxcuwvzangxbuabfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431922.5416052-580-266789827593765/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:23 compute-0 sudo[178228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:23 compute-0 python3.9[178230]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:23 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:05:23 compute-0 podman[178231]: 2025-10-02 19:05:23.404197098 +0000 UTC m=+0.099432941 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:05:23 compute-0 podman[178231]: 2025-10-02 19:05:23.443958602 +0000 UTC m=+0.139194395 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:05:23 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:05:23 compute-0 sudo[178228]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:24 compute-0 sudo[178410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsreoqsmazabdofdhslmshondcmgrhqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431923.7661955-588-32566348453106/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:24 compute-0 sudo[178410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:24 compute-0 python3.9[178412]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:24 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:05:24 compute-0 podman[178413]: 2025-10-02 19:05:24.724019617 +0000 UTC m=+0.160161672 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:05:24 compute-0 podman[178413]: 2025-10-02 19:05:24.756131298 +0000 UTC m=+0.192273293 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm)
Oct 02 19:05:24 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:05:24 compute-0 sudo[178410]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:25 compute-0 sudo[178607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liwpyihieoklpwsvhfqlmtcbdbqghvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431925.0828776-596-139486171349210/AnsiballZ_file.py'
Oct 02 19:05:25 compute-0 sudo[178607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:25 compute-0 podman[178568]: 2025-10-02 19:05:25.594528886 +0000 UTC m=+0.119248421 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:05:25 compute-0 python3.9[178613]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:25 compute-0 sudo[178607]: pam_unix(sudo:session): session closed for user root
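
The same pattern repeats for each service in this stretch of the log: the playbook runs "id -u" and "id -g" inside the container, then sets the host-side healthcheck directory (mounted read-only into the container at /openstack) to that owner with mode 0700. For ceilometer_agent_compute this resolves to 42405:42405, the uid Kolla-based images assign to the ceilometer user. A condensed sketch of the same flow outside Ansible, with container and path names taken from the log and the helper name being hypothetical:

    import subprocess

    def sync_healthcheck_owner(name: str) -> None:
        """Chown the healthcheck mount to the uid/gid the container runs as,
        mirroring the podman_container_exec + ansible.builtin.file tasks."""
        uid = subprocess.run(['podman', 'exec', name, 'id', '-u'],
                             capture_output=True, text=True,
                             check=True).stdout.strip()
        gid = subprocess.run(['podman', 'exec', name, 'id', '-g'],
                             capture_output=True, text=True,
                             check=True).stdout.strip()
        path = f'/var/lib/openstack/healthchecks/{name}'
        subprocess.run(['chown', '-R', f'{uid}:{gid}', path], check=True)
        subprocess.run(['chmod', '-R', '0700', path], check=True)

    sync_healthcheck_owner('ceilometer_agent_compute')  # 42405:42405 per the log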
Oct 02 19:05:26 compute-0 sudo[178764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulthaqdswrhsxuyshncnfjfdvnkcdkbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431926.032383-605-45022301435312/AnsiballZ_podman_container_info.py'
Oct 02 19:05:26 compute-0 sudo[178764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:26 compute-0 python3.9[178766]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:05:26 compute-0 sudo[178764]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:27 compute-0 sudo[178931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhpmutdqqxkfzfkshqxcwgdgktfavfmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431927.208508-613-14343096586599/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:27 compute-0 sudo[178931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:27 compute-0 python3.9[178933]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:28 compute-0 systemd[1]: Started libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope.
Oct 02 19:05:28 compute-0 podman[178934]: 2025-10-02 19:05:28.093730171 +0000 UTC m=+0.175178389 container exec d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:05:28 compute-0 podman[178934]: 2025-10-02 19:05:28.144041541 +0000 UTC m=+0.225489789 container exec_died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:05:28 compute-0 systemd[1]: libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:05:28 compute-0 sudo[178931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:29 compute-0 sudo[179113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxconpecatndezrplrxqbcsobbipxnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431928.5125608-621-219966061766319/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:29 compute-0 sudo[179113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:29 compute-0 python3.9[179115]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:29 compute-0 systemd[1]: Started libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope.
Oct 02 19:05:29 compute-0 podman[179116]: 2025-10-02 19:05:29.407941183 +0000 UTC m=+0.171421790 container exec d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:05:29 compute-0 podman[179116]: 2025-10-02 19:05:29.442345617 +0000 UTC m=+0.205826174 container exec_died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:05:29 compute-0 systemd[1]: libpod-conmon-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:05:29 compute-0 sudo[179113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:29 compute-0 podman[157186]: time="2025-10-02T19:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:05:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Oct 02 19:05:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Oct 02 19:05:30 compute-0 sudo[179294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pksruotlkfncaejjxvoerdfzvfmvkdjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431929.7975233-629-74152759789830/AnsiballZ_file.py'
Oct 02 19:05:30 compute-0 sudo[179294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:30 compute-0 python3.9[179296]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:30 compute-0 sudo[179294]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:31 compute-0 openstack_network_exporter[159337]: ERROR   19:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:05:31 compute-0 openstack_network_exporter[159337]: ERROR   19:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:05:31 compute-0 openstack_network_exporter[159337]: ERROR   19:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:05:31 compute-0 openstack_network_exporter[159337]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:05:31 compute-0 openstack_network_exporter[159337]: ERROR   19:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:05:31 compute-0 sudo[179446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boswjpwicwqgzrmlfitkuabixqdwudsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431931.0360758-638-209909763660001/AnsiballZ_podman_container_info.py'
Oct 02 19:05:31 compute-0 sudo[179446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:31 compute-0 python3.9[179448]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:05:31 compute-0 sudo[179446]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:32 compute-0 sudo[179611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crrbxlhmhyzyhxtrinsyfiqpiynzwrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431932.2460127-646-144543833001571/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:32 compute-0 sudo[179611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:33 compute-0 python3.9[179613]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:33 compute-0 systemd[1]: Started libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope.
Oct 02 19:05:33 compute-0 podman[179614]: 2025-10-02 19:05:33.186685858 +0000 UTC m=+0.130389545 container exec ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:05:33 compute-0 podman[179614]: 2025-10-02 19:05:33.220172413 +0000 UTC m=+0.163876080 container exec_died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:05:33 compute-0 systemd[1]: libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:05:33 compute-0 sudo[179611]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:34 compute-0 sudo[179794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnlacqdzapnskwrjyvhvgukjwbhnmym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431933.5386074-654-80968150590654/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:34 compute-0 sudo[179794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:34 compute-0 python3.9[179796]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:34 compute-0 systemd[1]: Started libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope.
Oct 02 19:05:34 compute-0 podman[179797]: 2025-10-02 19:05:34.548988459 +0000 UTC m=+0.157389604 container exec ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:05:34 compute-0 podman[179797]: 2025-10-02 19:05:34.583221577 +0000 UTC m=+0.191622752 container exec_died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:05:34 compute-0 sudo[179794]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:34 compute-0 systemd[1]: libpod-conmon-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:05:35 compute-0 podman[179951]: 2025-10-02 19:05:35.404780959 +0000 UTC m=+0.086725918 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 02 19:05:35 compute-0 sudo[179996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkyaprarjjnkammzxqfcpoqflzgftut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431934.9079423-662-240258042318385/AnsiballZ_file.py'
Oct 02 19:05:35 compute-0 sudo[179996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:35 compute-0 podman[180000]: 2025-10-02 19:05:35.572894112 +0000 UTC m=+0.131870572 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:05:35 compute-0 python3.9[180001]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:35 compute-0 sudo[179996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:36 compute-0 sudo[180177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujyzmtxedorbqkbyemgdlbovzhxbfff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431936.0136008-671-3258750344670/AnsiballZ_podman_container_info.py'
Oct 02 19:05:36 compute-0 sudo[180177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:36 compute-0 python3.9[180179]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:05:36 compute-0 sudo[180177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:37 compute-0 sudo[180341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtarsgzbdnpyqbxlcvvfsjwtkaavodjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431937.260341-679-120951772186263/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:37 compute-0 sudo[180341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:38 compute-0 python3.9[180343]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:38 compute-0 systemd[1]: Started libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope.
Oct 02 19:05:38 compute-0 podman[180344]: 2025-10-02 19:05:38.202112271 +0000 UTC m=+0.139043280 container exec 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:05:38 compute-0 podman[180344]: 2025-10-02 19:05:38.255583221 +0000 UTC m=+0.192514180 container exec_died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:05:38 compute-0 podman[180359]: 2025-10-02 19:05:38.344770836 +0000 UTC m=+0.153198711 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:05:38 compute-0 systemd[1]: libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:05:38 compute-0 sudo[180341]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:39 compute-0 sudo[180546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwoxfzdoeflxxfvazjuacytvnpakawk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431938.6434863-687-191234574412876/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:39 compute-0 sudo[180546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:39 compute-0 python3.9[180548]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:39 compute-0 systemd[1]: Started libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope.
Oct 02 19:05:39 compute-0 podman[180549]: 2025-10-02 19:05:39.59334433 +0000 UTC m=+0.134158215 container exec 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64)
Oct 02 19:05:39 compute-0 podman[180549]: 2025-10-02 19:05:39.627138574 +0000 UTC m=+0.167952449 container exec_died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Oct 02 19:05:39 compute-0 systemd[1]: libpod-conmon-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:05:39 compute-0 sudo[180546]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:40 compute-0 sudo[180729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzljbxofpumysupcijxpgkipjroibtyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431939.988347-695-219379821397906/AnsiballZ_file.py'
Oct 02 19:05:40 compute-0 sudo[180729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:40 compute-0 python3.9[180731]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:40 compute-0 sudo[180729]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:41 compute-0 sudo[180881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saldeotkjhbsmchuxqwhriairvghfkuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431941.1046698-704-236225172469831/AnsiballZ_podman_container_info.py'
Oct 02 19:05:41 compute-0 sudo[180881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:41 compute-0 python3.9[180883]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct 02 19:05:41 compute-0 sudo[180881]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:42 compute-0 podman[180994]: 2025-10-02 19:05:42.694694775 +0000 UTC m=+0.111354591 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:05:42 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:05:42 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: Failed with result 'exit-code'.
Oct 02 19:05:42 compute-0 sudo[181061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxenyjyuxxjpyaajporsnifhzoiwcvua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431942.3129628-712-171377001622757/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:42 compute-0 sudo[181061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:43 compute-0 python3.9[181063]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:43 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:05:43 compute-0 podman[181064]: 2025-10-02 19:05:43.23316333 +0000 UTC m=+0.186686855 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 19:05:43 compute-0 podman[181064]: 2025-10-02 19:05:43.286060881 +0000 UTC m=+0.239584376 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:05:43 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:05:43 compute-0 sudo[181061]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:44 compute-0 sudo[181242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bndtukzmusgyjxkziwiriyrtlqlrxoqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431943.6823208-720-196057023251400/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:44 compute-0 sudo[181242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:44 compute-0 python3.9[181244]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:44 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:05:44 compute-0 podman[181245]: 2025-10-02 19:05:44.507343149 +0000 UTC m=+0.166908236 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:05:44 compute-0 podman[181245]: 2025-10-02 19:05:44.554991254 +0000 UTC m=+0.214556301 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:05:44 compute-0 podman[181259]: 2025-10-02 19:05:44.645012695 +0000 UTC m=+0.137815031 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, architecture=x86_64, version=9.4, io.buildah.version=1.29.0)
Oct 02 19:05:44 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:05:44 compute-0 sudo[181242]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:45 compute-0 sudo[181442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwtrqkwhjuaqhyhuyfmhbyffbbsgvtoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431944.972538-728-29365081315836/AnsiballZ_file.py'
Oct 02 19:05:45 compute-0 sudo[181442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:45 compute-0 python3.9[181444]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:45 compute-0 sudo[181442]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:46 compute-0 sudo[181594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaeaxtqursvzxibqtnpelxnrkjdcygrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431946.05051-737-200016512448103/AnsiballZ_podman_container_info.py'
Oct 02 19:05:46 compute-0 sudo[181594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:46 compute-0 python3.9[181596]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct 02 19:05:46 compute-0 sudo[181594]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:47 compute-0 sudo[181757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrjezltyzpcjyhgppqpfklekwmdsjpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431947.2393856-745-269737165081523/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:47 compute-0 sudo[181757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:47 compute-0 python3.9[181759]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:48 compute-0 systemd[1]: Started libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope.
Oct 02 19:05:48 compute-0 podman[181760]: 2025-10-02 19:05:48.134303107 +0000 UTC m=+0.163810555 container exec df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=)
Oct 02 19:05:48 compute-0 podman[181760]: 2025-10-02 19:05:48.170452814 +0000 UTC m=+0.199960332 container exec_died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=)
Oct 02 19:05:48 compute-0 systemd[1]: libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:05:48 compute-0 sudo[181757]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:49 compute-0 sudo[181938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fovmcewhvowceuhrygbgmgcjnkpkylor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431948.5239267-753-273365683725577/AnsiballZ_podman_container_exec.py'
Oct 02 19:05:49 compute-0 sudo[181938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:49 compute-0 python3.9[181940]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:05:49 compute-0 systemd[1]: Started libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope.
Oct 02 19:05:49 compute-0 podman[181941]: 2025-10-02 19:05:49.480304297 +0000 UTC m=+0.150170630 container exec df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Oct 02 19:05:49 compute-0 podman[181941]: 2025-10-02 19:05:49.51775677 +0000 UTC m=+0.187623103 container exec_died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Oct 02 19:05:49 compute-0 systemd[1]: libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:05:49 compute-0 sudo[181938]: pam_unix(sudo:session): session closed for user root
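The two podman_container_exec invocations above resolve the UID and GID of the default user inside the kepler container. On the host they amount to roughly the following (a sketch; the container name and commands are taken from the log):

    podman exec kepler id -u    # UID of the container's default user
    podman exec kepler id -g    # GID of the container's default user

The returned values presumably feed the ansible.builtin.file task that follows at 19:05:50, which sets owner=0/group=0 on /var/lib/openstack/healthchecks/kepler.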
Oct 02 19:05:50 compute-0 sudo[182122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgheaizbgkmsjyohpsyidduhzgvrgdyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431949.8751066-761-149479468981754/AnsiballZ_file.py'
Oct 02 19:05:50 compute-0 sudo[182122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:50 compute-0 python3.9[182124]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:50 compute-0 sudo[182122]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:51 compute-0 sudo[182284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqwqdimjxxsiqqlemechhkqzaoivksy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431950.9839253-770-142294198850239/AnsiballZ_file.py'
Oct 02 19:05:51 compute-0 sudo[182284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:51 compute-0 podman[182248]: 2025-10-02 19:05:51.537952525 +0000 UTC m=+0.133497714 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:05:51 compute-0 python3.9[182297]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:51 compute-0 sudo[182284]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:52 compute-0 sudo[182447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqvjftxorbsgrbpialwmamgdtakixlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431952.0695763-778-191838769171554/AnsiballZ_stat.py'
Oct 02 19:05:52 compute-0 sudo[182447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:52 compute-0 python3.9[182449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:52 compute-0 sudo[182447]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:53 compute-0 sudo[182570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csaakyaspvhblmwnqijpjahaunyjhfua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431952.0695763-778-191838769171554/AnsiballZ_copy.py'
Oct 02 19:05:53 compute-0 sudo[182570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:53 compute-0 python3.9[182572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431952.0695763-778-191838769171554/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:53 compute-0 sudo[182570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:54 compute-0 sudo[182722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezfkmxfwpycdkvudizyyivgdsplhzorp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431954.190432-794-228742526936732/AnsiballZ_file.py'
Oct 02 19:05:54 compute-0 sudo[182722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:54 compute-0 python3.9[182724]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:54 compute-0 sudo[182722]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:55 compute-0 sudo[182874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbkafsxqdqfukvadkggdfntitopxkpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431955.1907504-802-267757233704111/AnsiballZ_stat.py'
Oct 02 19:05:55 compute-0 sudo[182874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:55 compute-0 podman[182876]: 2025-10-02 19:05:55.910806225 +0000 UTC m=+0.132050797 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct 02 19:05:56 compute-0 python3.9[182877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:56 compute-0 sudo[182874]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:56 compute-0 sudo[182971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefgffsjpnepwtfojrjsyixotkmsqbxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431955.1907504-802-267757233704111/AnsiballZ_file.py'
Oct 02 19:05:56 compute-0 sudo[182971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:56 compute-0 python3.9[182973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:56 compute-0 sudo[182971]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:57 compute-0 sudo[183123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghtscfzoftcgeknkjrnzlxayqhefjnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431957.0875516-814-122680315744592/AnsiballZ_stat.py'
Oct 02 19:05:57 compute-0 sudo[183123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:57 compute-0 python3.9[183125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:57 compute-0 sudo[183123]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:58 compute-0 sudo[183201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sohdzgembaaslcifagkxunqozkqcnsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431957.0875516-814-122680315744592/AnsiballZ_file.py'
Oct 02 19:05:58 compute-0 sudo[183201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:58 compute-0 python3.9[183203]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zox_uxd2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:58 compute-0 sudo[183201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:59 compute-0 sudo[183353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikkqshsldysxvinslonzdptrupfbbdhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431958.7644496-826-104927311047368/AnsiballZ_stat.py'
Oct 02 19:05:59 compute-0 sudo[183353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:59 compute-0 python3.9[183355]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:59 compute-0 sudo[183353]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:59 compute-0 podman[157186]: time="2025-10-02T19:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:05:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Oct 02 19:05:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Oct 02 19:05:59 compute-0 sudo[183431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmxvmtphsilwvsebvzfggrshcolkgnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431958.7644496-826-104927311047368/AnsiballZ_file.py'
Oct 02 19:05:59 compute-0 sudo[183431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:00 compute-0 python3.9[183433]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:00 compute-0 sudo[183431]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:01 compute-0 sudo[183583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfldzbyvtfktbajxhrnhkpdztopayanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431960.56601-839-242379721941098/AnsiballZ_command.py'
Oct 02 19:06:01 compute-0 sudo[183583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:01 compute-0 python3.9[183585]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:01 compute-0 sudo[183583]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:01 compute-0 openstack_network_exporter[159337]: ERROR   19:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:06:01 compute-0 openstack_network_exporter[159337]: ERROR   19:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:06:01 compute-0 openstack_network_exporter[159337]: ERROR   19:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:06:01 compute-0 openstack_network_exporter[159337]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:06:01 compute-0 openstack_network_exporter[159337]: ERROR   19:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:06:02 compute-0 sudo[183736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skppjkaduzqvdihosxaosecmacouefmr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431961.7365687-847-168163050391952/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:06:02 compute-0 sudo[183736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:02 compute-0 python3[183738]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:06:02 compute-0 sudo[183736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:03 compute-0 sudo[183888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivsnxmkdsxrivjybtcbikwpjycuxroh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431963.0773647-855-235725191985931/AnsiballZ_stat.py'
Oct 02 19:06:03 compute-0 sudo[183888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:03 compute-0 python3.9[183890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:03 compute-0 sudo[183888]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:04 compute-0 sudo[183966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xieesqwuftkterhlbhdelxorvnhchqyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431963.0773647-855-235725191985931/AnsiballZ_file.py'
Oct 02 19:06:04 compute-0 sudo[183966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:04 compute-0 python3.9[183968]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:04 compute-0 sudo[183966]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:05 compute-0 sudo[184118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rddyxdpxupvmgteieatgjcrexhqiwwxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431964.802138-867-248518355220500/AnsiballZ_stat.py'
Oct 02 19:06:05 compute-0 sudo[184118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:05 compute-0 podman[184121]: 2025-10-02 19:06:05.664436832 +0000 UTC m=+0.095825845 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350)
Oct 02 19:06:05 compute-0 python3.9[184120]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:05 compute-0 sudo[184118]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:05 compute-0 podman[184142]: 2025-10-02 19:06:05.851592385 +0000 UTC m=+0.148022597 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 19:06:06 compute-0 sudo[184242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuxkgfcxhjmoiitioltipqnaebqauuqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431964.802138-867-248518355220500/AnsiballZ_file.py'
Oct 02 19:06:06 compute-0 sudo[184242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:06 compute-0 python3.9[184244]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:06 compute-0 sudo[184242]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:07 compute-0 sudo[184394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auhdcfihgvxepwwvvbzfaqmuknzlbnoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431966.6732478-879-96063972066321/AnsiballZ_stat.py'
Oct 02 19:06:07 compute-0 sudo[184394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:07 compute-0 python3.9[184396]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:07 compute-0 sudo[184394]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:07 compute-0 sudo[184472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsetbmcjpvtgpspiuarxmsnkibiidoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431966.6732478-879-96063972066321/AnsiballZ_file.py'
Oct 02 19:06:07 compute-0 sudo[184472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:08 compute-0 python3.9[184474]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:08 compute-0 sudo[184472]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:08 compute-0 podman[184542]: 2025-10-02 19:06:08.704009229 +0000 UTC m=+0.129004978 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:06:09 compute-0 sudo[184647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgwxzggyxnclcnwkepvenswxmwxyozh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431968.4578938-891-226028655835185/AnsiballZ_stat.py'
Oct 02 19:06:09 compute-0 sudo[184647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:09 compute-0 python3.9[184649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:09 compute-0 sudo[184647]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:09 compute-0 sudo[184725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjtokmtxkduhuqgzfabuhzjnvojxgwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431968.4578938-891-226028655835185/AnsiballZ_file.py'
Oct 02 19:06:09 compute-0 sudo[184725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:09 compute-0 python3.9[184727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:10 compute-0 sudo[184725]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:10 compute-0 sudo[184877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etvofinlgsidwzmfvlsihsqfjuopdtne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431970.2411392-903-261625927912257/AnsiballZ_stat.py'
Oct 02 19:06:10 compute-0 sudo[184877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:11 compute-0 python3.9[184879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:11 compute-0 sudo[184877]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:11 compute-0 sudo[185002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltftwpmiyozmcttuyauwkypoghahkdoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431970.2411392-903-261625927912257/AnsiballZ_copy.py'
Oct 02 19:06:11 compute-0 sudo[185002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:12 compute-0 python3.9[185004]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431970.2411392-903-261625927912257/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:12 compute-0 sudo[185002]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:12 compute-0 sudo[185169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onpnyrbxsdfxysxlmqkvbbddtcgcxpbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431972.3657997-918-234585055347370/AnsiballZ_file.py'
Oct 02 19:06:12 compute-0 sudo[185169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:12 compute-0 podman[185128]: 2025-10-02 19:06:12.970481054 +0000 UTC m=+0.131324138 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:06:13 compute-0 python3.9[185174]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:13 compute-0 sudo[185169]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:14 compute-0 sudo[185324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozppbqmpkdnnmflbxpnbxlyjomatpthn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431973.5119462-926-232576901047341/AnsiballZ_command.py'
Oct 02 19:06:14 compute-0 sudo[185324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:14 compute-0 python3.9[185326]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:14 compute-0 sudo[185324]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:14 compute-0 podman[185377]: 2025-10-02 19:06:14.853942453 +0000 UTC m=+0.143836020 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:06:15 compute-0 sudo[185499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oerasbfbkanrrelpaulzowvjiozxkvjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431974.6755784-934-254593197567090/AnsiballZ_blockinfile.py'
Oct 02 19:06:15 compute-0 sudo[185499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:15 compute-0 python3.9[185501]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:15 compute-0 sudo[185499]: pam_unix(sudo:session): session closed for user root
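Given marker=# {mark} ANSIBLE MANAGED BLOCK with marker_begin=BEGIN and marker_end=END, the blockinfile task above leaves /etc/sysconfig/nftables.conf containing a block like the following (reconstructed from the invocation; the rest of the file is not shown in the log):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s option means the edited file is syntax-checked with nft before being moved into place.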
Oct 02 19:06:16 compute-0 sudo[185651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ustiyzpjpqnybdlzyhnjgxptlqqqmzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431976.0304644-943-42136904589068/AnsiballZ_command.py'
Oct 02 19:06:16 compute-0 sudo[185651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:16 compute-0 python3.9[185653]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:16 compute-0 sudo[185651]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:17 compute-0 sudo[185804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aalfvfyxibfpeijafvvuornqxtmaucoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431977.1754541-951-95566908618668/AnsiballZ_stat.py'
Oct 02 19:06:17 compute-0 sudo[185804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:17 compute-0 python3.9[185806]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:06:18 compute-0 sudo[185804]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:18 compute-0 sudo[185958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-innfppqnmxpqomwwbixhhjotcgokccyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431978.332927-959-249594456011928/AnsiballZ_command.py'
Oct 02 19:06:18 compute-0 sudo[185958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:19 compute-0 python3.9[185960]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:19 compute-0 sudo[185958]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:20 compute-0 sudo[186114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbbduxdjowpifzbohrhuovgwnykbyyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431979.4585776-967-160242298784941/AnsiballZ_file.py'
Oct 02 19:06:20 compute-0 sudo[186114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:20 compute-0 python3.9[186116]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:20 compute-0 sudo[186114]: pam_unix(sudo:session): session closed for user root
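Taken together, the nft commands logged between 19:06:14 and 19:06:19 implement a validate-then-apply cycle over the EDPM rule fragments (paths as recorded above; a sketch of the sequence, not an additional command run):

    # 1. Dry-run the complete ruleset (chains + flushes + rules + jumps):
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the chains exist:
    nft -f /etc/nftables/edpm-chains.nft
    # 3. Flush and reload the rules, then refresh the jump chains:
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -

The edpm-rules.nft.changed marker touched at 19:06:13 gates step 3, and it is removed at 19:06:20 once the reload has succeeded.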
Oct 02 19:06:20 compute-0 sshd-session[167003]: Connection closed by 192.168.122.30 port 41768
Oct 02 19:06:20 compute-0 sshd-session[167000]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:06:20 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Oct 02 19:06:20 compute-0 systemd[1]: session-23.scope: Consumed 2min 8.141s CPU time.
Oct 02 19:06:20 compute-0 systemd-logind[793]: Session 23 logged out. Waiting for processes to exit.
Oct 02 19:06:20 compute-0 systemd-logind[793]: Removed session 23.
Oct 02 19:06:22 compute-0 podman[186141]: 2025-10-02 19:06:22.692050137 +0000 UTC m=+0.112697039 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.433 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.434 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.434 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.435 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.435 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.438 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.439 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.440 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.440 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.440 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:06:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:06:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
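Every meter in the cycle above follows the same gating: run the [local_instances] discovery once, cache the result in the discovery cache, and skip the pollster when discovery comes back empty, which is why each meter logs a single Skip line before its Finished processing line. A minimal sketch of that flow, with hypothetical helper names standing in for ceilometer's internals:

    def discover(method):
        # No instances are running on this host, so discovery returns nothing.
        return []

    def run_pollster(meter, discovery, discovery_cache):
        # Reuse the discovery result if another pollster already ran it this cycle.
        if discovery not in discovery_cache:
            discovery_cache[discovery] = discover(discovery)
        resources = discovery_cache[discovery]
        if not resources:
            print(f"Skip pollster {meter}, no resources found this cycle")
            return []
        return [(meter, r) for r in resources]

    cache = {}
    for meter in ("cpu", "memory.usage", "disk.root.size"):
        run_pollster(meter, "local_instances", cache)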
Oct 02 19:06:26 compute-0 sshd-session[186165]: Accepted publickey for zuul from 192.168.122.30 port 49942 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:06:26 compute-0 systemd-logind[793]: New session 24 of user zuul.
Oct 02 19:06:26 compute-0 systemd[1]: Started Session 24 of User zuul.
Oct 02 19:06:26 compute-0 sshd-session[186165]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:06:26 compute-0 podman[186167]: 2025-10-02 19:06:26.135629802 +0000 UTC m=+0.121286679 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:06:27 compute-0 python3.9[186337]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:06:29 compute-0 sudo[186491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heglieucjzbctatlbjttkgydwypysnzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431988.1350029-34-201950725842698/AnsiballZ_systemd.py'
Oct 02 19:06:29 compute-0 sudo[186491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:29 compute-0 python3.9[186493]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Oct 02 19:06:29 compute-0 sudo[186491]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:29 compute-0 podman[157186]: time="2025-10-02T19:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:06:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct 02 19:06:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2988 "" "Go-http-client/1.1"
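These requests arrive over the libpod REST API on the podman socket the exporter mounts (/run/podman/podman.sock, per the container config logged earlier). The same containers/json call can be issued with just the standard library; the socket path and API version here are copied from the log lines above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for container in json.loads(resp.read()):
        print(container["Names"], container["State"])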
Oct 02 19:06:30 compute-0 sudo[186644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuygrpzrwscrgvmnqkezumdqgmpgvpgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431989.6944141-42-162441300150266/AnsiballZ_setup.py'
Oct 02 19:06:30 compute-0 sudo[186644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:30 compute-0 python3.9[186646]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:06:30 compute-0 sudo[186644]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:31 compute-0 openstack_network_exporter[159337]: ERROR   19:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:06:31 compute-0 openstack_network_exporter[159337]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:06:31 compute-0 openstack_network_exporter[159337]: ERROR   19:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:06:31 compute-0 openstack_network_exporter[159337]: ERROR   19:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:06:31 compute-0 openstack_network_exporter[159337]: ERROR   19:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
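The exporter's appctl-style calls fail because no daemon control socket exists where it looks. Assuming the conventional layout where OVS/OVN daemons create <daemon>.<pid>.ctl sockets under their run directory, the check it performs amounts to roughly this:

    import glob
    import os

    def find_ctl_sockets(run_dir="/var/run/openvswitch"):
        # Daemons such as ovs-vswitchd and ovsdb-server create control
        # sockets named <daemon>.<pid>.ctl in their run directory; an
        # empty result is the "no control socket files found" case above.
        return glob.glob(os.path.join(run_dir, "*.ctl"))

    sockets = find_ctl_sockets()
    if not sockets:
        print("no control socket files found for the ovs db server")
    else:
        print("control sockets:", sockets)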
Oct 02 19:06:31 compute-0 sudo[186728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwvawsztascwjhmueqqsfjpfafxleze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431989.6944141-42-162441300150266/AnsiballZ_dnf.py'
Oct 02 19:06:31 compute-0 sudo[186728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:31 compute-0 python3.9[186730]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:06:36 compute-0 podman[186732]: 2025-10-02 19:06:36.697123241 +0000 UTC m=+0.116870267 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:06:36 compute-0 podman[186733]: 2025-10-02 19:06:36.768929947 +0000 UTC m=+0.179069686 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 19:06:38 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 19:06:38 compute-0 PackageKit[186781]: daemon start
Oct 02 19:06:38 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 19:06:38 compute-0 sudo[186728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:39 compute-0 sudo[186951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzbwrnhqbcpohbsgzcycogixlpbrsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431998.9590075-54-158331621529145/AnsiballZ_stat.py'
Oct 02 19:06:39 compute-0 sudo[186951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:39 compute-0 podman[186908]: 2025-10-02 19:06:39.676568101 +0000 UTC m=+0.105085353 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:06:39 compute-0 python3.9[186960]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:39 compute-0 sudo[186951]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:40 compute-0 sudo[187081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqwubgtpzdmdnwmmqpoxngdnomnuhei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431998.9590075-54-158331621529145/AnsiballZ_copy.py'
Oct 02 19:06:40 compute-0 sudo[187081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:40 compute-0 python3.9[187083]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759431998.9590075-54-158331621529145/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:41 compute-0 sudo[187081]: pam_unix(sudo:session): session closed for user root
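
Note: the ansible.legacy.copy task above logs checksum=1d88bab26da5c85710a770c705f3555781bf2a38 for the CA certificate it deployed. A minimal sketch to recompute it on the host, assuming (as Ansible's copy module does) that the logged checksum is the SHA-1 of the file contents:

    import hashlib

    # Recompute the SHA-1 that ansible.legacy.copy reported for the
    # CA bundle deployed to /etc/pki/rsyslog/ca-openshift.crt.
    with open("/etc/pki/rsyslog/ca-openshift.crt", "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    print(digest)  # expected: 1d88bab26da5c85710a770c705f3555781bf2a38
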
Oct 02 19:06:42 compute-0 sudo[187233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoynwiplbfdeeuzqqfrmrcdbtttqycid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432001.3215988-69-99101602445818/AnsiballZ_file.py'
Oct 02 19:06:42 compute-0 sudo[187233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:42 compute-0 python3.9[187235]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:42 compute-0 sudo[187233]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:43 compute-0 sudo[187401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmuwayenfcrbkcrujdjgubvaokgfaoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432002.6104863-77-219310789864153/AnsiballZ_stat.py'
Oct 02 19:06:43 compute-0 sudo[187401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:43 compute-0 podman[187359]: 2025-10-02 19:06:43.206152089 +0000 UTC m=+0.130697182 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct 02 19:06:43 compute-0 python3.9[187406]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:43 compute-0 sudo[187401]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:44 compute-0 sudo[187527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsznuxcizzdelrvkekryndusxuqxlhmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432002.6104863-77-219310789864153/AnsiballZ_copy.py'
Oct 02 19:06:44 compute-0 sudo[187527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:44 compute-0 python3.9[187529]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432002.6104863-77-219310789864153/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:44 compute-0 sudo[187527]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:45 compute-0 sudo[187695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuibziuzhcinejoddileaalazcmcyska ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432004.6179996-92-106570737160738/AnsiballZ_systemd.py'
Oct 02 19:06:45 compute-0 podman[187653]: 2025-10-02 19:06:45.218357509 +0000 UTC m=+0.129057359 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, name=ubi9)
Oct 02 19:06:45 compute-0 sudo[187695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:45 compute-0 python3.9[187698]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:06:45 compute-0 systemd[1]: Stopping System Logging Service...
Oct 02 19:06:46 compute-0 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] exiting on signal 15.
Oct 02 19:06:46 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Oct 02 19:06:46 compute-0 systemd[1]: Stopped System Logging Service.
Oct 02 19:06:46 compute-0 systemd[1]: rsyslog.service: Consumed 2.255s CPU time, 5.2M memory peak, read 0B from disk, written 4.0M to disk.
Oct 02 19:06:46 compute-0 systemd[1]: Starting System Logging Service...
Oct 02 19:06:46 compute-0 rsyslogd[187702]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="187702" x-info="https://www.rsyslog.com"] start
Oct 02 19:06:46 compute-0 systemd[1]: Started System Logging Service.
Oct 02 19:06:46 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:06:46 compute-0 rsyslogd[187702]: Warning: Certificate file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Oct 02 19:06:46 compute-0 rsyslogd[187702]: Warning: Key file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Oct 02 19:06:46 compute-0 rsyslogd[187702]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2506.0-2.el9]
Oct 02 19:06:46 compute-0 sudo[187695]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:46 compute-0 rsyslogd[187702]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2506.0-2.el9]
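
The restarted rsyslogd negotiates TLS to the remote syslog server at 172.17.0.80 but warns that no local certificate or key file is set, i.e. the client presents no certificate of its own; the "no shared curve" line is informational. A minimal connectivity sketch, assuming the receiver listens on the conventional TLS syslog port 6514 (the port is not shown in the log) and validating against the CA deployed above:

    import socket, ssl

    HOST = "172.17.0.80"   # remote syslog server from the log
    PORT = 6514            # assumption: standard RFC 5425 TLS syslog port

    # Trust the CA bundle installed by the copy task earlier in the log.
    ctx = ssl.create_default_context(cafile="/etc/pki/rsyslog/ca-openshift.crt")
    ctx.check_hostname = False  # endpoint is a bare IP; cert may only carry a DNS SAN

    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            # A completed handshake here matches the "TLS Connection
            # initiated" line rsyslogd logged.
            print("negotiated", tls.version(), tls.cipher())
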
Oct 02 19:06:46 compute-0 sshd-session[186184]: Connection closed by 192.168.122.30 port 49942
Oct 02 19:06:46 compute-0 sshd-session[186165]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:06:46 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Oct 02 19:06:46 compute-0 systemd[1]: session-24.scope: Consumed 16.745s CPU time.
Oct 02 19:06:46 compute-0 systemd-logind[793]: Session 24 logged out. Waiting for processes to exit.
Oct 02 19:06:46 compute-0 systemd-logind[793]: Removed session 24.
Oct 02 19:06:53 compute-0 podman[187732]: 2025-10-02 19:06:53.684287225 +0000 UTC m=+0.107114391 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:06:54 compute-0 sshd-session[187756]: Accepted publickey for zuul from 38.102.83.68 port 51164 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 19:06:54 compute-0 systemd-logind[793]: New session 25 of user zuul.
Oct 02 19:06:54 compute-0 systemd[1]: Started Session 25 of User zuul.
Oct 02 19:06:54 compute-0 sshd-session[187756]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:06:55 compute-0 sudo[187832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywohycyuofavoeyilrmzryvxrxmlbpnf ; /usr/bin/python3'
Oct 02 19:06:55 compute-0 sudo[187832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:55 compute-0 useradd[187836]: new group: name=ceph-admin, GID=42478
Oct 02 19:06:55 compute-0 useradd[187836]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 02 19:06:55 compute-0 sudo[187832]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:55 compute-0 sudo[187918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxvctcqwjapyrwvzmeycujqndtcfbpyl ; /usr/bin/python3'
Oct 02 19:06:56 compute-0 sudo[187918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:56 compute-0 sudo[187918]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:56 compute-0 podman[187968]: 2025-10-02 19:06:56.695019672 +0000 UTC m=+0.121660426 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Oct 02 19:06:56 compute-0 sudo[188010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eidypzuxdjayiiznbiiaatahlkxainnr ; /usr/bin/python3'
Oct 02 19:06:56 compute-0 sudo[188010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:56 compute-0 sudo[188010]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:57 compute-0 sudo[188060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwwnsyzjerfifccahhylvejfelfpdjsu ; /usr/bin/python3'
Oct 02 19:06:57 compute-0 sudo[188060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:57 compute-0 sudo[188060]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:57 compute-0 sudo[188086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjxanacrtinvadlxxdgjvgfqnbafazow ; /usr/bin/python3'
Oct 02 19:06:57 compute-0 sudo[188086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:58 compute-0 sudo[188086]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:58 compute-0 sudo[188112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagqkerdsdvexsusjjolpehwdbwwouqd ; /usr/bin/python3'
Oct 02 19:06:58 compute-0 sudo[188112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:58 compute-0 sudo[188112]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:58 compute-0 sudo[188138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwmpsnrtfqswsqtmxxepsxcbzynwcas ; /usr/bin/python3'
Oct 02 19:06:58 compute-0 sudo[188138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:59 compute-0 sudo[188138]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:59 compute-0 sudo[188216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgqoluiwngfvzmokvggopwukoellibwc ; /usr/bin/python3'
Oct 02 19:06:59 compute-0 sudo[188216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:59 compute-0 sudo[188216]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:59 compute-0 podman[157186]: time="2025-10-02T19:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:06:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct 02 19:06:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2990 "" "Go-http-client/1.1"
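
The two GET lines above are the podman_exporter scraping podman's libpod REST API over the socket it bind-mounts (/run/podman/podman.sock, per its config_data). A sketch issuing the same containers/json query over the UNIX socket; the endpoint path is taken verbatim from the log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks HTTP over a UNIX domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])
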
Oct 02 19:07:00 compute-0 sudo[188289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkzlhtghhespoifnlqxxoksyuwblfqtc ; /usr/bin/python3'
Oct 02 19:07:00 compute-0 sudo[188289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:00 compute-0 sudo[188289]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:00 compute-0 sudo[188391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuckbpzxccjaxcvcijrifclfhsoelyhz ; /usr/bin/python3'
Oct 02 19:07:00 compute-0 sudo[188391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:01 compute-0 sudo[188391]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:01 compute-0 sudo[188464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnsikjyfdjzqlkplhoqkawblpzxsqev ; /usr/bin/python3'
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: ERROR   19:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: ERROR   19:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: ERROR   19:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:07:01 compute-0 sudo[188464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: ERROR   19:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:07:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:07:01 compute-0 sudo[188464]: pam_unix(sudo:session): session closed for user root
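
The openstack_network_exporter errors above mean it found no ovs-appctl control sockets for ovsdb-server or ovn-northd, and no userspace datapath for the dpif-netdev PMD queries; that is expected on a compute node where those daemons are not running locally. A sketch reproducing the socket check on the host directories the exporter container bind-mounts (control sockets are named <daemon>.<pid>.ctl):

    from pathlib import Path

    # Host paths from the container's volume list:
    #   /var/run/openvswitch    -> /run/openvswitch  (ovsdb-server, ovs-vswitchd)
    #   /var/lib/openvswitch/ovn -> /run/ovn         (ovn-northd)
    checks = [("/var/run/openvswitch", "ovsdb-server"),
              ("/var/lib/openvswitch/ovn", "ovn-northd")]

    for rundir, daemon in checks:
        hits = sorted(Path(rundir).glob(f"{daemon}.*.ctl"))
        print(daemon, "->", hits or "no control socket (matches the exporter error)")
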
Oct 02 19:07:02 compute-0 sudo[188514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxnzmivlakvvnenywvtzpytglxgtpws ; /usr/bin/python3'
Oct 02 19:07:02 compute-0 sudo[188514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:02 compute-0 python3[188516]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:07:03 compute-0 sudo[188514]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:04 compute-0 sudo[188618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgwvhkxockhsbzgajdypsezdtwikxazv ; /usr/bin/python3'
Oct 02 19:07:04 compute-0 sudo[188618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:04 compute-0 python3[188620]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 19:07:05 compute-0 sudo[188618]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:06 compute-0 sudo[188645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpayrffyfwkkptzhaevkcickntgoygak ; /usr/bin/python3'
Oct 02 19:07:06 compute-0 sudo[188645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:06 compute-0 python3[188647]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:06 compute-0 sudo[188645]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:06 compute-0 sudo[188671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilabqdfrerwniimrzibdjkvotthpymbo ; /usr/bin/python3'
Oct 02 19:07:06 compute-0 sudo[188671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:06 compute-0 python3[188673]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                           losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:06 compute-0 kernel: loop: module loaded
Oct 02 19:07:06 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 02 19:07:06 compute-0 sudo[188671]: pam_unix(sudo:session): session closed for user root
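
The shell task above prepares the first Ceph OSD backing store: dd with count=0 seek=20G creates a 20 GiB sparse file without writing any data (hence the kernel's "capacity change from 0 to 41943040" sectors, i.e. 20 GiB / 512 B), and losetup exposes it as /dev/loop3. A standalone sketch of the same logged sequence (run as root):

    import subprocess

    IMG = "/var/lib/ceph-osd-0.img"

    # count=0 seek=20G extends the file to 20 GiB while allocating no
    # blocks, so the backing store is sparse until the OSD writes to it.
    subprocess.run(["dd", "if=/dev/zero", f"of={IMG}",
                    "bs=1", "count=0", "seek=20G"], check=True)
    # Attach the image to a fixed loop device, as the playbook does.
    subprocess.run(["losetup", "/dev/loop3", IMG], check=True)
    # Confirm the new block device is visible.
    subprocess.run(["lsblk"], check=True)

The loop4/loop5 tasks later in the log repeat this pattern for ceph-osd-1.img and ceph-osd-2.img.
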
Oct 02 19:07:06 compute-0 podman[188678]: 2025-10-02 19:07:06.887668354 +0000 UTC m=+0.132701626 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Oct 02 19:07:06 compute-0 sudo[188744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncwovwvhshyqtjhjyxborhluykigygw ; /usr/bin/python3'
Oct 02 19:07:06 compute-0 sudo[188744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:06 compute-0 podman[188693]: 2025-10-02 19:07:06.98127974 +0000 UTC m=+0.160778066 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:07:07 compute-0 python3[188750]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                           vgcreate ceph_vg0 /dev/loop3
                                           lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:07 compute-0 lvm[188753]: PV /dev/loop3 not used.
Oct 02 19:07:07 compute-0 lvm[188762]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 19:07:07 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 02 19:07:07 compute-0 sudo[188744]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:07 compute-0 lvm[188764]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 02 19:07:07 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
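
The follow-up task layers LVM on the loop device: pvcreate marks /dev/loop3 as a physical volume, vgcreate builds the single-PV group ceph_vg0, and lvcreate allocates one logical volume over all free extents, which is what fires the udev autoactivation events (vgchange -aay) logged above. The same logged sequence as a sketch:

    import subprocess

    # PV -> VG -> LV over the whole device, exactly as invoked by the
    # ansible.legacy.command task; lvs at the end lists the result.
    for cmd in (["pvcreate", "/dev/loop3"],
                ["vgcreate", "ceph_vg0", "/dev/loop3"],
                ["lvcreate", "-n", "ceph_lv0", "-l", "+100%FREE", "ceph_vg0"],
                ["lvs"]):
        subprocess.run(cmd, check=True)
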
Oct 02 19:07:07 compute-0 sudo[188840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdltyaxjvffcvndqceccfxflfsggxlq ; /usr/bin/python3'
Oct 02 19:07:07 compute-0 sudo[188840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:08 compute-0 python3[188842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:07:08 compute-0 sudo[188840]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:08 compute-0 sudo[188913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zceomhxexqsgljvxkxokjpevztugwvup ; /usr/bin/python3'
Oct 02 19:07:08 compute-0 sudo[188913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:08 compute-0 python3[188915]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432027.6877565-33316-195420723217758/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:08 compute-0 sudo[188913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:09 compute-0 sudo[188963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bttgkxnjqgigcgfaaentiwaagmvtawmc ; /usr/bin/python3'
Oct 02 19:07:09 compute-0 sudo[188963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:09 compute-0 python3[188965]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:09 compute-0 systemd[1]: Reloading.
Oct 02 19:07:09 compute-0 systemd-sysv-generator[188997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:09 compute-0 systemd-rc-local-generator[188994]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:10 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 02 19:07:10 compute-0 bash[189006]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Oct 02 19:07:10 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 02 19:07:10 compute-0 sudo[188963]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:10 compute-0 lvm[189027]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 19:07:10 compute-0 lvm[189027]: VG ceph_vg0 finished
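
The unit installed at /etc/systemd/system/ceph-osd-losetup-0.service is rendered from ceph-osd-losetup.service.j2; its contents are not shown in the log. A hypothetical reconstruction only, consistent with what is logged (a oneshot "Ceph OSD losetup" unit whose start prints the losetup mapping for /dev/loop3, so the loop attachment survives reboots):

    # Assumption: unit shape inferred from the logged start sequence,
    # not taken from the actual template.
    UNIT_TEXT = """\
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    """

    with open("/etc/systemd/system/ceph-osd-losetup-0.service", "w") as f:
        f.write(UNIT_TEXT)
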
Oct 02 19:07:10 compute-0 podman[189004]: 2025-10-02 19:07:10.238410015 +0000 UTC m=+0.098871305 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:07:10 compute-0 sudo[189056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ughznrynwvgewdltmxkctjbbrjlnhipm ; /usr/bin/python3'
Oct 02 19:07:10 compute-0 sudo[189056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:10 compute-0 python3[189058]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 19:07:11 compute-0 sudo[189056]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:12 compute-0 sudo[189083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfqlvsazliyqgdmzbtufkhnqjgtitna ; /usr/bin/python3'
Oct 02 19:07:12 compute-0 sudo[189083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:12 compute-0 python3[189085]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:12 compute-0 sudo[189083]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:12 compute-0 sudo[189109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skyvxfvvninnxekcxjapleqvnjgvvwoc ; /usr/bin/python3'
Oct 02 19:07:12 compute-0 sudo[189109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:12 compute-0 python3[189111]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                           losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:12 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct 02 19:07:12 compute-0 sudo[189109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:12 compute-0 sudo[189140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xunybznaamddxdpqagjtojgmjrvfqnep ; /usr/bin/python3'
Oct 02 19:07:12 compute-0 sudo[189140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:13 compute-0 python3[189142]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                           vgcreate ceph_vg1 /dev/loop4
                                           lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:13 compute-0 lvm[189145]: PV /dev/loop4 not used.
Oct 02 19:07:13 compute-0 lvm[189159]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 19:07:13 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 02 19:07:13 compute-0 podman[189147]: 2025-10-02 19:07:13.391203833 +0000 UTC m=+0.121684576 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:07:13 compute-0 lvm[189176]:   1 logical volume(s) in volume group "ceph_vg1" now active
Oct 02 19:07:13 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 02 19:07:13 compute-0 sudo[189140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:13 compute-0 sudo[189255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrrddehsiytgtjbepyhjkngicomzesft ; /usr/bin/python3'
Oct 02 19:07:13 compute-0 sudo[189255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:14 compute-0 python3[189257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:07:14 compute-0 sudo[189255]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:14 compute-0 sudo[189328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjnwusssquuaebqmndkucslreyrydjed ; /usr/bin/python3'
Oct 02 19:07:14 compute-0 sudo[189328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:14 compute-0 python3[189330]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432033.6217544-33343-137437685547050/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:14 compute-0 sudo[189328]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:14 compute-0 sudo[189378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzntevpwicrhcxufbetleyvhmrpyudqb ; /usr/bin/python3'
Oct 02 19:07:14 compute-0 sudo[189378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:15 compute-0 python3[189380]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:15 compute-0 systemd[1]: Reloading.
Oct 02 19:07:15 compute-0 podman[189383]: 2025-10-02 19:07:15.40758164 +0000 UTC m=+0.114653041 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Oct 02 19:07:15 compute-0 systemd-rc-local-generator[189430]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:15 compute-0 systemd-sysv-generator[189434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:15 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 02 19:07:15 compute-0 bash[189438]: /dev/loop4: [64513]:4469531 (/var/lib/ceph-osd-1.img)
Oct 02 19:07:15 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 02 19:07:15 compute-0 lvm[189439]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 19:07:15 compute-0 lvm[189439]: VG ceph_vg1 finished
Oct 02 19:07:15 compute-0 sudo[189378]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:16 compute-0 sudo[189463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqgrgzuqsldrdoyxxifwgplhapyvpnqr ; /usr/bin/python3'
Oct 02 19:07:16 compute-0 sudo[189463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:16 compute-0 python3[189465]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 19:07:17 compute-0 sudo[189463]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:17 compute-0 sudo[189490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljisesclrdccsnbhaorfzdahyqyrnlg ; /usr/bin/python3'
Oct 02 19:07:17 compute-0 sudo[189490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:18 compute-0 python3[189492]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:18 compute-0 sudo[189490]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:18 compute-0 sudo[189516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfhezvfepzgumumsnndmqxlkuctuzmc ; /usr/bin/python3'
Oct 02 19:07:18 compute-0 sudo[189516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:18 compute-0 python3[189518]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                           losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:18 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct 02 19:07:18 compute-0 sudo[189516]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:18 compute-0 sudo[189548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buccuxldcbvozeobqvctarlqrbykncbp ; /usr/bin/python3'
Oct 02 19:07:18 compute-0 sudo[189548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:18 compute-0 python3[189550]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                           vgcreate ceph_vg2 /dev/loop5
                                           lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:18 compute-0 lvm[189553]: PV /dev/loop5 not used.
Oct 02 19:07:19 compute-0 lvm[189555]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 19:07:19 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 02 19:07:19 compute-0 lvm[189566]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 19:07:19 compute-0 lvm[189566]: VG ceph_vg2 finished
Oct 02 19:07:19 compute-0 lvm[189564]:   1 logical volume(s) in volume group "ceph_vg2" now active
Oct 02 19:07:19 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 02 19:07:19 compute-0 sudo[189548]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:19 compute-0 sudo[189643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heespjsqwsaoachssedjhtygcwxjgciw ; /usr/bin/python3'
Oct 02 19:07:19 compute-0 sudo[189643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:19 compute-0 python3[189645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:07:19 compute-0 sudo[189643]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:20 compute-0 sudo[189716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgemvwrrcbxunwqmiszjbzdpfsktflgy ; /usr/bin/python3'
Oct 02 19:07:20 compute-0 sudo[189716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:20 compute-0 python3[189718]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432039.418772-33370-176032310747390/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:20 compute-0 sudo[189716]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:20 compute-0 sudo[189766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esgmhexismswbzslheuhboikusygubyy ; /usr/bin/python3'
Oct 02 19:07:20 compute-0 sudo[189766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:21 compute-0 python3[189768]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:21 compute-0 systemd[1]: Reloading.
Oct 02 19:07:21 compute-0 systemd-sysv-generator[189801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:21 compute-0 systemd-rc-local-generator[189795]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:21 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 02 19:07:21 compute-0 bash[189808]: /dev/loop5: [64513]:4469532 (/var/lib/ceph-osd-2.img)
Oct 02 19:07:21 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 02 19:07:21 compute-0 sudo[189766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:21 compute-0 lvm[189810]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 19:07:21 compute-0 lvm[189810]: VG ceph_vg2 finished
Oct 02 19:07:23 compute-0 python3[189834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:07:24 compute-0 podman[189886]: 2025-10-02 19:07:24.691047137 +0000 UTC m=+0.113762758 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:07:26 compute-0 sudo[189957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieioogjmjgsweuuisfbmtnrdxvqtzzfv ; /usr/bin/python3'
Oct 02 19:07:26 compute-0 sudo[189957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:26 compute-0 python3[189959]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 19:07:27 compute-0 podman[189961]: 2025-10-02 19:07:27.66103985 +0000 UTC m=+0.093988557 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:07:27 compute-0 groupadd[189986]: group added to /etc/group: name=cephadm, GID=990
Oct 02 19:07:27 compute-0 groupadd[189986]: group added to /etc/gshadow: name=cephadm
Oct 02 19:07:27 compute-0 groupadd[189986]: new group: name=cephadm, GID=990
Oct 02 19:07:27 compute-0 useradd[189993]: new user: name=cephadm, UID=990, GID=990, home=/var/lib/cephadm, shell=/bin/bash, from=none
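The dnf task above installs cephadm; the groupadd/useradd messages that follow come from the package scriptlets creating the cephadm service account (GID/UID 990). An equivalent ad-hoc invocation of the logged module (a sketch; the actual play is not in the log):

ansible localhost -b -m ansible.builtin.dnf -a 'name=cephadm state=present'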
Oct 02 19:07:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 19:07:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 19:07:28 compute-0 sudo[189957]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:28 compute-0 sudo[190108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwlhmkudxatsmpvqctbozwhfpkqekjo ; /usr/bin/python3'
Oct 02 19:07:28 compute-0 sudo[190108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 19:07:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 19:07:28 compute-0 systemd[1]: run-rb7db19381a02470398189ee24398228d.service: Deactivated successfully.
Oct 02 19:07:28 compute-0 python3[190110]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:29 compute-0 sudo[190108]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:29 compute-0 sudo[190137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhlyhknvxlfqziubgzgwchwqbefgvrlg ; /usr/bin/python3'
Oct 02 19:07:29 compute-0 sudo[190137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:29 compute-0 python3[190139]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:29 compute-0 podman[157186]: time="2025-10-02T19:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:07:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct 02 19:07:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Oct 02 19:07:29 compute-0 sudo[190137]: pam_unix(sudo:session): session closed for user root
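The inventory probe just logged can be reproduced by hand; --no-detail makes cephadm return a compact JSON list of the daemons it manages on this host, typically an empty list before bootstrap:

sudo /usr/sbin/cephadm ls --no-detail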
Oct 02 19:07:30 compute-0 sudo[190204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjidbbhgvqvakzomslmoxyjxrjzgprrn ; /usr/bin/python3'
Oct 02 19:07:30 compute-0 sudo[190204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:30 compute-0 python3[190206]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:30 compute-0 sudo[190204]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:30 compute-0 sudo[190230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqtbvfkvxopoxkxrcoefhdimbvleaho ; /usr/bin/python3'
Oct 02 19:07:30 compute-0 sudo[190230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:31 compute-0 python3[190232]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:31 compute-0 sudo[190230]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:31 compute-0 openstack_network_exporter[159337]: ERROR   19:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:07:31 compute-0 openstack_network_exporter[159337]: ERROR   19:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:07:31 compute-0 openstack_network_exporter[159337]: ERROR   19:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:07:31 compute-0 openstack_network_exporter[159337]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:07:31 compute-0 openstack_network_exporter[159337]: ERROR   19:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
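These exporter errors are expected noise on a compute node: openstack_network_exporter probes the ovsdb-server and ovn-northd control sockets, but ovn-northd runs on controller nodes, and with no userspace (netdev) datapath the PMD appctl calls fail as well. A quick way to see which control sockets actually exist (default paths; this deployment may mount them elsewhere):

ls /var/run/openvswitch/*.ctl /var/run/ovn/*.ctl 2>/dev/null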
Oct 02 19:07:31 compute-0 sudo[190308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtzjxyvwnddykbouhamifrlxgehynan ; /usr/bin/python3'
Oct 02 19:07:31 compute-0 sudo[190308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:31 compute-0 python3[190310]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:07:31 compute-0 sudo[190308]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:32 compute-0 sudo[190381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acfauvchoadrymqntfzrfyricfzhvjmf ; /usr/bin/python3'
Oct 02 19:07:32 compute-0 sudo[190381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:32 compute-0 python3[190383]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432051.5256643-33517-209262000930648/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:32 compute-0 sudo[190381]: pam_unix(sudo:session): session closed for user root
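The content of ceph_spec.yaml is withheld from the log (content=NOT_LOGGING_PARAMETER). For illustration only, a hypothetical minimal single-host spec in cephadm's multi-document format might look like:

# Hypothetical spec, not the file actually deployed
cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
service_type: host
hostname: compute-0
addr: 192.168.122.100
---
service_type: mon
placement:
  hosts:
    - compute-0
EOF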
Oct 02 19:07:33 compute-0 sudo[190483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssjooqsyaloiwbnmjljvzqqcowjqqcfa ; /usr/bin/python3'
Oct 02 19:07:33 compute-0 sudo[190483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:33 compute-0 python3[190485]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:07:33 compute-0 sudo[190483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:34 compute-0 sudo[190556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbfqinxutkdcybdfulanytwvmjsqxhxn ; /usr/bin/python3'
Oct 02 19:07:34 compute-0 sudo[190556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:34 compute-0 python3[190558]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432053.2590725-33535-159966602080885/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:34 compute-0 sudo[190556]: pam_unix(sudo:session): session closed for user root
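As with the spec, the copied assimilate_ceph.conf content is not logged. A hypothetical minimal initial ceph.conf for a single-host cluster, of the kind passed to bootstrap via --config, could be:

# Hypothetical settings, shown only to illustrate the file's role
cat > /home/ceph-admin/assimilate_ceph.conf <<'EOF'
[global]
osd_pool_default_size = 1
osd_crush_chooseleaf_type = 0
EOF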
Oct 02 19:07:34 compute-0 sudo[190606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvprvslafeymbbkxpmqgzxbdcuesgzl ; /usr/bin/python3'
Oct 02 19:07:34 compute-0 sudo[190606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:34 compute-0 python3[190608]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:34 compute-0 sudo[190606]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:35 compute-0 sudo[190634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lucymxlatapknsvcmkapgzzuudkpgmrt ; /usr/bin/python3'
Oct 02 19:07:35 compute-0 sudo[190634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:35 compute-0 python3[190636]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:35 compute-0 sudo[190634]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:35 compute-0 sudo[190662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitdwhyafbpjjfnaqmlqnjuxbpdkhelt ; /usr/bin/python3'
Oct 02 19:07:35 compute-0 sudo[190662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:35 compute-0 python3[190664]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:35 compute-0 sudo[190662]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:36 compute-0 sudo[190690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qanwruesiwqtatsgjgcmfenkwgxqodui ; /usr/bin/python3'
Oct 02 19:07:36 compute-0 sudo[190690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:36 compute-0 python3[190692]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
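The same bootstrap invocation, wrapped for readability: --fsid pins the cluster ID, --single-host-defaults relaxes replication defaults for a one-node cluster, and --skip-monitoring-stack/--skip-dashboard omit the Prometheus/Grafana stack and the mgr dashboard.

/usr/sbin/cephadm bootstrap \
  --skip-firewalld --skip-prepare-host \
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
  --ssh-user ceph-admin \
  --allow-fqdn-hostname \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf \
  --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
  --config /home/ceph-admin/assimilate_ceph.conf \
  --single-host-defaults \
  --skip-monitoring-stack --skip-dashboard \
  --mon-ip 192.168.122.100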
Oct 02 19:07:36 compute-0 sshd-session[190708]: Accepted publickey for ceph-admin from 192.168.122.100 port 45128 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:07:36 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 19:07:36 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 19:07:36 compute-0 systemd-logind[793]: New session 26 of user ceph-admin.
Oct 02 19:07:36 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 19:07:36 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 19:07:36 compute-0 systemd[190712]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:07:36 compute-0 systemd[190712]: Queued start job for default target Main User Target.
Oct 02 19:07:36 compute-0 systemd[190712]: Created slice User Application Slice.
Oct 02 19:07:36 compute-0 systemd[190712]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 19:07:36 compute-0 systemd[190712]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 19:07:36 compute-0 systemd[190712]: Reached target Paths.
Oct 02 19:07:36 compute-0 systemd[190712]: Reached target Timers.
Oct 02 19:07:36 compute-0 systemd[190712]: Starting D-Bus User Message Bus Socket...
Oct 02 19:07:36 compute-0 systemd[190712]: Starting Create User's Volatile Files and Directories...
Oct 02 19:07:36 compute-0 systemd[190712]: Listening on D-Bus User Message Bus Socket.
Oct 02 19:07:36 compute-0 systemd[190712]: Reached target Sockets.
Oct 02 19:07:36 compute-0 systemd[190712]: Finished Create User's Volatile Files and Directories.
Oct 02 19:07:36 compute-0 systemd[190712]: Reached target Basic System.
Oct 02 19:07:36 compute-0 systemd[190712]: Reached target Main User Target.
Oct 02 19:07:36 compute-0 systemd[190712]: Startup finished in 161ms.
Oct 02 19:07:36 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 19:07:36 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 02 19:07:36 compute-0 sshd-session[190708]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:07:37 compute-0 sudo[190728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 02 19:07:37 compute-0 sudo[190728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:07:37 compute-0 sudo[190728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:37 compute-0 sshd-session[190727]: Received disconnect from 192.168.122.100 port 45128:11: disconnected by user
Oct 02 19:07:37 compute-0 sshd-session[190727]: Disconnected from user ceph-admin 192.168.122.100 port 45128
Oct 02 19:07:37 compute-0 sshd-session[190708]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 02 19:07:37 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Oct 02 19:07:37 compute-0 systemd-logind[793]: Session 26 logged out. Waiting for processes to exit.
Oct 02 19:07:37 compute-0 systemd-logind[793]: Removed session 26.
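The brief SSH session above (publickey login as ceph-admin followed by a bare sudo /bin/echo) is bootstrap verifying that its --ssh-user can log in and escalate on the host. Roughly equivalent by hand (a sketch; cephadm drives this over its own SSH connection):

ssh -i /home/ceph-admin/.ssh/id_rsa ceph-admin@192.168.122.100 sudo /bin/echo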
Oct 02 19:07:37 compute-0 podman[190752]: 2025-10-02 19:07:37.179351251 +0000 UTC m=+0.136086945 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350)
Oct 02 19:07:37 compute-0 podman[190753]: 2025-10-02 19:07:37.250659019 +0000 UTC m=+0.195086309 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:07:42 compute-0 podman[190848]: 2025-10-02 19:07:42.954449335 +0000 UTC m=+2.384149495 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:07:43 compute-0 podman[190873]: 2025-10-02 19:07:43.655347966 +0000 UTC m=+0.086270174 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:07:46 compute-0 podman[190891]: 2025-10-02 19:07:46.631742948 +0000 UTC m=+0.070011275 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543)
Oct 02 19:07:47 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 02 19:07:47 compute-0 systemd[190712]: Activating special unit Exit the Session...
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped target Main User Target.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped target Basic System.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped target Paths.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped target Sockets.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped target Timers.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 19:07:47 compute-0 systemd[190712]: Closed D-Bus User Message Bus Socket.
Oct 02 19:07:47 compute-0 systemd[190712]: Stopped Create User's Volatile Files and Directories.
Oct 02 19:07:47 compute-0 systemd[190712]: Removed slice User Application Slice.
Oct 02 19:07:47 compute-0 systemd[190712]: Reached target Shutdown.
Oct 02 19:07:47 compute-0 systemd[190712]: Finished Exit the Session.
Oct 02 19:07:47 compute-0 systemd[190712]: Reached target Exit the Session.
Oct 02 19:07:47 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 02 19:07:47 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 02 19:07:47 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 02 19:07:47 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 02 19:07:47 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 02 19:07:47 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 02 19:07:47 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 02 19:07:59 compute-0 podman[157186]: time="2025-10-02T19:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:07:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct 02 19:07:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Oct 02 19:08:01 compute-0 openstack_network_exporter[159337]: ERROR   19:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:08:01 compute-0 openstack_network_exporter[159337]: ERROR   19:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:08:01 compute-0 openstack_network_exporter[159337]: ERROR   19:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:08:01 compute-0 openstack_network_exporter[159337]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:08:01 compute-0 openstack_network_exporter[159337]: ERROR   19:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:08:03 compute-0 podman[190929]: 2025-10-02 19:08:03.287805642 +0000 UTC m=+7.715020898 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:08:03 compute-0 podman[190939]: 2025-10-02 19:08:03.326290102 +0000 UTC m=+4.752332728 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930)
Oct 02 19:08:03 compute-0 podman[190793]: 2025-10-02 19:08:03.366959089 +0000 UTC m=+26.207197705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:03 compute-0 podman[190970]: 2025-10-02 19:08:03.505625017 +0000 UTC m=+0.090872065 container create 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:08:03 compute-0 podman[190970]: 2025-10-02 19:08:03.470795603 +0000 UTC m=+0.056042711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:03 compute-0 systemd[1]: Started libpod-conmon-584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580.scope.
Oct 02 19:08:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:03 compute-0 podman[190970]: 2025-10-02 19:08:03.671294814 +0000 UTC m=+0.256541922 container init 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 19:08:03 compute-0 podman[190970]: 2025-10-02 19:08:03.68219431 +0000 UTC m=+0.267441358 container start 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:08:03 compute-0 podman[190970]: 2025-10-02 19:08:03.68829145 +0000 UTC m=+0.273538488 container attach 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:03 compute-0 jolly_taussig[190985]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 19:08:04 compute-0 systemd[1]: libpod-584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580.scope: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[190970]: 2025-10-02 19:08:04.021073151 +0000 UTC m=+0.606320189 container died 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-952f44dcbe833eb17244d14f79dad361e67a0f5716e135e597e2d80f8ddd5fd8-merged.mount: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[190970]: 2025-10-02 19:08:04.112757666 +0000 UTC m=+0.698004724 container remove 584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580 (image=quay.io/ceph/ceph:v18, name=jolly_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:08:04 compute-0 systemd[1]: libpod-conmon-584ea073e998efbd1683a7ec39f59158cd99459639d056e6ade1aee2270ef580.scope: Deactivated successfully.
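The short-lived jolly_taussig container prints the image's Ceph release, which matches cephadm's image-verification step at the start of bootstrap. Presumably equivalent to:

podman run --rm --entrypoint ceph quay.io/ceph/ceph:v18 --version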
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.246525026 +0000 UTC m=+0.083648856 container create b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.212635097 +0000 UTC m=+0.049758977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:04 compute-0 systemd[1]: Started libpod-conmon-b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea.scope.
Oct 02 19:08:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.377160874 +0000 UTC m=+0.214284774 container init b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.389521068 +0000 UTC m=+0.226644898 container start b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:04 compute-0 magical_ride[191017]: 167 167
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.39568372 +0000 UTC m=+0.232807590 container attach b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:04 compute-0 systemd[1]: libpod-b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea.scope: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.39914999 +0000 UTC m=+0.236273830 container died b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9c1c2b90368f5cd9359fe6c28ed8377df8ec1f6011236723abb900ac087887-merged.mount: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[191002]: 2025-10-02 19:08:04.472364331 +0000 UTC m=+0.309488181 container remove b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea (image=quay.io/ceph/ceph:v18, name=magical_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:04 compute-0 systemd[1]: libpod-conmon-b86f5258d256cbde9979be208af55c525ae65710e63bc3d5da45ef4ffa6e86ea.scope: Deactivated successfully.
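magical_ride prints "167 167", consistent with cephadm probing the ceph uid and gid inside the image before it creates files on the host. A guess at the underlying call:

podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph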
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.596069827 +0000 UTC m=+0.075406299 container create 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:04 compute-0 systemd[1]: Started libpod-conmon-8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3.scope.
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.570732102 +0000 UTC m=+0.050068554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.697925889 +0000 UTC m=+0.177262391 container init 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.715431459 +0000 UTC m=+0.194767931 container start 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.722217447 +0000 UTC m=+0.201553909 container attach 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 19:08:04 compute-0 elated_lamarr[191049]: AQCUzd5ojzNHLBAATTXkR9QGt+1knGObWXlBLA==
Oct 02 19:08:04 compute-0 systemd[1]: libpod-8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3.scope: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.750014806 +0000 UTC m=+0.229351328 container died 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e598bdf8a77fe0bdca82d63b1aa83a64a68bc2c50ab0084b99f75d31c9ab953b-merged.mount: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[191033]: 2025-10-02 19:08:04.808173692 +0000 UTC m=+0.287510124 container remove 8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3 (image=quay.io/ceph/ceph:v18, name=elated_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:08:04 compute-0 systemd[1]: libpod-conmon-8f7b891f2e2cbc1ca3035f7d6f834dfcea5ac281cdd122807a44c2e6c06782b3.scope: Deactivated successfully.
Oct 02 19:08:04 compute-0 podman[191068]: 2025-10-02 19:08:04.914232325 +0000 UTC m=+0.065267374 container create 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:04 compute-0 systemd[1]: Started libpod-conmon-63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a.scope.
Oct 02 19:08:04 compute-0 podman[191068]: 2025-10-02 19:08:04.8938395 +0000 UTC m=+0.044874539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:05 compute-0 podman[191068]: 2025-10-02 19:08:05.053730125 +0000 UTC m=+0.204765244 container init 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:08:05 compute-0 podman[191068]: 2025-10-02 19:08:05.069804326 +0000 UTC m=+0.220839385 container start 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:05 compute-0 podman[191068]: 2025-10-02 19:08:05.076685117 +0000 UTC m=+0.227720166 container attach 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:08:05 compute-0 sharp_ishizaka[191084]: AQCVzd5orfCLBRAAXMDlrMhQEn8SfUkNV1zz9A==
Oct 02 19:08:05 compute-0 systemd[1]: libpod-63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a.scope: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191068]: 2025-10-02 19:08:05.098688704 +0000 UTC m=+0.249723763 container died 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-727842eafc284989286a0f40b3bef83360d6df3a256d165eaf7ea608024413ba-merged.mount: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191068]: 2025-10-02 19:08:05.17970046 +0000 UTC m=+0.330735469 container remove 63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a (image=quay.io/ceph/ceph:v18, name=sharp_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:05 compute-0 systemd[1]: libpod-conmon-63690279383d355eaffa8dc6c7f600573d1904a0a67bbec06d3fc37b1a57058a.scope: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.293015932 +0000 UTC m=+0.072386149 container create e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.263620461 +0000 UTC m=+0.042990768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:05 compute-0 systemd[1]: Started libpod-conmon-e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99.scope.
Oct 02 19:08:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.424204214 +0000 UTC m=+0.203574511 container init e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.439821474 +0000 UTC m=+0.219191721 container start e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.448488651 +0000 UTC m=+0.227858898 container attach e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:05 compute-0 dazzling_nash[191117]: AQCVzd5oJ0n7HBAAkJMEA8tTydH2XH7osUwTxg==
Oct 02 19:08:05 compute-0 systemd[1]: libpod-e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99.scope: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.495861614 +0000 UTC m=+0.275231831 container died e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:05 compute-0 podman[191102]: 2025-10-02 19:08:05.555317524 +0000 UTC m=+0.334687771 container remove e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99 (image=quay.io/ceph/ceph:v18, name=dazzling_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:05 compute-0 systemd[1]: libpod-conmon-e9fae18bdb87ddb30170293c3863f4e92695716e980a577a0c1f029cc434fc99.scope: Deactivated successfully.
Oct 02 19:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fa72f0c68a620b5213d5bec1669ef466215e9b9311a406c9a87a8304a156f65-merged.mount: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.648952121 +0000 UTC m=+0.063781125 container create 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:05 compute-0 systemd[1]: Started libpod-conmon-6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748.scope.
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.620367461 +0000 UTC m=+0.035196495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17c344b1543eec84aa74e9813784a5afc660162818bbf8b08269a933ea6ac21/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.790564866 +0000 UTC m=+0.205393890 container init 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.805031366 +0000 UTC m=+0.219860400 container start 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.812581774 +0000 UTC m=+0.227410828 container attach 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:05 compute-0 festive_chatterjee[191154]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 02 19:08:05 compute-0 festive_chatterjee[191154]: setting min_mon_release = pacific
Oct 02 19:08:05 compute-0 festive_chatterjee[191154]: /usr/bin/monmaptool: set fsid to 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:05 compute-0 festive_chatterjee[191154]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 02 19:08:05 compute-0 systemd[1]: libpod-6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748.scope: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.878623546 +0000 UTC m=+0.293452600 container died 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17c344b1543eec84aa74e9813784a5afc660162818bbf8b08269a933ea6ac21-merged.mount: Deactivated successfully.
Oct 02 19:08:05 compute-0 podman[191135]: 2025-10-02 19:08:05.953683906 +0000 UTC m=+0.368512910 container remove 6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748 (image=quay.io/ceph/ceph:v18, name=festive_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:08:05 compute-0 systemd[1]: libpod-conmon-6b5115b2583b5335835c06f95764f7933080106c338ef1d1449ff3375667e748.scope: Deactivated successfully.
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.080254507 +0000 UTC m=+0.079067496 container create 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.044482948 +0000 UTC m=+0.043295987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:06 compute-0 systemd[1]: Started libpod-conmon-2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153.scope.
Oct 02 19:08:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80418e87ff649039addb1b979fe0f7b8c3ab9caab78c973d11222b7d951de3f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80418e87ff649039addb1b979fe0f7b8c3ab9caab78c973d11222b7d951de3f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80418e87ff649039addb1b979fe0f7b8c3ab9caab78c973d11222b7d951de3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80418e87ff649039addb1b979fe0f7b8c3ab9caab78c973d11222b7d951de3f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.225767945 +0000 UTC m=+0.224580964 container init 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.258145794 +0000 UTC m=+0.256958783 container start 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.263964397 +0000 UTC m=+0.262777446 container attach 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:08:06 compute-0 systemd[1]: libpod-2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153.scope: Deactivated successfully.
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.377800944 +0000 UTC m=+0.376613903 container died 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b80418e87ff649039addb1b979fe0f7b8c3ab9caab78c973d11222b7d951de3f-merged.mount: Deactivated successfully.
Oct 02 19:08:06 compute-0 podman[191172]: 2025-10-02 19:08:06.438972148 +0000 UTC m=+0.437785107 container remove 2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153 (image=quay.io/ceph/ceph:v18, name=sweet_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:06 compute-0 systemd[1]: libpod-conmon-2a1bf302f39bec4ba78f2248738c4777cff316a0bb493df23e6756c474cea153.scope: Deactivated successfully.
Oct 02 19:08:06 compute-0 systemd[1]: Reloading.
Oct 02 19:08:06 compute-0 systemd-sysv-generator[191257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:06 compute-0 systemd-rc-local-generator[191254]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:07 compute-0 systemd[1]: Reloading.
Oct 02 19:08:07 compute-0 systemd-sysv-generator[191296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:07 compute-0 systemd-rc-local-generator[191289]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:07 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 02 19:08:07 compute-0 systemd[1]: Reloading.
Oct 02 19:08:07 compute-0 podman[191302]: 2025-10-02 19:08:07.697800256 +0000 UTC m=+0.155483260 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Oct 02 19:08:07 compute-0 systemd-rc-local-generator[191370]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:07 compute-0 systemd-sysv-generator[191373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:07 compute-0 podman[191303]: 2025-10-02 19:08:07.730853594 +0000 UTC m=+0.187810719 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:08:07 compute-0 systemd[1]: Reached target Ceph cluster 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:07 compute-0 systemd[1]: Reloading.
Oct 02 19:08:08 compute-0 systemd-sysv-generator[191417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:08 compute-0 systemd-rc-local-generator[191413]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:08 compute-0 systemd[1]: Reloading.
Oct 02 19:08:08 compute-0 systemd-rc-local-generator[191453]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:08 compute-0 systemd-sysv-generator[191458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:09 compute-0 systemd[1]: Created slice Slice /system/ceph-6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:09 compute-0 systemd[1]: Reached target System Time Set.
Oct 02 19:08:09 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 02 19:08:09 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:08:09 compute-0 podman[191508]: 2025-10-02 19:08:09.523511057 +0000 UTC m=+0.092621141 container create 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:08:09 compute-0 podman[191508]: 2025-10-02 19:08:09.489263608 +0000 UTC m=+0.058373752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8074cb5f8680f0229d0a5fdc42a15a0926696267025c86b5ebef66547020ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8074cb5f8680f0229d0a5fdc42a15a0926696267025c86b5ebef66547020ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8074cb5f8680f0229d0a5fdc42a15a0926696267025c86b5ebef66547020ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8074cb5f8680f0229d0a5fdc42a15a0926696267025c86b5ebef66547020ab/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 podman[191508]: 2025-10-02 19:08:09.675499524 +0000 UTC m=+0.244609668 container init 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:09 compute-0 podman[191508]: 2025-10-02 19:08:09.692915781 +0000 UTC m=+0.262025865 container start 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:09 compute-0 bash[191508]: 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8
Oct 02 19:08:09 compute-0 systemd[1]: Started Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:09 compute-0 ceph-mon[191527]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: pidfile_write: ignore empty --pid-file
Oct 02 19:08:09 compute-0 ceph-mon[191527]: load: jerasure load: lrc 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Git sha 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: DB SUMMARY
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: DB Session ID:  JSM2FJ23N0GT6XI9R6YN
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                                     Options.env: 0x55b828ea0c40
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                                Options.info_log: 0x55b829f9ce80
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                                 Options.wal_dir: 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                    Options.write_buffer_manager: 0x55b829facb40
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                               Options.row_cache: None
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                              Options.wal_filter: None
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.wal_compression: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.max_background_jobs: 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Compression algorithms supported:
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kZSTD supported: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:           Options.merge_operator: 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:        Options.compaction_filter: None
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b829f9ca80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b829f951f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.compression: NoCompression
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.num_levels: 7
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11520233-f463-4c1a-a5e6-8f5a74526a6e
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432089774017, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432089787000, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "JSM2FJ23N0GT6XI9R6YN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432089787138, "job": 1, "event": "recovery_finished"}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b829fbee00
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: DB pointer 0x55b82a0c8000
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:08:09 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                             Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b829f951f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Oct 02 19:08:09 compute-0 ceph-mon[191527]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@-1(???) e0 preinit fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 02 19:08:09 compute-0 ceph-mon[191527]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 19:08:09 compute-0 ceph-mon[191527]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T19:08:06.309832Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,os=Linux}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).mds e1 new map
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mkfs 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:09 compute-0 podman[191528]: 2025-10-02 19:08:09.854110021 +0000 UTC m=+0.091625205 container create 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 19:08:09 compute-0 ceph-mon[191527]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 02 19:08:09 compute-0 ceph-mon[191527]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:09 compute-0 systemd[1]: Started libpod-conmon-05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267.scope.
Oct 02 19:08:09 compute-0 podman[191528]: 2025-10-02 19:08:09.817781787 +0000 UTC m=+0.055297001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674fa7067cec9bd80ce9358e8da9bd6037adf3ce1445120dcc270b179cc706bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674fa7067cec9bd80ce9358e8da9bd6037adf3ce1445120dcc270b179cc706bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674fa7067cec9bd80ce9358e8da9bd6037adf3ce1445120dcc270b179cc706bd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:09 compute-0 podman[191528]: 2025-10-02 19:08:09.995074759 +0000 UTC m=+0.232590033 container init 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:10 compute-0 podman[191528]: 2025-10-02 19:08:10.027660084 +0000 UTC m=+0.265175298 container start 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:10 compute-0 podman[191528]: 2025-10-02 19:08:10.037697657 +0000 UTC m=+0.275212871 container attach 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:10 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 19:08:10 compute-0 ceph-mon[191527]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820912341' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 19:08:10 compute-0 elated_bartik[191579]:   cluster:
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     id:     6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     health: HEALTH_OK
Oct 02 19:08:10 compute-0 elated_bartik[191579]:  
Oct 02 19:08:10 compute-0 elated_bartik[191579]:   services:
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     mon: 1 daemons, quorum compute-0 (age 0.640423s)
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     mgr: no daemons active
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     osd: 0 osds: 0 up, 0 in
Oct 02 19:08:10 compute-0 elated_bartik[191579]:  
Oct 02 19:08:10 compute-0 elated_bartik[191579]:   data:
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     pools:   0 pools, 0 pgs
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     objects: 0 objects, 0 B
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     usage:   0 B used, 0 B / 0 B avail
Oct 02 19:08:10 compute-0 elated_bartik[191579]:     pgs:     
Oct 02 19:08:10 compute-0 elated_bartik[191579]:  
Oct 02 19:08:10 compute-0 systemd[1]: libpod-05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267.scope: Deactivated successfully.
Oct 02 19:08:10 compute-0 podman[191528]: 2025-10-02 19:08:10.498166689 +0000 UTC m=+0.735681913 container died 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-674fa7067cec9bd80ce9358e8da9bd6037adf3ce1445120dcc270b179cc706bd-merged.mount: Deactivated successfully.
Oct 02 19:08:10 compute-0 podman[191528]: 2025-10-02 19:08:10.583669412 +0000 UTC m=+0.821184626 container remove 05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267 (image=quay.io/ceph/ceph:v18, name=elated_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:08:10 compute-0 systemd[1]: libpod-conmon-05f67e0a8c21ea1553959c9e2c9b9aeba12f2040292b2920751f9a20c5884267.scope: Deactivated successfully.
Oct 02 19:08:10 compute-0 podman[191618]: 2025-10-02 19:08:10.705830307 +0000 UTC m=+0.075698737 container create a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:10 compute-0 podman[191618]: 2025-10-02 19:08:10.675364908 +0000 UTC m=+0.045233418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:10 compute-0 systemd[1]: Started libpod-conmon-a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554.scope.
Oct 02 19:08:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574386c164b654c04e04b3c5ad827c8069a445508f277c009e609a18bc7ed630/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574386c164b654c04e04b3c5ad827c8069a445508f277c009e609a18bc7ed630/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574386c164b654c04e04b3c5ad827c8069a445508f277c009e609a18bc7ed630/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574386c164b654c04e04b3c5ad827c8069a445508f277c009e609a18bc7ed630/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:10 compute-0 podman[191618]: 2025-10-02 19:08:10.85725415 +0000 UTC m=+0.227122580 container init a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:08:10 compute-0 ceph-mon[191527]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:10 compute-0 ceph-mon[191527]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 19:08:10 compute-0 ceph-mon[191527]: fsmap 
Oct 02 19:08:10 compute-0 ceph-mon[191527]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 19:08:10 compute-0 ceph-mon[191527]: mgrmap e1: no daemons active
Oct 02 19:08:10 compute-0 ceph-mon[191527]: from='client.? 192.168.122.100:0/3820912341' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 19:08:10 compute-0 podman[191618]: 2025-10-02 19:08:10.899833247 +0000 UTC m=+0.269701697 container start a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:10 compute-0 podman[191618]: 2025-10-02 19:08:10.906446501 +0000 UTC m=+0.276314921 container attach a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:11 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 19:08:11 compute-0 ceph-mon[191527]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1782666774' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:08:11 compute-0 ceph-mon[191527]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1782666774' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 19:08:11 compute-0 tender_nash[191634]: 
Oct 02 19:08:11 compute-0 tender_nash[191634]: [global]
Oct 02 19:08:11 compute-0 tender_nash[191634]:         fsid = 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:11 compute-0 tender_nash[191634]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 02 19:08:11 compute-0 tender_nash[191634]:         osd_crush_chooseleaf_type = 0
Oct 02 19:08:11 compute-0 systemd[1]: libpod-a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554.scope: Deactivated successfully.
Oct 02 19:08:11 compute-0 podman[191660]: 2025-10-02 19:08:11.448270517 +0000 UTC m=+0.051152634 container died a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-574386c164b654c04e04b3c5ad827c8069a445508f277c009e609a18bc7ed630-merged.mount: Deactivated successfully.
Oct 02 19:08:11 compute-0 podman[191660]: 2025-10-02 19:08:11.531929542 +0000 UTC m=+0.134811639 container remove a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554 (image=quay.io/ceph/ceph:v18, name=tender_nash, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:08:11 compute-0 systemd[1]: libpod-conmon-a706fbd15def523e88aeabfa3efadace3493e9292f1a5ab8a66eb8edf9890554.scope: Deactivated successfully.
Oct 02 19:08:11 compute-0 podman[191674]: 2025-10-02 19:08:11.673547287 +0000 UTC m=+0.086487620 container create 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:08:11 compute-0 systemd[1]: Started libpod-conmon-8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0.scope.
Oct 02 19:08:11 compute-0 podman[191674]: 2025-10-02 19:08:11.642820971 +0000 UTC m=+0.055761404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcdf0cea9acd0100a63e948dc9869757f360b4e06599348a2c21cb31431c4a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcdf0cea9acd0100a63e948dc9869757f360b4e06599348a2c21cb31431c4a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcdf0cea9acd0100a63e948dc9869757f360b4e06599348a2c21cb31431c4a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcdf0cea9acd0100a63e948dc9869757f360b4e06599348a2c21cb31431c4a9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:11 compute-0 podman[191674]: 2025-10-02 19:08:11.800985621 +0000 UTC m=+0.213926044 container init 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:08:11 compute-0 podman[191674]: 2025-10-02 19:08:11.812637737 +0000 UTC m=+0.225578090 container start 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:08:11 compute-0 podman[191674]: 2025-10-02 19:08:11.818704806 +0000 UTC m=+0.231645229 container attach 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:11 compute-0 ceph-mon[191527]: from='client.? 192.168.122.100:0/1782666774' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:08:11 compute-0 ceph-mon[191527]: from='client.? 192.168.122.100:0/1782666774' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 19:08:12 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:08:12 compute-0 ceph-mon[191527]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990950747' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:08:12 compute-0 systemd[1]: libpod-8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0.scope: Deactivated successfully.
Oct 02 19:08:12 compute-0 podman[191674]: 2025-10-02 19:08:12.376005998 +0000 UTC m=+0.788946361 container died 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcdf0cea9acd0100a63e948dc9869757f360b4e06599348a2c21cb31431c4a9-merged.mount: Deactivated successfully.
Oct 02 19:08:12 compute-0 podman[191674]: 2025-10-02 19:08:12.472797236 +0000 UTC m=+0.885737599 container remove 8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0 (image=quay.io/ceph/ceph:v18, name=nervous_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:08:12 compute-0 systemd[1]: libpod-conmon-8bbe319303175d7227e24a0e803815ef25cc88337686167cc6e8ebf1036e93d0.scope: Deactivated successfully.
Oct 02 19:08:12 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:08:12 compute-0 ceph-mon[191527]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 19:08:12 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 19:08:12 compute-0 ceph-mon[191527]: mon.compute-0@0(leader) e1 shutdown
Oct 02 19:08:12 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0[191523]: 2025-10-02T19:08:12.874+0000 7f3b78cf1640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 02 19:08:12 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0[191523]: 2025-10-02T19:08:12.874+0000 7f3b78cf1640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 02 19:08:12 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 19:08:12 compute-0 ceph-mon[191527]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 19:08:12 compute-0 podman[191755]: 2025-10-02 19:08:12.993227351 +0000 UTC m=+0.208565404 container died 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e8074cb5f8680f0229d0a5fdc42a15a0926696267025c86b5ebef66547020ab-merged.mount: Deactivated successfully.
Oct 02 19:08:13 compute-0 podman[191755]: 2025-10-02 19:08:13.06981805 +0000 UTC m=+0.285156103 container remove 037d4537a6ee0f3e7f2d42f83a378e44b727bf75a47cddf70a7b1fae7b3d9be8 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:08:13 compute-0 bash[191755]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0
Oct 02 19:08:13 compute-0 podman[191779]: 2025-10-02 19:08:13.188040032 +0000 UTC m=+0.107159643 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:08:13 compute-0 systemd[1]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mon.compute-0.service: Deactivated successfully.
Oct 02 19:08:13 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:13 compute-0 systemd[1]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mon.compute-0.service: Consumed 2.216s CPU time.
Oct 02 19:08:13 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:08:13 compute-0 podman[191876]: 2025-10-02 19:08:13.844894046 +0000 UTC m=+0.100435386 container create a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:13 compute-0 podman[191876]: 2025-10-02 19:08:13.807979657 +0000 UTC m=+0.063521047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf59bec9a24f0d86f99042fd6c96603b7385d248868fc58f2826c7094d3f7d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf59bec9a24f0d86f99042fd6c96603b7385d248868fc58f2826c7094d3f7d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf59bec9a24f0d86f99042fd6c96603b7385d248868fc58f2826c7094d3f7d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf59bec9a24f0d86f99042fd6c96603b7385d248868fc58f2826c7094d3f7d3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 podman[191876]: 2025-10-02 19:08:13.950525087 +0000 UTC m=+0.206066467 container init a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:08:13 compute-0 podman[191876]: 2025-10-02 19:08:13.981813598 +0000 UTC m=+0.237354928 container start a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:08:13 compute-0 bash[191876]: a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1
Oct 02 19:08:14 compute-0 systemd[1]: Started Ceph mon.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:14 compute-0 podman[191889]: 2025-10-02 19:08:14.024059957 +0000 UTC m=+0.124942240 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: pidfile_write: ignore empty --pid-file
Oct 02 19:08:14 compute-0 ceph-mon[191910]: load: jerasure load: lrc 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Git sha 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: DB SUMMARY
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: DB Session ID:  FLU3CD8VPCVBNG3UEXDY
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                                     Options.env: 0x557df131fc40
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                                Options.info_log: 0x557df355b040
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                                 Options.wal_dir: 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                    Options.write_buffer_manager: 0x557df356ab40
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                               Options.row_cache: None
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                              Options.wal_filter: None
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.wal_compression: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.max_background_jobs: 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Compression algorithms supported:
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kZSTD supported: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:           Options.merge_operator: 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:        Options.compaction_filter: None
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557df355ac40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x557df35531f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.compression: NoCompression
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.num_levels: 7
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded, manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 5
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11520233-f463-4c1a-a5e6-8f5a74526a6e
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432094059035, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432094064340, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432094, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432094064630, "job": 1, "event": "recovery_finished"}
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557df357ce00
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: DB pointer 0x557df3606000
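
The rocksdb recovery just completed is traceable through its EVENT_LOG_v1 lines: recovery_started names WAL file 9, table_file_creation shows that WAL being flushed into SST 13 (54153 bytes, 100 entries), and recovery_finished closes job 1. Each record is a single JSON object after the "EVENT_LOG_v1 " tag, so the sequence can be pulled out mechanically; a small sketch, assuming journal lines shaped like the ones above:

    import json

    def rocksdb_events(lines):
        tag = "EVENT_LOG_v1 "
        for line in lines:
            pos = line.find(tag)
            if pos != -1:
                yield json.loads(line[pos + len(tag):])

    # for ev in rocksdb_events(open("journal.txt")):
    #     print(ev["time_micros"], ev["event"])
    # -> recovery_started, table_file_creation, recovery_finished
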
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 2.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 2.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 0.000103 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
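
The stats dump prints the block cache size twice in different units, and the two figures agree; a quick check, assuming only RocksDB's usual sharding rule that num_shard_bits: 4 splits the cache into 2**4 shards:

    # 536870912 bytes (table_factory options) == 512.00 MB (cache stats)
    assert 536870912 == 512 * 1024 * 1024

    # With num_shard_bits = 4 the BinnedLRUCache is split into 16 shards:
    print(536870912 // (1 << 4))  # 33554432 bytes (32 MiB) per shard
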
Oct 02 19:08:14 compute-0 ceph-mon[191910]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???) e1 preinit fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).mds e1 new map
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 02 19:08:14 compute-0 ceph-mon[191910]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 19:08:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 19:08:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : fsmap 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 02 19:08:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
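
The monmap line just logged encodes each monitor's messenger endpoints in a fixed {name=[v2:addr/nonce,v1:addr/nonce]} form; a regex sketch, assuming exactly the layout printed above:

    import re

    monmap = ("monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,"
              "v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}")
    pattern = r"([\w.-]+)=\[v2:([\d.:]+)/\d+,v1:([\d.:]+)/\d+\]"
    for name, v2, v1 in re.findall(pattern, monmap):
        print(name, v2, v1)
    # -> compute-0 192.168.122.100:3300 192.168.122.100:6789
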
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.123762502 +0000 UTC m=+0.077480223 container create 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 19:08:14 compute-0 ceph-mon[191910]: fsmap 
Oct 02 19:08:14 compute-0 ceph-mon[191910]: osdmap e1: 0 total, 0 up, 0 in
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mgrmap e1: no daemons active
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.091113016 +0000 UTC m=+0.044830777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:14 compute-0 systemd[1]: Started libpod-conmon-32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e.scope.
Oct 02 19:08:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a310ef34df30cafe86654d64e91fcb60b2708ac5f6882d27d4146fd30e9c76b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a310ef34df30cafe86654d64e91fcb60b2708ac5f6882d27d4146fd30e9c76b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a310ef34df30cafe86654d64e91fcb60b2708ac5f6882d27d4146fd30e9c76b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
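
The three xfs remount warnings above all cite the same bound, 0x7fffffff: the largest signed 32-bit timestamp. Converting it confirms the 2038 cutoff the kernel is warning about; nothing is assumed here beyond the hex constant in the messages:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch, i.e. 2**31 - 1:
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
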
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.288164186 +0000 UTC m=+0.241881947 container init 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.306669471 +0000 UTC m=+0.260387172 container start 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.315090962 +0000 UTC m=+0.268808723 container attach 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 02 19:08:14 compute-0 systemd[1]: libpod-32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e.scope: Deactivated successfully.
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.770361757 +0000 UTC m=+0.724079498 container died 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a310ef34df30cafe86654d64e91fcb60b2708ac5f6882d27d4146fd30e9c76b-merged.mount: Deactivated successfully.
Oct 02 19:08:14 compute-0 podman[191914]: 2025-10-02 19:08:14.836677137 +0000 UTC m=+0.790394828 container remove 32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e (image=quay.io/ceph/ceph:v18, name=competent_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:14 compute-0 systemd[1]: libpod-conmon-32da52c0a2f023e24a8d98d52a2420b739756f35a91ffad85b476d618a87049e.scope: Deactivated successfully.
Oct 02 19:08:14 compute-0 podman[192007]: 2025-10-02 19:08:14.969567264 +0000 UTC m=+0.086813709 container create ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:14.934674718 +0000 UTC m=+0.051921193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:15 compute-0 systemd[1]: Started libpod-conmon-ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455.scope.
Oct 02 19:08:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22af0070f795e7cc01caff5a94b60f6d12c047d42e8cb5c800c7e83b201e0edb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22af0070f795e7cc01caff5a94b60f6d12c047d42e8cb5c800c7e83b201e0edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22af0070f795e7cc01caff5a94b60f6d12c047d42e8cb5c800c7e83b201e0edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:15.127618401 +0000 UTC m=+0.244864846 container init ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:15.160835312 +0000 UTC m=+0.278081757 container start ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:15.167792035 +0000 UTC m=+0.285038530 container attach ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:08:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 02 19:08:15 compute-0 systemd[1]: libpod-ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455.scope: Deactivated successfully.
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:15.659222108 +0000 UTC m=+0.776468543 container died ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 19:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-22af0070f795e7cc01caff5a94b60f6d12c047d42e8cb5c800c7e83b201e0edb-merged.mount: Deactivated successfully.
Oct 02 19:08:15 compute-0 podman[192007]: 2025-10-02 19:08:15.738893169 +0000 UTC m=+0.856139614 container remove ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455 (image=quay.io/ceph/ceph:v18, name=modest_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:15 compute-0 systemd[1]: libpod-conmon-ee4834c1caa29d9416549ef49ca03a0dcab1b4d7abcf8e08b73eeeb567216455.scope: Deactivated successfully.
Oct 02 19:08:15 compute-0 systemd[1]: Reloading.
Oct 02 19:08:16 compute-0 systemd-rc-local-generator[192087]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:16 compute-0 systemd-sysv-generator[192090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:16 compute-0 systemd[1]: Reloading.
Oct 02 19:08:16 compute-0 systemd-rc-local-generator[192129]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:16 compute-0 systemd-sysv-generator[192134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:17 compute-0 systemd[1]: Starting Ceph mgr.compute-0.uktbkz for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:08:17 compute-0 podman[192140]: 2025-10-02 19:08:17.157541569 +0000 UTC m=+0.119900677 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, io.openshift.expose-services=, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:08:17 compute-0 podman[192203]: 2025-10-02 19:08:17.464282467 +0000 UTC m=+0.092630151 container create f7f69af0ab8128941783b568600ae67d11486f19ee33fe71de3bbdbbe910bf46 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:17 compute-0 podman[192203]: 2025-10-02 19:08:17.432529104 +0000 UTC m=+0.060876838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec9a16bd91e4162176a3c3500ef652025cd0d7a44062a68f342d3568fc79d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec9a16bd91e4162176a3c3500ef652025cd0d7a44062a68f342d3568fc79d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec9a16bd91e4162176a3c3500ef652025cd0d7a44062a68f342d3568fc79d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec9a16bd91e4162176a3c3500ef652025cd0d7a44062a68f342d3568fc79d13/merged/var/lib/ceph/mgr/ceph-compute-0.uktbkz supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:17 compute-0 podman[192203]: 2025-10-02 19:08:17.741203463 +0000 UTC m=+0.369551197 container init f7f69af0ab8128941783b568600ae67d11486f19ee33fe71de3bbdbbe910bf46 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:17 compute-0 podman[192203]: 2025-10-02 19:08:17.758223979 +0000 UTC m=+0.386571663 container start f7f69af0ab8128941783b568600ae67d11486f19ee33fe71de3bbdbbe910bf46 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:08:17 compute-0 bash[192203]: f7f69af0ab8128941783b568600ae67d11486f19ee33fe71de3bbdbbe910bf46
Oct 02 19:08:17 compute-0 systemd[1]: Started Ceph mgr.compute-0.uktbkz for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:08:17 compute-0 ceph-mgr[192222]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:08:17 compute-0 ceph-mgr[192222]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 19:08:17 compute-0 ceph-mgr[192222]: pidfile_write: ignore empty --pid-file
Oct 02 19:08:17 compute-0 podman[192223]: 2025-10-02 19:08:17.952704582 +0000 UTC m=+0.107871182 container create a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:17 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'alerts'
Oct 02 19:08:17 compute-0 podman[192223]: 2025-10-02 19:08:17.915538607 +0000 UTC m=+0.070705197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:18 compute-0 systemd[1]: Started libpod-conmon-a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a.scope.
Oct 02 19:08:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b826e75fc8c940543c71a6ea1377c2bc3ec5ffb11aa5fb70089eebf01d3c3f15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b826e75fc8c940543c71a6ea1377c2bc3ec5ffb11aa5fb70089eebf01d3c3f15/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b826e75fc8c940543c71a6ea1377c2bc3ec5ffb11aa5fb70089eebf01d3c3f15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:18 compute-0 podman[192223]: 2025-10-02 19:08:18.165673409 +0000 UTC m=+0.320840049 container init a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:18 compute-0 podman[192223]: 2025-10-02 19:08:18.182154122 +0000 UTC m=+0.337320712 container start a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:18 compute-0 podman[192223]: 2025-10-02 19:08:18.190366127 +0000 UTC m=+0.345532717 container attach a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:08:18 compute-0 ceph-mgr[192222]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:08:18 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:18.313+0000 7f092430b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:08:18 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'balancer'
Oct 02 19:08:18 compute-0 ceph-mgr[192222]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:08:18 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:18.560+0000 7f092430b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:08:18 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'cephadm'
Oct 02 19:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931782251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:18 compute-0 kind_neumann[192260]: 
Oct 02 19:08:18 compute-0 kind_neumann[192260]: {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "health": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "status": "HEALTH_OK",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "checks": {},
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "mutes": []
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "election_epoch": 5,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "quorum": [
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         0
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     ],
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "quorum_names": [
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "compute-0"
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     ],
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "quorum_age": 4,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "monmap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "epoch": 1,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "min_mon_release_name": "reef",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_mons": 1
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "osdmap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "epoch": 1,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_osds": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_up_osds": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "osd_up_since": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_in_osds": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "osd_in_since": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_remapped_pgs": 0
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "pgmap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "pgs_by_state": [],
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_pgs": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_pools": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_objects": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "data_bytes": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "bytes_used": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "bytes_avail": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "bytes_total": 0
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "fsmap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "epoch": 1,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "by_rank": [],
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "up:standby": 0
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "mgrmap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "available": false,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "num_standbys": 0,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "modules": [
Oct 02 19:08:18 compute-0 kind_neumann[192260]:             "iostat",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:             "nfs",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:             "restful"
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         ],
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "services": {}
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "servicemap": {
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "epoch": 1,
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:18 compute-0 kind_neumann[192260]:         "services": {}
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     },
Oct 02 19:08:18 compute-0 kind_neumann[192260]:     "progress_events": {}
Oct 02 19:08:18 compute-0 kind_neumann[192260]: }
Oct 02 19:08:18 compute-0 systemd[1]: libpod-a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a.scope: Deactivated successfully.
Oct 02 19:08:18 compute-0 conmon[192260]: conmon a25ac8f58b9ffbb097a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a.scope/container/memory.events
Oct 02 19:08:18 compute-0 podman[192223]: 2025-10-02 19:08:18.71415071 +0000 UTC m=+0.869317310 container died a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3931782251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b826e75fc8c940543c71a6ea1377c2bc3ec5ffb11aa5fb70089eebf01d3c3f15-merged.mount: Deactivated successfully.
Oct 02 19:08:18 compute-0 podman[192223]: 2025-10-02 19:08:18.79649226 +0000 UTC m=+0.951658820 container remove a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a (image=quay.io/ceph/ceph:v18, name=kind_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:18 compute-0 systemd[1]: libpod-conmon-a25ac8f58b9ffbb097a48e282a955a92f95db9f4ccd3afe2cd0643dc9cca1e4a.scope: Deactivated successfully.
Oct 02 19:08:20 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'crash'
Oct 02 19:08:20 compute-0 podman[192310]: 2025-10-02 19:08:20.889292248 +0000 UTC m=+0.042318481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:20 compute-0 podman[192310]: 2025-10-02 19:08:20.914190471 +0000 UTC m=+0.067216614 container create c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:08:20 compute-0 ceph-mgr[192222]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:08:20 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:20.955+0000 7f092430b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:08:20 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'dashboard'
Oct 02 19:08:20 compute-0 systemd[1]: Started libpod-conmon-c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae.scope.
Oct 02 19:08:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3426528d488d566b90cac5a007409d46aeb8652837c1bced84da175392c353e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3426528d488d566b90cac5a007409d46aeb8652837c1bced84da175392c353e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3426528d488d566b90cac5a007409d46aeb8652837c1bced84da175392c353e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:21 compute-0 podman[192310]: 2025-10-02 19:08:21.073902112 +0000 UTC m=+0.226928285 container init c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 02 19:08:21 compute-0 podman[192310]: 2025-10-02 19:08:21.082855307 +0000 UTC m=+0.235881460 container start c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:21 compute-0 podman[192310]: 2025-10-02 19:08:21.089198373 +0000 UTC m=+0.242224586 container attach c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:08:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956202487' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:21 compute-0 zen_cohen[192327]: 
Oct 02 19:08:21 compute-0 zen_cohen[192327]: {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "health": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "status": "HEALTH_OK",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "checks": {},
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "mutes": []
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "election_epoch": 5,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "quorum": [
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         0
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     ],
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "quorum_names": [
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "compute-0"
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     ],
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "quorum_age": 7,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "monmap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "epoch": 1,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "min_mon_release_name": "reef",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_mons": 1
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "osdmap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "epoch": 1,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_osds": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_up_osds": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "osd_up_since": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_in_osds": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "osd_in_since": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_remapped_pgs": 0
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "pgmap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "pgs_by_state": [],
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_pgs": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_pools": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_objects": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "data_bytes": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "bytes_used": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "bytes_avail": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "bytes_total": 0
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "fsmap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "epoch": 1,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "by_rank": [],
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "up:standby": 0
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "mgrmap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "available": false,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "num_standbys": 0,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "modules": [
Oct 02 19:08:21 compute-0 zen_cohen[192327]:             "iostat",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:             "nfs",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:             "restful"
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         ],
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "services": {}
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "servicemap": {
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "epoch": 1,
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:21 compute-0 zen_cohen[192327]:         "services": {}
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     },
Oct 02 19:08:21 compute-0 zen_cohen[192327]:     "progress_events": {}
Oct 02 19:08:21 compute-0 zen_cohen[192327]: }
Oct 02 19:08:21 compute-0 systemd[1]: libpod-c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae.scope: Deactivated successfully.
Oct 02 19:08:21 compute-0 podman[192310]: 2025-10-02 19:08:21.62761299 +0000 UTC m=+0.780639163 container died c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:08:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3956202487' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3426528d488d566b90cac5a007409d46aeb8652837c1bced84da175392c353e2-merged.mount: Deactivated successfully.
Oct 02 19:08:21 compute-0 podman[192310]: 2025-10-02 19:08:21.717025716 +0000 UTC m=+0.870051859 container remove c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae (image=quay.io/ceph/ceph:v18, name=zen_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:08:21 compute-0 systemd[1]: libpod-conmon-c582fc798802a07966bffae1db86585c8461700d0a3fdebc92324b5b8097bdae.scope: Deactivated successfully.
Oct 02 19:08:22 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'devicehealth'
Oct 02 19:08:22 compute-0 ceph-mgr[192222]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 19:08:22 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 19:08:22 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:22.663+0000 7f092430b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: Improvements in the case of bugs are welcome, but this is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]:   from numpy import show_config as show_numpy_config
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:23.211+0000 7f092430b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'influx'
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:23.447+0000 7f092430b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'insights'
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'iostat'
Oct 02 19:08:23 compute-0 podman[192366]: 2025-10-02 19:08:23.793841794 +0000 UTC m=+0.039546368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:23 compute-0 podman[192366]: 2025-10-02 19:08:23.819842897 +0000 UTC m=+0.065547491 container create 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:08:23 compute-0 systemd[1]: Started libpod-conmon-316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0.scope.
Oct 02 19:08:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5f275364b7eedd80c06b159973fff81ed746a8e84dc1cf52d036129565383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5f275364b7eedd80c06b159973fff81ed746a8e84dc1cf52d036129565383/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:23.952+0000 7f092430b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 19:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5f275364b7eedd80c06b159973fff81ed746a8e84dc1cf52d036129565383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:23 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'k8sevents'
Oct 02 19:08:23 compute-0 podman[192366]: 2025-10-02 19:08:23.992205459 +0000 UTC m=+0.237910113 container init 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:08:24 compute-0 podman[192366]: 2025-10-02 19:08:24.002650473 +0000 UTC m=+0.248355067 container start 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:08:24 compute-0 podman[192366]: 2025-10-02 19:08:24.009040471 +0000 UTC m=+0.254745035 container attach 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547162445' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:24 compute-0 festive_cori[192382]: 
Oct 02 19:08:24 compute-0 festive_cori[192382]: {
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "health": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "status": "HEALTH_OK",
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "checks": {},
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "mutes": []
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "election_epoch": 5,
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "quorum": [
Oct 02 19:08:24 compute-0 festive_cori[192382]:         0
Oct 02 19:08:24 compute-0 festive_cori[192382]:     ],
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "quorum_names": [
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "compute-0"
Oct 02 19:08:24 compute-0 festive_cori[192382]:     ],
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "quorum_age": 10,
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "monmap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "epoch": 1,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "min_mon_release_name": "reef",
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_mons": 1
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "osdmap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "epoch": 1,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_osds": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_up_osds": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "osd_up_since": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_in_osds": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "osd_in_since": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_remapped_pgs": 0
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "pgmap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "pgs_by_state": [],
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_pgs": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_pools": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_objects": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "data_bytes": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "bytes_used": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "bytes_avail": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "bytes_total": 0
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "fsmap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "epoch": 1,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "by_rank": [],
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "up:standby": 0
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "mgrmap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "available": false,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "num_standbys": 0,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "modules": [
Oct 02 19:08:24 compute-0 festive_cori[192382]:             "iostat",
Oct 02 19:08:24 compute-0 festive_cori[192382]:             "nfs",
Oct 02 19:08:24 compute-0 festive_cori[192382]:             "restful"
Oct 02 19:08:24 compute-0 festive_cori[192382]:         ],
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "services": {}
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "servicemap": {
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "epoch": 1,
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:24 compute-0 festive_cori[192382]:         "services": {}
Oct 02 19:08:24 compute-0 festive_cori[192382]:     },
Oct 02 19:08:24 compute-0 festive_cori[192382]:     "progress_events": {}
Oct 02 19:08:24 compute-0 festive_cori[192382]: }
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.434 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.436 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.437 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:08:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:08:24 compute-0 systemd[1]: libpod-316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0.scope: Deactivated successfully.
Oct 02 19:08:24 compute-0 podman[192366]: 2025-10-02 19:08:24.466363129 +0000 UTC m=+0.712067723 container died 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3547162445' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a5f275364b7eedd80c06b159973fff81ed746a8e84dc1cf52d036129565383-merged.mount: Deactivated successfully.
Oct 02 19:08:24 compute-0 podman[192366]: 2025-10-02 19:08:24.555694623 +0000 UTC m=+0.801399187 container remove 316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0 (image=quay.io/ceph/ceph:v18, name=festive_cori, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:08:24 compute-0 systemd[1]: libpod-conmon-316197cf56715d0e175e0931e176bbdb2b97377f23e6f4fa483eae8982f46cb0.scope: Deactivated successfully.
Oct 02 19:08:25 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'localpool'
Oct 02 19:08:25 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 19:08:26 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'mirroring'
Oct 02 19:08:26 compute-0 podman[192421]: 2025-10-02 19:08:26.695796353 +0000 UTC m=+0.093157145 container create f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:08:26 compute-0 systemd[1]: Started libpod-conmon-f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd.scope.
Oct 02 19:08:26 compute-0 podman[192421]: 2025-10-02 19:08:26.664685007 +0000 UTC m=+0.062045859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8bddf809046ccf39b9207db940519a5776b3b010a26c53e0e2d39fe4a249e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8bddf809046ccf39b9207db940519a5776b3b010a26c53e0e2d39fe4a249e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b8bddf809046ccf39b9207db940519a5776b3b010a26c53e0e2d39fe4a249e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:26 compute-0 podman[192421]: 2025-10-02 19:08:26.849112525 +0000 UTC m=+0.246473357 container init f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:26 compute-0 podman[192421]: 2025-10-02 19:08:26.864418226 +0000 UTC m=+0.261779028 container start f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:08:26 compute-0 podman[192421]: 2025-10-02 19:08:26.871605085 +0000 UTC m=+0.268965907 container attach f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:26 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'nfs'
Oct 02 19:08:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032713112' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]: 
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]: {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "health": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "status": "HEALTH_OK",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "checks": {},
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "mutes": []
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "election_epoch": 5,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "quorum": [
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         0
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     ],
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "quorum_names": [
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "compute-0"
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     ],
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "quorum_age": 13,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "monmap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "epoch": 1,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "min_mon_release_name": "reef",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_mons": 1
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "osdmap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "epoch": 1,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_osds": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_up_osds": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "osd_up_since": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_in_osds": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "osd_in_since": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_remapped_pgs": 0
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "pgmap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "pgs_by_state": [],
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_pgs": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_pools": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_objects": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "data_bytes": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "bytes_used": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "bytes_avail": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "bytes_total": 0
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "fsmap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "epoch": 1,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "by_rank": [],
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "up:standby": 0
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "mgrmap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "available": false,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "num_standbys": 0,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "modules": [
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:             "iostat",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:             "nfs",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:             "restful"
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         ],
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "services": {}
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "servicemap": {
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "epoch": 1,
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:         "services": {}
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     },
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]:     "progress_events": {}
Oct 02 19:08:27 compute-0 adoring_grothendieck[192437]: }
Oct 02 19:08:27 compute-0 systemd[1]: libpod-f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd.scope: Deactivated successfully.
Oct 02 19:08:27 compute-0 conmon[192437]: conmon f4e374ab4151387fff20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd.scope/container/memory.events
Oct 02 19:08:27 compute-0 podman[192421]: 2025-10-02 19:08:27.383911496 +0000 UTC m=+0.781272388 container died f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:08:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2032713112' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b8bddf809046ccf39b9207db940519a5776b3b010a26c53e0e2d39fe4a249e5-merged.mount: Deactivated successfully.
Oct 02 19:08:27 compute-0 podman[192421]: 2025-10-02 19:08:27.489719222 +0000 UTC m=+0.887080004 container remove f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:08:27 compute-0 systemd[1]: libpod-conmon-f4e374ab4151387fff20f02d37e20c6c2e1478c3af0046c4455263afacd2c2bd.scope: Deactivated successfully.
Oct 02 19:08:27 compute-0 ceph-mgr[192222]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 19:08:27 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'orchestrator'
Oct 02 19:08:27 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:27.600+0000 7f092430b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 19:08:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:28.212+0000 7f092430b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'osd_support'
Oct 02 19:08:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:28.459+0000 7f092430b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 19:08:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:28.677+0000 7f092430b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 19:08:28 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'progress'
Oct 02 19:08:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:28.937+0000 7f092430b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 19:08:29 compute-0 sshd-session[192475]: banner exchange: Connection from 195.178.110.15 port 57296: invalid format
Oct 02 19:08:29 compute-0 sshd-session[192476]: banner exchange: Connection from 195.178.110.15 port 57302: invalid format
Oct 02 19:08:29 compute-0 ceph-mgr[192222]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 19:08:29 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'prometheus'
Oct 02 19:08:29 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:29.190+0000 7f092430b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 19:08:29 compute-0 podman[192477]: 2025-10-02 19:08:29.581201177 +0000 UTC m=+0.060002006 container create 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:08:29 compute-0 systemd[1]: Started libpod-conmon-4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be.scope.
Oct 02 19:08:29 compute-0 podman[192477]: 2025-10-02 19:08:29.558489541 +0000 UTC m=+0.037290410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f35f6ca92d3dc2dce026d647c5f59819eb833f39c7983c2e68063d8d44fd26b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f35f6ca92d3dc2dce026d647c5f59819eb833f39c7983c2e68063d8d44fd26b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f35f6ca92d3dc2dce026d647c5f59819eb833f39c7983c2e68063d8d44fd26b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:29 compute-0 podman[192477]: 2025-10-02 19:08:29.712668686 +0000 UTC m=+0.191469595 container init 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:08:29 compute-0 podman[192477]: 2025-10-02 19:08:29.727939947 +0000 UTC m=+0.206740806 container start 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:29 compute-0 podman[192477]: 2025-10-02 19:08:29.739803168 +0000 UTC m=+0.218604047 container attach 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:08:29 compute-0 podman[157186]: time="2025-10-02T19:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:08:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23485 "" "Go-http-client/1.1"
Oct 02 19:08:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Oct 02 19:08:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524688262' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:30 compute-0 nervous_banzai[192494]: 
Oct 02 19:08:30 compute-0 nervous_banzai[192494]: {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "health": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "status": "HEALTH_OK",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "checks": {},
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "mutes": []
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "election_epoch": 5,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "quorum": [
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         0
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     ],
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "quorum_names": [
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "compute-0"
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     ],
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "quorum_age": 16,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "monmap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "epoch": 1,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "min_mon_release_name": "reef",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_mons": 1
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "osdmap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "epoch": 1,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_osds": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_up_osds": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "osd_up_since": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_in_osds": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "osd_in_since": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_remapped_pgs": 0
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "pgmap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "pgs_by_state": [],
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_pgs": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_pools": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_objects": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "data_bytes": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "bytes_used": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "bytes_avail": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "bytes_total": 0
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "fsmap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "epoch": 1,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "by_rank": [],
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "up:standby": 0
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "mgrmap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "available": false,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "num_standbys": 0,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "modules": [
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:             "iostat",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:             "nfs",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:             "restful"
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         ],
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "services": {}
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "servicemap": {
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "epoch": 1,
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:         "services": {}
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     },
Oct 02 19:08:30 compute-0 nervous_banzai[192494]:     "progress_events": {}
Oct 02 19:08:30 compute-0 nervous_banzai[192494]: }
Oct 02 19:08:30 compute-0 systemd[1]: libpod-4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be.scope: Deactivated successfully.
Oct 02 19:08:30 compute-0 podman[192477]: 2025-10-02 19:08:30.205783854 +0000 UTC m=+0.684584673 container died 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:08:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2524688262' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f35f6ca92d3dc2dce026d647c5f59819eb833f39c7983c2e68063d8d44fd26b-merged.mount: Deactivated successfully.
Oct 02 19:08:30 compute-0 podman[192477]: 2025-10-02 19:08:30.273557762 +0000 UTC m=+0.752358581 container remove 4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be (image=quay.io/ceph/ceph:v18, name=nervous_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:30 compute-0 ceph-mgr[192222]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 19:08:30 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rbd_support'
Oct 02 19:08:30 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:30.307+0000 7f092430b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 19:08:30 compute-0 systemd[1]: libpod-conmon-4e368a32df0983d9bcd2d049bf3b21b476c8b5f8411379028b9d4d05833dc2be.scope: Deactivated successfully.
Oct 02 19:08:30 compute-0 ceph-mgr[192222]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 19:08:30 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'restful'
Oct 02 19:08:30 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:30.638+0000 7f092430b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: ERROR   19:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: ERROR   19:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: ERROR   19:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: ERROR   19:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:08:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:08:31 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rgw'
Oct 02 19:08:32 compute-0 ceph-mgr[192222]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 19:08:32 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:32.132+0000 7f092430b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 19:08:32 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rook'
Oct 02 19:08:32 compute-0 podman[192530]: 2025-10-02 19:08:32.392678181 +0000 UTC m=+0.069627258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:33 compute-0 podman[192530]: 2025-10-02 19:08:33.445427572 +0000 UTC m=+1.122376639 container create 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:33 compute-0 systemd[1]: Started libpod-conmon-66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1.scope.
Oct 02 19:08:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1147d48e820851402c875d5ee3e19828cd2ba31154d62d5c6f03de0d30f99e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1147d48e820851402c875d5ee3e19828cd2ba31154d62d5c6f03de0d30f99e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1147d48e820851402c875d5ee3e19828cd2ba31154d62d5c6f03de0d30f99e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:33 compute-0 podman[192530]: 2025-10-02 19:08:33.590078207 +0000 UTC m=+1.267027294 container init 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:08:33 compute-0 podman[192530]: 2025-10-02 19:08:33.60924539 +0000 UTC m=+1.286194427 container start 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:08:33 compute-0 podman[192530]: 2025-10-02 19:08:33.614559219 +0000 UTC m=+1.291508276 container attach 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:08:33 compute-0 podman[192547]: 2025-10-02 19:08:33.64777045 +0000 UTC m=+0.131715296 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:08:33 compute-0 podman[192544]: 2025-10-02 19:08:33.660354531 +0000 UTC m=+0.139794619 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true)
Oct 02 19:08:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218674515' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:34 compute-0 magical_jang[192559]: 
Oct 02 19:08:34 compute-0 magical_jang[192559]: {
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "health": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "status": "HEALTH_OK",
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "checks": {},
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "mutes": []
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "election_epoch": 5,
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "quorum": [
Oct 02 19:08:34 compute-0 magical_jang[192559]:         0
Oct 02 19:08:34 compute-0 magical_jang[192559]:     ],
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "quorum_names": [
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "compute-0"
Oct 02 19:08:34 compute-0 magical_jang[192559]:     ],
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "quorum_age": 19,
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "monmap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "epoch": 1,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "min_mon_release_name": "reef",
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_mons": 1
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "osdmap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "epoch": 1,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_osds": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_up_osds": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "osd_up_since": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_in_osds": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "osd_in_since": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_remapped_pgs": 0
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "pgmap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "pgs_by_state": [],
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_pgs": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_pools": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_objects": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "data_bytes": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "bytes_used": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "bytes_avail": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "bytes_total": 0
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "fsmap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "epoch": 1,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "by_rank": [],
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "up:standby": 0
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "mgrmap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "available": false,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "num_standbys": 0,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "modules": [
Oct 02 19:08:34 compute-0 magical_jang[192559]:             "iostat",
Oct 02 19:08:34 compute-0 magical_jang[192559]:             "nfs",
Oct 02 19:08:34 compute-0 magical_jang[192559]:             "restful"
Oct 02 19:08:34 compute-0 magical_jang[192559]:         ],
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "services": {}
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "servicemap": {
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "epoch": 1,
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:34 compute-0 magical_jang[192559]:         "services": {}
Oct 02 19:08:34 compute-0 magical_jang[192559]:     },
Oct 02 19:08:34 compute-0 magical_jang[192559]:     "progress_events": {}
Oct 02 19:08:34 compute-0 magical_jang[192559]: }
Oct 02 19:08:34 compute-0 systemd[1]: libpod-66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1.scope: Deactivated successfully.
Oct 02 19:08:34 compute-0 podman[192530]: 2025-10-02 19:08:34.065872129 +0000 UTC m=+1.742821206 container died 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/218674515' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1147d48e820851402c875d5ee3e19828cd2ba31154d62d5c6f03de0d30f99e0-merged.mount: Deactivated successfully.
Oct 02 19:08:34 compute-0 podman[192530]: 2025-10-02 19:08:34.170056573 +0000 UTC m=+1.847005620 container remove 66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1 (image=quay.io/ceph/ceph:v18, name=magical_jang, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:34 compute-0 systemd[1]: libpod-conmon-66fe47e942e6be46c737844db53b70cceb97228cbb5b7a13c76cf17d38e1cab1.scope: Deactivated successfully.
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:34.250+0000 7f092430b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'selftest'
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:34.487+0000 7f092430b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'snap_schedule'
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'stats'
Oct 02 19:08:34 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:34.725+0000 7f092430b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 19:08:34 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'status'
Oct 02 19:08:35 compute-0 ceph-mgr[192222]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 19:08:35 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'telegraf'
Oct 02 19:08:35 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:35.241+0000 7f092430b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 19:08:35 compute-0 ceph-mgr[192222]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 19:08:35 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'telemetry'
Oct 02 19:08:35 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:35.479+0000 7f092430b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 19:08:36 compute-0 ceph-mgr[192222]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 19:08:36 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:36.070+0000 7f092430b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 19:08:36 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.310212474 +0000 UTC m=+0.086183792 container create bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:08:36 compute-0 systemd[1]: Started libpod-conmon-bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598.scope.
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.280684029 +0000 UTC m=+0.056655407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbca6822d257388eca0eaac6954a9f7343200d978c05ab798f3d59bc4d52932a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbca6822d257388eca0eaac6954a9f7343200d978c05ab798f3d59bc4d52932a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbca6822d257388eca0eaac6954a9f7343200d978c05ab798f3d59bc4d52932a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.458357881 +0000 UTC m=+0.234329269 container init bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.491624814 +0000 UTC m=+0.267596102 container start bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.500933278 +0000 UTC m=+0.276904616 container attach bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:36 compute-0 ceph-mgr[192222]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:36 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'volumes'
Oct 02 19:08:36 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:36.850+0000 7f092430b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297954129' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:36 compute-0 upbeat_carver[192644]: 
Oct 02 19:08:36 compute-0 upbeat_carver[192644]: {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "health": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "status": "HEALTH_OK",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "checks": {},
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "mutes": []
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "election_epoch": 5,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "quorum": [
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         0
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     ],
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "quorum_names": [
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "compute-0"
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     ],
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "quorum_age": 22,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "monmap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "epoch": 1,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "min_mon_release_name": "reef",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_mons": 1
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "osdmap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "epoch": 1,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_osds": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_up_osds": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "osd_up_since": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_in_osds": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "osd_in_since": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_remapped_pgs": 0
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "pgmap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "pgs_by_state": [],
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_pgs": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_pools": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_objects": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "data_bytes": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "bytes_used": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "bytes_avail": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "bytes_total": 0
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "fsmap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "epoch": 1,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "by_rank": [],
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "up:standby": 0
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "mgrmap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "available": false,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "num_standbys": 0,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "modules": [
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:             "iostat",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:             "nfs",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:             "restful"
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         ],
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "services": {}
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "servicemap": {
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "epoch": 1,
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:         "services": {}
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     },
Oct 02 19:08:36 compute-0 upbeat_carver[192644]:     "progress_events": {}
Oct 02 19:08:36 compute-0 upbeat_carver[192644]: }
Oct 02 19:08:36 compute-0 systemd[1]: libpod-bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598.scope: Deactivated successfully.
Oct 02 19:08:36 compute-0 podman[192628]: 2025-10-02 19:08:36.963207277 +0000 UTC m=+0.739178595 container died bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3297954129' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbca6822d257388eca0eaac6954a9f7343200d978c05ab798f3d59bc4d52932a-merged.mount: Deactivated successfully.
Oct 02 19:08:37 compute-0 podman[192628]: 2025-10-02 19:08:37.051548535 +0000 UTC m=+0.827519863 container remove bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598 (image=quay.io/ceph/ceph:v18, name=upbeat_carver, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:37 compute-0 systemd[1]: libpod-conmon-bb4fa03fd8b368475bc9de4d305413a2c38b76c746c994fa0a212504ded9b598.scope: Deactivated successfully.
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'zabbix'
Oct 02 19:08:37 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:37.610+0000 7f092430b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 19:08:37 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:37.837+0000 7f092430b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: ms_deliver_dispatch: unhandled message 0x562a183791e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.uktbkz
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr handle_mgr_map Activating!
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr handle_mgr_map I am now activating
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.uktbkz(active, starting, since 0.0326024s)
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: balancer
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer INFO root] Starting
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Manager daemon compute-0.uktbkz is now available
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:08:37
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [balancer INFO root] No pools available
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: crash
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: devicehealth
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Starting
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: iostat
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: nfs
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: orchestrator
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: pg_autoscaler
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: progress
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [progress INFO root] Loading...
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [progress INFO root] No stored events to load
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [progress INFO root] Loaded [] historic events
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] recovery thread starting
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] starting setup
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: rbd_support
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: restful
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: status
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [restful WARNING root] server not running: no certificate configured
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: telemetry
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] PerfHandler: starting
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TaskHandler: starting
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"} v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 19:08:37 compute-0 ceph-mgr[192222]: [rbd_support INFO root] setup complete
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 02 19:08:37 compute-0 ceph-mon[191910]: Activating manager daemon compute-0.uktbkz
Oct 02 19:08:37 compute-0 ceph-mon[191910]: mgrmap e2: compute-0.uktbkz(active, starting, since 0.0326024s)
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: Manager daemon compute-0.uktbkz is now available
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"}]: dispatch
Oct 02 19:08:37 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 02 19:08:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:38 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: volumes
Oct 02 19:08:38 compute-0 podman[192759]: 2025-10-02 19:08:38.694237503 +0000 UTC m=+0.119595248 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350)
Oct 02 19:08:38 compute-0 podman[192760]: 2025-10-02 19:08:38.753802236 +0000 UTC m=+0.178880994 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 19:08:38 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.uktbkz(active, since 1.05494s)
Oct 02 19:08:39 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:39 compute-0 ceph-mon[191910]: from='mgr.14102 192.168.122.100:0/928438284' entity='mgr.compute-0.uktbkz' 
Oct 02 19:08:39 compute-0 ceph-mon[191910]: mgrmap e3: compute-0.uktbkz(active, since 1.05494s)
Oct 02 19:08:39 compute-0 podman[192806]: 2025-10-02 19:08:39.204820009 +0000 UTC m=+0.095592128 container create 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:08:39 compute-0 podman[192806]: 2025-10-02 19:08:39.162321135 +0000 UTC m=+0.053093304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:39 compute-0 systemd[1]: Started libpod-conmon-9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01.scope.
Oct 02 19:08:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f866b7ec3adf6e40e0e88edf711eaa887287cb69d0a43043dbea7025c654c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f866b7ec3adf6e40e0e88edf711eaa887287cb69d0a43043dbea7025c654c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f866b7ec3adf6e40e0e88edf711eaa887287cb69d0a43043dbea7025c654c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:39 compute-0 podman[192806]: 2025-10-02 19:08:39.400117323 +0000 UTC m=+0.290889502 container init 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:08:39 compute-0 podman[192806]: 2025-10-02 19:08:39.417709055 +0000 UTC m=+0.308481174 container start 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:08:39 compute-0 podman[192806]: 2025-10-02 19:08:39.425126459 +0000 UTC m=+0.315898648 container attach 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:08:39 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:08:40 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.uktbkz(active, since 2s)
Oct 02 19:08:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 19:08:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938023524' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:40 compute-0 boring_margulis[192822]: 
Oct 02 19:08:40 compute-0 boring_margulis[192822]: {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "health": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "status": "HEALTH_OK",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "checks": {},
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "mutes": []
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "election_epoch": 5,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "quorum": [
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         0
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     ],
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "quorum_names": [
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "compute-0"
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     ],
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "quorum_age": 26,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "monmap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "epoch": 1,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "min_mon_release_name": "reef",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_mons": 1
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "osdmap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "epoch": 1,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_osds": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_up_osds": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "osd_up_since": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_in_osds": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "osd_in_since": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_remapped_pgs": 0
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "pgmap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "pgs_by_state": [],
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_pgs": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_pools": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_objects": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "data_bytes": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "bytes_used": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "bytes_avail": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "bytes_total": 0
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "fsmap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "epoch": 1,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "by_rank": [],
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "up:standby": 0
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "mgrmap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "available": true,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "num_standbys": 0,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "modules": [
Oct 02 19:08:40 compute-0 boring_margulis[192822]:             "iostat",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:             "nfs",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:             "restful"
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         ],
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "services": {}
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "servicemap": {
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "epoch": 1,
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "modified": "2025-10-02T19:08:09.836492+0000",
Oct 02 19:08:40 compute-0 boring_margulis[192822]:         "services": {}
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     },
Oct 02 19:08:40 compute-0 boring_margulis[192822]:     "progress_events": {}
Oct 02 19:08:40 compute-0 boring_margulis[192822]: }
Oct 02 19:08:40 compute-0 systemd[1]: libpod-9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01.scope: Deactivated successfully.
Oct 02 19:08:40 compute-0 podman[192806]: 2025-10-02 19:08:40.156984211 +0000 UTC m=+1.047756330 container died 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0f866b7ec3adf6e40e0e88edf711eaa887287cb69d0a43043dbea7025c654c4-merged.mount: Deactivated successfully.
Oct 02 19:08:40 compute-0 podman[192806]: 2025-10-02 19:08:40.239763833 +0000 UTC m=+1.130535932 container remove 9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01 (image=quay.io/ceph/ceph:v18, name=boring_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:08:40 compute-0 systemd[1]: libpod-conmon-9738b11f45e7ccd6770d170f6ca16e57a235312f5b22153c01c3fdac9df53c01.scope: Deactivated successfully.
Oct 02 19:08:40 compute-0 podman[192859]: 2025-10-02 19:08:40.374252361 +0000 UTC m=+0.096372369 container create 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:40 compute-0 podman[192859]: 2025-10-02 19:08:40.332319741 +0000 UTC m=+0.054439779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:40 compute-0 systemd[1]: Started libpod-conmon-40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a.scope.
Oct 02 19:08:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ae67fb0d55023650e9fbddfc757bc223e22f005ea1f49e73325ac22975f9ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ae67fb0d55023650e9fbddfc757bc223e22f005ea1f49e73325ac22975f9ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ae67fb0d55023650e9fbddfc757bc223e22f005ea1f49e73325ac22975f9ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ae67fb0d55023650e9fbddfc757bc223e22f005ea1f49e73325ac22975f9ca/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:40 compute-0 podman[192859]: 2025-10-02 19:08:40.544722054 +0000 UTC m=+0.266842072 container init 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:08:40 compute-0 podman[192859]: 2025-10-02 19:08:40.570006958 +0000 UTC m=+0.292126936 container start 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:08:40 compute-0 podman[192859]: 2025-10-02 19:08:40.575880822 +0000 UTC m=+0.298000850 container attach 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:08:41 compute-0 ceph-mon[191910]: mgrmap e4: compute-0.uktbkz(active, since 2s)
Oct 02 19:08:41 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1938023524' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 19:08:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 19:08:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1599555361' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:08:41 compute-0 systemd[1]: libpod-40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a.scope: Deactivated successfully.
Oct 02 19:08:41 compute-0 conmon[192875]: conmon 40146192c8348be6347f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a.scope/container/memory.events
Oct 02 19:08:41 compute-0 podman[192859]: 2025-10-02 19:08:41.143467782 +0000 UTC m=+0.865587790 container died 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:08:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-72ae67fb0d55023650e9fbddfc757bc223e22f005ea1f49e73325ac22975f9ca-merged.mount: Deactivated successfully.
Oct 02 19:08:41 compute-0 podman[192859]: 2025-10-02 19:08:41.242979723 +0000 UTC m=+0.965099701 container remove 40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a (image=quay.io/ceph/ceph:v18, name=romantic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 02 19:08:41 compute-0 systemd[1]: libpod-conmon-40146192c8348be6347fa9c547b67db82e51235c07fa8ad1e68d9faa301a5e2a.scope: Deactivated successfully.
Oct 02 19:08:41 compute-0 podman[192912]: 2025-10-02 19:08:41.382111414 +0000 UTC m=+0.091106222 container create 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:08:41 compute-0 podman[192912]: 2025-10-02 19:08:41.342425222 +0000 UTC m=+0.051420040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:41 compute-0 systemd[1]: Started libpod-conmon-8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940.scope.
Oct 02 19:08:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a66e4bc03427141f94e707d60b753445251a1ab2298d1d7081d7ec2c6fa66e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a66e4bc03427141f94e707d60b753445251a1ab2298d1d7081d7ec2c6fa66e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a66e4bc03427141f94e707d60b753445251a1ab2298d1d7081d7ec2c6fa66e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:41 compute-0 podman[192912]: 2025-10-02 19:08:41.574541222 +0000 UTC m=+0.283536070 container init 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:08:41 compute-0 podman[192912]: 2025-10-02 19:08:41.593588352 +0000 UTC m=+0.302583150 container start 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:41 compute-0 podman[192912]: 2025-10-02 19:08:41.601237283 +0000 UTC m=+0.310232091 container attach 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:08:41 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:08:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1599555361' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:08:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 02 19:08:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1257696984' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 19:08:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1257696984' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 02 19:08:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1257696984' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  1: '-n'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  2: 'mgr.compute-0.uktbkz'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  3: '-f'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  4: '--setuser'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  5: 'ceph'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  6: '--setgroup'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  7: 'ceph'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  8: '--default-log-to-file=false'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  9: '--default-log-to-journald=true'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr respawn  exe_path /proc/self/exe
Oct 02 19:08:43 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.uktbkz(active, since 5s)
Oct 02 19:08:43 compute-0 systemd[1]: libpod-8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940.scope: Deactivated successfully.
Oct 02 19:08:43 compute-0 podman[192912]: 2025-10-02 19:08:43.138963808 +0000 UTC m=+1.847958606 container died 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:08:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a66e4bc03427141f94e707d60b753445251a1ab2298d1d7081d7ec2c6fa66e1-merged.mount: Deactivated successfully.
Oct 02 19:08:43 compute-0 podman[192912]: 2025-10-02 19:08:43.234631408 +0000 UTC m=+1.943626206 container remove 8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940 (image=quay.io/ceph/ceph:v18, name=romantic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:08:43 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: ignoring --setuser ceph since I am not root
Oct 02 19:08:43 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: ignoring --setgroup ceph since I am not root
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: pidfile_write: ignore empty --pid-file
Oct 02 19:08:43 compute-0 systemd[1]: libpod-conmon-8afd03f2e2117e0dd6b80687058f9753f3424be0cc6b3385e21971c214636940.scope: Deactivated successfully.
Oct 02 19:08:43 compute-0 podman[192975]: 2025-10-02 19:08:43.370119513 +0000 UTC m=+0.090565327 container create f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'alerts'
Oct 02 19:08:43 compute-0 podman[192975]: 2025-10-02 19:08:43.343470684 +0000 UTC m=+0.063916588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:43 compute-0 systemd[1]: Started libpod-conmon-f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b.scope.
Oct 02 19:08:43 compute-0 podman[192993]: 2025-10-02 19:08:43.452066663 +0000 UTC m=+0.137912279 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:08:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b2fe6530b1c0e07fd21b3bf3bef4f4b91efda298270ec59ce1ecb6b123569f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b2fe6530b1c0e07fd21b3bf3bef4f4b91efda298270ec59ce1ecb6b123569f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b2fe6530b1c0e07fd21b3bf3bef4f4b91efda298270ec59ce1ecb6b123569f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:43 compute-0 podman[192975]: 2025-10-02 19:08:43.50720075 +0000 UTC m=+0.227646574 container init f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:08:43 compute-0 podman[192975]: 2025-10-02 19:08:43.527093502 +0000 UTC m=+0.247539336 container start f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:08:43 compute-0 podman[192975]: 2025-10-02 19:08:43.534212098 +0000 UTC m=+0.254657932 container attach f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'balancer'
Oct 02 19:08:43 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:43.690+0000 7f5d19e0a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:08:43 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:43.982+0000 7f5d19e0a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:08:43 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'cephadm'
Oct 02 19:08:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1257696984' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 02 19:08:44 compute-0 ceph-mon[191910]: mgrmap e5: compute-0.uktbkz(active, since 5s)
Oct 02 19:08:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 19:08:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132158928' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]: {
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]:     "epoch": 5,
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]:     "available": true,
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]:     "active_name": "compute-0.uktbkz",
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]:     "num_standby": 0
Oct 02 19:08:44 compute-0 awesome_bhaskara[193027]: }
Oct 02 19:08:44 compute-0 systemd[1]: libpod-f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b.scope: Deactivated successfully.
Oct 02 19:08:44 compute-0 podman[192975]: 2025-10-02 19:08:44.180086894 +0000 UTC m=+0.900532728 container died f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b2fe6530b1c0e07fd21b3bf3bef4f4b91efda298270ec59ce1ecb6b123569f-merged.mount: Deactivated successfully.
Oct 02 19:08:44 compute-0 podman[192975]: 2025-10-02 19:08:44.275324363 +0000 UTC m=+0.995770167 container remove f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b (image=quay.io/ceph/ceph:v18, name=awesome_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:08:44 compute-0 systemd[1]: libpod-conmon-f0494907ceb262d015806eda1d1b2738f0da7310177535fad2ea16e33ba9250b.scope: Deactivated successfully.
Oct 02 19:08:44 compute-0 podman[193054]: 2025-10-02 19:08:44.358006952 +0000 UTC m=+0.134011847 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm)
Oct 02 19:08:44 compute-0 podman[193075]: 2025-10-02 19:08:44.398766352 +0000 UTC m=+0.080665598 container create 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:08:44 compute-0 podman[193075]: 2025-10-02 19:08:44.371743303 +0000 UTC m=+0.053642559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:08:44 compute-0 systemd[1]: Started libpod-conmon-548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3.scope.
Oct 02 19:08:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684e10ffd0e69356210a6f3ae56131a286864e9aef16db22dca34263e522a901/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684e10ffd0e69356210a6f3ae56131a286864e9aef16db22dca34263e522a901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684e10ffd0e69356210a6f3ae56131a286864e9aef16db22dca34263e522a901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:44 compute-0 podman[193075]: 2025-10-02 19:08:44.549962449 +0000 UTC m=+0.231861675 container init 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:08:44 compute-0 podman[193075]: 2025-10-02 19:08:44.580725666 +0000 UTC m=+0.262624882 container start 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:08:44 compute-0 podman[193075]: 2025-10-02 19:08:44.586482487 +0000 UTC m=+0.268381733 container attach 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:08:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2132158928' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 19:08:46 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'crash'
Oct 02 19:08:46 compute-0 ceph-mgr[192222]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:08:46 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'dashboard'
Oct 02 19:08:46 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:46.269+0000 7f5d19e0a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:08:47 compute-0 podman[193133]: 2025-10-02 19:08:47.712034251 +0000 UTC m=+0.130757592 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Oct 02 19:08:47 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'devicehealth'
Oct 02 19:08:47 compute-0 ceph-mgr[192222]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 19:08:47 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 19:08:47 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:47.965+0000 7f5d19e0a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 19:08:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 19:08:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 19:08:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]:   from numpy import show_config as show_numpy_config
Oct 02 19:08:48 compute-0 ceph-mgr[192222]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 19:08:48 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'influx'
Oct 02 19:08:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:48.489+0000 7f5d19e0a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 19:08:48 compute-0 ceph-mgr[192222]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 19:08:48 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'insights'
Oct 02 19:08:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:48.729+0000 7f5d19e0a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 19:08:48 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'iostat'
Oct 02 19:08:49 compute-0 ceph-mgr[192222]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 19:08:49 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'k8sevents'
Oct 02 19:08:49 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:49.230+0000 7f5d19e0a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 19:08:50 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'localpool'
Oct 02 19:08:51 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 19:08:51 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'mirroring'
Oct 02 19:08:52 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'nfs'
Oct 02 19:08:52 compute-0 ceph-mgr[192222]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 19:08:52 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'orchestrator'
Oct 02 19:08:52 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:52.825+0000 7f5d19e0a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 19:08:53 compute-0 ceph-mgr[192222]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:53 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 19:08:53 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:53.516+0000 7f5d19e0a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 19:08:53 compute-0 ceph-mgr[192222]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 19:08:53 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'osd_support'
Oct 02 19:08:53 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:53.781+0000 7f5d19e0a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 19:08:54 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:54.016+0000 7f5d19e0a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'progress'
Oct 02 19:08:54 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:54.292+0000 7f5d19e0a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 19:08:54 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'prometheus'
Oct 02 19:08:54 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:54.528+0000 7f5d19e0a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 19:08:55 compute-0 ceph-mgr[192222]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 19:08:55 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rbd_support'
Oct 02 19:08:55 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:55.592+0000 7f5d19e0a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 19:08:55 compute-0 ceph-mgr[192222]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 19:08:55 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'restful'
Oct 02 19:08:55 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:55.898+0000 7f5d19e0a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 19:08:56 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rgw'
Oct 02 19:08:57 compute-0 ceph-mgr[192222]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 19:08:57 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'rook'
Oct 02 19:08:57 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:57.335+0000 7f5d19e0a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 19:08:59 compute-0 ceph-mgr[192222]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 19:08:59 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:59.508+0000 7f5d19e0a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 19:08:59 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'selftest'
Oct 02 19:08:59 compute-0 podman[157186]: time="2025-10-02T19:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:08:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23486 "" "Go-http-client/1.1"
Oct 02 19:08:59 compute-0 ceph-mgr[192222]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 19:08:59 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:08:59.771+0000 7f5d19e0a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 19:08:59 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'snap_schedule'
Oct 02 19:08:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4362 "" "Go-http-client/1.1"
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:00.030+0000 7f5d19e0a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'stats'
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'status'
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:00.616+0000 7f5d19e0a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'telegraf'
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:00.860+0000 7f5d19e0a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 19:09:00 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'telemetry'
Oct 02 19:09:01 compute-0 openstack_network_exporter[159337]: ERROR   19:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:09:01 compute-0 openstack_network_exporter[159337]: ERROR   19:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:09:01 compute-0 openstack_network_exporter[159337]: ERROR   19:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:09:01 compute-0 openstack_network_exporter[159337]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:09:01 compute-0 openstack_network_exporter[159337]: ERROR   19:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:09:01 compute-0 ceph-mgr[192222]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 19:09:01 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 19:09:01 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:01.502+0000 7f5d19e0a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 19:09:02 compute-0 ceph-mgr[192222]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 19:09:02 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:02.262+0000 7f5d19e0a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 19:09:02 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'volumes'
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr[py] Loading python module 'zabbix'
Oct 02 19:09:03 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:03.006+0000 7f5d19e0a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 19:09:03 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T19:09:03.246+0000 7f5d19e0a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Active manager daemon compute-0.uktbkz restarted
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.uktbkz
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: ms_deliver_dispatch: unhandled message 0x560c2cdef1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr handle_mgr_map Activating!
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr handle_mgr_map I am now activating
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.uktbkz(active, starting, since 0.110514s)
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mon[191910]: Active manager daemon compute-0.uktbkz restarted
Oct 02 19:09:03 compute-0 ceph-mon[191910]: Activating manager daemon compute-0.uktbkz
Oct 02 19:09:03 compute-0 ceph-mon[191910]: osdmap e2: 0 total, 0 up, 0 in
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mgrmap e6: compute-0.uktbkz(active, starting, since 0.110514s)
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e1 all = 1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: balancer
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Starting
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:09:03
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] No pools available
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Manager daemon compute-0.uktbkz is now available
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: cephadm
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: crash
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: devicehealth
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: iostat
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: nfs
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: orchestrator
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: pg_autoscaler
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Starting
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: progress
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [progress INFO root] Loading...
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [progress INFO root] No stored events to load
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [progress INFO root] Loaded [] historic events
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [progress INFO root] Loaded OSDMap, ready.
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] recovery thread starting
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] starting setup
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: rbd_support
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: restful
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: status
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [restful INFO root] server_addr: :: server_port: 8003
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [restful WARNING root] server not running: no certificate configured
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: telemetry
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] PerfHandler: starting
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TaskHandler: starting
Oct 02 19:09:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"} v 0) v1
Oct 02 19:09:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"}]: dispatch
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] setup complete
Oct 02 19:09:03 compute-0 ceph-mgr[192222]: mgr load Constructed class from module: volumes
Oct 02 19:09:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 02 19:09:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924471 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 02 19:09:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.uktbkz(active, since 1.16341s)
Oct 02 19:09:04 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.uktbkz", "id": "compute-0.uktbkz"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: Manager daemon compute-0.uktbkz is now available
Oct 02 19:09:04 compute-0 ceph-mon[191910]: Found migration_current of "None". Setting to last migration.
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/mirror_snapshot_schedule"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.uktbkz/trash_purge_schedule"}]: dispatch
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:04 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 19:09:04 compute-0 hardcore_mahavira[193098]: {
Oct 02 19:09:04 compute-0 hardcore_mahavira[193098]:     "mgrmap_epoch": 7,
Oct 02 19:09:04 compute-0 hardcore_mahavira[193098]:     "initialized": true
Oct 02 19:09:04 compute-0 hardcore_mahavira[193098]: }
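(The short-lived `hardcore_mahavira` container is the bootstrap shelling out `ceph mgr_status` and waiting for `"initialized": true` before proceeding; the JSON above is that command's output. A minimal sketch of the same readiness poll, assuming the `ceph` CLI is on PATH and returns JSON as shown.)

import json, subprocess, time

def wait_for_mgr(timeout=60):
    # Poll `ceph mgr_status` until the active mgr reports initialized.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(["ceph", "mgr_status"], capture_output=True,
                             text=True, check=True).stdout
        status = json.loads(out)
        if status.get("initialized"):
            return status["mgrmap_epoch"]
        time.sleep(2)
    raise TimeoutError("mgr never reported initialized")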
Oct 02 19:09:04 compute-0 systemd[1]: libpod-548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3.scope: Deactivated successfully.
Oct 02 19:09:04 compute-0 podman[193075]: 2025-10-02 19:09:04.487950398 +0000 UTC m=+20.169849634 container died 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-684e10ffd0e69356210a6f3ae56131a286864e9aef16db22dca34263e522a901-merged.mount: Deactivated successfully.
Oct 02 19:09:04 compute-0 podman[193075]: 2025-10-02 19:09:04.585227133 +0000 UTC m=+20.267126359 container remove 548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3 (image=quay.io/ceph/ceph:v18, name=hardcore_mahavira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:09:04 compute-0 systemd[1]: libpod-conmon-548328eff8d05b2ef1d3cd66be18d5595f6aa573c1e31da01e47bc3936f34ca3.scope: Deactivated successfully.
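(The sequence image pull, container create, init, start, attach, died, remove, followed by the libpod and conmon scopes deactivating, is the complete lifecycle of a single `podman run --rm` invocation: each cephadm helper command runs in a throwaway quay.io/ceph/ceph:v18 container. A minimal reproduction of the pattern; the image tag is from the log, the command inside is purely illustrative.)

import subprocess

IMAGE = "quay.io/ceph/ceph:v18"

# --rm removes the container on exit, producing exactly the
# create/init/start/attach/died/remove sequence seen in this journal.
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "ceph", "--version"],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())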
Oct 02 19:09:04 compute-0 podman[193268]: 2025-10-02 19:09:04.654866653 +0000 UTC m=+0.126532975 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:09:04 compute-0 podman[193295]: 2025-10-02 19:09:04.686272518 +0000 UTC m=+0.068564182 container create 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:09:04 compute-0 podman[193266]: 2025-10-02 19:09:04.692191093 +0000 UTC m=+0.156247215 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:09:04 compute-0 podman[193295]: 2025-10-02 19:09:04.658461857 +0000 UTC m=+0.040753501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:04 compute-0 systemd[1]: Started libpod-conmon-0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a.scope.
Oct 02 19:09:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb71559138ef4215e051270524ab7de74a58a5fd9b4052f518d5b904623de21e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb71559138ef4215e051270524ab7de74a58a5fd9b4052f518d5b904623de21e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb71559138ef4215e051270524ab7de74a58a5fd9b4052f518d5b904623de21e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
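(The recurring kernel message means the xfs filesystem backing the overlay mounts lacks the `bigtime` feature, so inode timestamps saturate at 2038-01-19 (0x7fffffff); it is informational, not an error. A quick check for the feature, as a sketch; the path is an assumption and should be the filesystem's actual mount point.)

import subprocess

# `xfs_info` prints the mkfs-time feature flags; bigtime=1 means the
# filesystem supports timestamps beyond 2038.
info = subprocess.run(["xfs_info", "/var/lib/containers"],
                      capture_output=True, text=True, check=True).stdout
print("bigtime enabled" if "bigtime=1" in info else "bigtime disabled")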
Oct 02 19:09:04 compute-0 podman[193295]: 2025-10-02 19:09:04.832771507 +0000 UTC m=+0.215063141 container init 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:04 compute-0 podman[193295]: 2025-10-02 19:09:04.84813485 +0000 UTC m=+0.230426474 container start 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:09:04 compute-0 podman[193295]: 2025-10-02 19:09:04.852645768 +0000 UTC m=+0.234937402 container attach 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: [cephadm INFO cherrypy.error] [02/Oct/2025:19:09:05] ENGINE Bus STARTING
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : [02/Oct/2025:19:09:05] ENGINE Bus STARTING
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: [cephadm INFO cherrypy.error] [02/Oct/2025:19:09:05] ENGINE Serving on https://192.168.122.100:7150
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : [02/Oct/2025:19:09:05] ENGINE Serving on https://192.168.122.100:7150
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: [cephadm INFO cherrypy.error] [02/Oct/2025:19:09:05] ENGINE Client ('192.168.122.100', 55876) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : [02/Oct/2025:19:09:05] ENGINE Client ('192.168.122.100', 55876) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
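(The "Client ... lost" pair is CherryPy, which backs the cephadm endpoint on 7150, reporting a peer that opened a TCP connection and dropped it mid-TLS-handshake; liveness probes and port checks commonly do this, and it is logged at INFO rather than as an error. A minimal sketch that provokes the same message; host and port are taken from the log.)

import socket

# Open a TCP connection to the cephadm HTTPS endpoint and close it
# without sending a ClientHello; the server sees EOF during the handshake.
with socket.create_connection(("192.168.122.100", 7150), timeout=5) as s:
    pass  # immediate close, no TLS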
Oct 02 19:09:05 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.uktbkz(active, since 2s)
Oct 02 19:09:05 compute-0 ceph-mon[191910]: mgrmap e7: compute-0.uktbkz(active, since 1.16341s)
Oct 02 19:09:05 compute-0 ceph-mon[191910]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 02 19:09:05 compute-0 ceph-mon[191910]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: [cephadm INFO cherrypy.error] [02/Oct/2025:19:09:05] ENGINE Serving on http://192.168.122.100:8765
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : [02/Oct/2025:19:09:05] ENGINE Serving on http://192.168.122.100:8765
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: [cephadm INFO cherrypy.error] [02/Oct/2025:19:09:05] ENGINE Bus STARTED
Oct 02 19:09:05 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : [02/Oct/2025:19:09:05] ENGINE Bus STARTED
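(With the bus STARTED, the cephadm module is serving two listeners: HTTPS on 7150, whose cert and key were stored by the mgr/cephadm/cephadm_agent/root config-key writes above, and plain HTTP on 8765. A minimal reachability check for the HTTP listener, as a sketch; the URL path is illustrative only and any HTTP response, even an error code, proves the server is up.)

import urllib.request, urllib.error

try:
    with urllib.request.urlopen("http://192.168.122.100:8765/", timeout=5) as r:
        print("server up, HTTP", r.status)
except urllib.error.HTTPError as e:
    print("server up, HTTP", e.code)   # an HTTP error still means it answered
except urllib.error.URLError as e:
    print("unreachable:", e.reason)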
Oct 02 19:09:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:09:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:09:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:05 compute-0 systemd[1]: libpod-0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a.scope: Deactivated successfully.
Oct 02 19:09:05 compute-0 podman[193295]: 2025-10-02 19:09:05.581224238 +0000 UTC m=+0.963515892 container died 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb71559138ef4215e051270524ab7de74a58a5fd9b4052f518d5b904623de21e-merged.mount: Deactivated successfully.
Oct 02 19:09:05 compute-0 podman[193295]: 2025-10-02 19:09:05.677029545 +0000 UTC m=+1.059321199 container remove 0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a (image=quay.io/ceph/ceph:v18, name=practical_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:05 compute-0 systemd[1]: libpod-conmon-0c623d47411d9a2f8a2fc2a620c33361edba8006521d3d0f328b0ce41fc1f96a.scope: Deactivated successfully.
Oct 02 19:09:05 compute-0 podman[193389]: 2025-10-02 19:09:05.795189019 +0000 UTC m=+0.083333270 container create 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:05 compute-0 podman[193389]: 2025-10-02 19:09:05.756975525 +0000 UTC m=+0.045119816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:05 compute-0 systemd[1]: Started libpod-conmon-917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549.scope.
Oct 02 19:09:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329005bfcbef8bf983ad47526772bbbec32e2bb8ccd296ef8501ec16b87d9e73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329005bfcbef8bf983ad47526772bbbec32e2bb8ccd296ef8501ec16b87d9e73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/329005bfcbef8bf983ad47526772bbbec32e2bb8ccd296ef8501ec16b87d9e73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:05 compute-0 podman[193389]: 2025-10-02 19:09:05.966542351 +0000 UTC m=+0.254686612 container init 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:05 compute-0 podman[193389]: 2025-10-02 19:09:05.976322027 +0000 UTC m=+0.264466288 container start 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:05 compute-0 podman[193389]: 2025-10-02 19:09:05.990350126 +0000 UTC m=+0.278494427 container attach 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:09:06 compute-0 ceph-mon[191910]: [02/Oct/2025:19:09:05] ENGINE Bus STARTING
Oct 02 19:09:06 compute-0 ceph-mon[191910]: [02/Oct/2025:19:09:05] ENGINE Serving on https://192.168.122.100:7150
Oct 02 19:09:06 compute-0 ceph-mon[191910]: [02/Oct/2025:19:09:05] ENGINE Client ('192.168.122.100', 55876) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 02 19:09:06 compute-0 ceph-mon[191910]: mgrmap e8: compute-0.uktbkz(active, since 2s)
Oct 02 19:09:06 compute-0 ceph-mon[191910]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:06 compute-0 ceph-mon[191910]: [02/Oct/2025:19:09:05] ENGINE Serving on http://192.168.122.100:8765
Oct 02 19:09:06 compute-0 ceph-mon[191910]: [02/Oct/2025:19:09:05] ENGINE Bus STARTED
Oct 02 19:09:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 02 19:09:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: [cephadm INFO root] Set ssh ssh_user
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 02 19:09:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 02 19:09:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: [cephadm INFO root] Set ssh ssh_config
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 02 19:09:06 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 02 19:09:06 compute-0 silly_keldysh[193406]: ssh user set to ceph-admin. sudo will be used
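(`cephadm set-user ceph-admin` stores mgr/cephadm/ssh_user, alongside the ssh_config written just above, in the config-key store; because the user is not root, cephadm notes it will use sudo on managed hosts. A sketch of the same step from an admin shell, mirroring the audited command; requires client.admin privileges.)

import subprocess

# Mirrors the audited `cephadm set-user` call above.
subprocess.run(["ceph", "cephadm", "set-user", "ceph-admin"], check=True)
# Optional: confirm what was stored.
subprocess.run(["ceph", "config-key", "get", "mgr/cephadm/ssh_user"],
               check=True)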
Oct 02 19:09:06 compute-0 systemd[1]: libpod-917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549.scope: Deactivated successfully.
Oct 02 19:09:06 compute-0 podman[193389]: 2025-10-02 19:09:06.675565205 +0000 UTC m=+0.963709496 container died 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-329005bfcbef8bf983ad47526772bbbec32e2bb8ccd296ef8501ec16b87d9e73-merged.mount: Deactivated successfully.
Oct 02 19:09:06 compute-0 podman[193389]: 2025-10-02 19:09:06.80403397 +0000 UTC m=+1.092178231 container remove 917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549 (image=quay.io/ceph/ceph:v18, name=silly_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:09:06 compute-0 systemd[1]: libpod-conmon-917f44f9d445d617c52c447f2f565f596a33ea514d9c8da36b2cc671bc7ab549.scope: Deactivated successfully.
Oct 02 19:09:06 compute-0 podman[193442]: 2025-10-02 19:09:06.890512562 +0000 UTC m=+0.053940188 container create 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:09:06 compute-0 systemd[1]: Started libpod-conmon-76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb.scope.
Oct 02 19:09:06 compute-0 podman[193442]: 2025-10-02 19:09:06.869093049 +0000 UTC m=+0.032520675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:07 compute-0 podman[193442]: 2025-10-02 19:09:07.068357424 +0000 UTC m=+0.231785060 container init 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:09:07 compute-0 podman[193442]: 2025-10-02 19:09:07.086139831 +0000 UTC m=+0.249567447 container start 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:09:07 compute-0 podman[193442]: 2025-10-02 19:09:07.104950575 +0000 UTC m=+0.268378201 container attach 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:07 compute-0 ceph-mon[191910]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:07 compute-0 ceph-mon[191910]: Set ssh ssh_user
Oct 02 19:09:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:07 compute-0 ceph-mon[191910]: Set ssh ssh_config
Oct 02 19:09:07 compute-0 ceph-mon[191910]: ssh user set to ceph-admin. sudo will be used
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 02 19:09:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: [cephadm INFO root] Set ssh private key
Oct 02 19:09:07 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 02 19:09:07 compute-0 systemd[1]: libpod-76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb.scope: Deactivated successfully.
Oct 02 19:09:07 compute-0 conmon[193458]: conmon 76944b8bb3e00fd981e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb.scope/container/memory.events
Oct 02 19:09:07 compute-0 podman[193442]: 2025-10-02 19:09:07.702026281 +0000 UTC m=+0.865453917 container died 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9eb0e5bc0f9ad8b89ebd264aa0cd5f984e676c003eb127c20f300252cee3af-merged.mount: Deactivated successfully.
Oct 02 19:09:07 compute-0 podman[193442]: 2025-10-02 19:09:07.771860565 +0000 UTC m=+0.935288171 container remove 76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb (image=quay.io/ceph/ceph:v18, name=happy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:09:07 compute-0 systemd[1]: libpod-conmon-76944b8bb3e00fd981e8409ddc7764ad92583481d262557024af104e5a4779bb.scope: Deactivated successfully.
Oct 02 19:09:07 compute-0 podman[193494]: 2025-10-02 19:09:07.915074367 +0000 UTC m=+0.104109135 container create ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:09:07 compute-0 podman[193494]: 2025-10-02 19:09:07.878767144 +0000 UTC m=+0.067801962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:07 compute-0 systemd[1]: Started libpod-conmon-ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12.scope.
Oct 02 19:09:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:08 compute-0 podman[193494]: 2025-10-02 19:09:08.103733204 +0000 UTC m=+0.292767942 container init ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:09:08 compute-0 podman[193494]: 2025-10-02 19:09:08.117112875 +0000 UTC m=+0.306147643 container start ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:09:08 compute-0 podman[193494]: 2025-10-02 19:09:08.160456184 +0000 UTC m=+0.349490932 container attach ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:09:08 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 02 19:09:08 compute-0 ceph-mon[191910]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:08 compute-0 ceph-mon[191910]: Set ssh ssh_identity_key
Oct 02 19:09:08 compute-0 ceph-mon[191910]: Set ssh private key
Oct 02 19:09:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:08 compute-0 ceph-mgr[192222]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 02 19:09:08 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
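(The set-priv-key / set-pub-key pair stores the cluster SSH identity under mgr/cephadm/ssh_identity_key and ssh_identity_pub; cephadm uses this keypair to reach every managed host as ceph-admin. A sketch of the two calls; the key paths match the bind mounts visible in the helper containers above, but treat them as assumptions.)

import subprocess

KEY = "/tmp/cephadm-ssh-key"       # paths as mounted into the
PUB = "/tmp/cephadm-ssh-key.pub"   # helper containers; adjust as needed

subprocess.run(["ceph", "cephadm", "set-priv-key", "-i", KEY], check=True)
subprocess.run(["ceph", "cephadm", "set-pub-key", "-i", PUB], check=True)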
Oct 02 19:09:08 compute-0 systemd[1]: libpod-ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12.scope: Deactivated successfully.
Oct 02 19:09:08 compute-0 podman[193494]: 2025-10-02 19:09:08.839148173 +0000 UTC m=+1.028182941 container died ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a7ef3c613884e62fe9f19b25b5ac65802ebdd9cf30216eb14fe131cd837f96-merged.mount: Deactivated successfully.
Oct 02 19:09:08 compute-0 podman[193494]: 2025-10-02 19:09:08.917011649 +0000 UTC m=+1.106046397 container remove ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12 (image=quay.io/ceph/ceph:v18, name=awesome_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:09:08 compute-0 systemd[1]: libpod-conmon-ea2d37237ba5c07484ca57ac5722e68f3fddfcda49abb1555db3aac32e461f12.scope: Deactivated successfully.
Oct 02 19:09:08 compute-0 podman[193537]: 2025-10-02 19:09:08.997884233 +0000 UTC m=+0.106354675 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc.)
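(Interleaved with the bootstrap, the edpm_ansible-managed services on this node, podman_exporter, ceilometer_agent_compute, ovn_controller and openstack_network_exporter, emit periodic health_status=healthy events: podman running each container's configured healthcheck. The same check can be run on demand, as a sketch; the container name is from the log.)

import subprocess

# Runs the container's configured healthcheck once; exit code 0 = healthy.
proc = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"])
print("healthy" if proc.returncode == 0
      else f"unhealthy (exit {proc.returncode})")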
Oct 02 19:09:09 compute-0 podman[193564]: 2025-10-02 19:09:09.012278391 +0000 UTC m=+0.066312753 container create ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:09 compute-0 systemd[1]: Started libpod-conmon-ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff.scope.
Oct 02 19:09:09 compute-0 podman[193539]: 2025-10-02 19:09:09.072283228 +0000 UTC m=+0.164016200 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:09 compute-0 podman[193564]: 2025-10-02 19:09:08.982732545 +0000 UTC m=+0.036766947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053053 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3e55bf72b38c11103ea36425f99b7c75266e2d38d7b913e7c33a7bbddcba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3e55bf72b38c11103ea36425f99b7c75266e2d38d7b913e7c33a7bbddcba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3e55bf72b38c11103ea36425f99b7c75266e2d38d7b913e7c33a7bbddcba2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:09 compute-0 podman[193564]: 2025-10-02 19:09:09.140221992 +0000 UTC m=+0.194256424 container init ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:09:09 compute-0 podman[193564]: 2025-10-02 19:09:09.154831496 +0000 UTC m=+0.208865848 container start ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:09:09 compute-0 podman[193564]: 2025-10-02 19:09:09.159354175 +0000 UTC m=+0.213388627 container attach ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:09:09 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:09 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:09 compute-0 zen_black[193602]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpuVBL61OnVbQfhj8ZPTSGlBsgSpgVbO4Vwdg7tARA/FkpZ2E0dYb2ppJRdy+JLgUifulnBOiAktlPyQ9XuexxSKI9WVuD/oxIVhpkqPYF1PhLLsOrGBCD3ZG2zWOGgbKUfYFFOc384xXcwALtKnbmus2PIStqghle2rYZjbCVS+O6+HyzqjJvC/oBIeDM9ZXVAmEu0ryF2E7JysFzPvw7AKMNhkdPF4u0uKDI5MVQsDWDikF9IJuoQMH9/dzK4uiytPqp8PTUp3T4j9xsU9aBtsCurmQEbayUYlKvO6OpThVAkSsA2fiiFyuYiHUgsAlX2nKN0Ex8FcsCh4tsrWEXILaeySEj0sAJrPYpWuyuO21V/ijSvIsOEg1DY30wVJCmsylTh/RYfFrdX6zZTVLq8r1WreILWeuPQ3ThYrQTiu1UnNZNYP24xEszFX6RI3Jk3kBRZko+He/ZLVJTnC9pnsDai+9P6FWXabZfP4pcgAFx4XsjpsKscEriqdj9jms= zuul@controller
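(`cephadm get-pub-key`, dispatched just above and answered by the `zen_black` helper, prints the public half of the identity stored earlier; it is what gets appended to the ceph-admin user's authorized_keys on every host cephadm should manage. A sketch of doing that on the local host; the user and path are assumptions consistent with the set-user step.)

import pathlib, subprocess

pub = subprocess.run(["ceph", "cephadm", "get-pub-key"],
                     capture_output=True, text=True, check=True).stdout
auth = pathlib.Path("/home/ceph-admin/.ssh/authorized_keys")
auth.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
with auth.open("a") as f:
    f.write(pub)  # append, so existing keys are preserved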
Oct 02 19:09:09 compute-0 systemd[1]: libpod-ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff.scope: Deactivated successfully.
Oct 02 19:09:09 compute-0 ceph-mon[191910]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:09 compute-0 ceph-mon[191910]: Set ssh ssh_identity_pub
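[The audit lines above are cephadm's SSH identity bootstrap: 'cephadm get-pub-key' asks the mgr for the current orchestrator key, the ssh-rsa line printed by zen_black is the key being supplied, and 'cephadm set-pub-key' stores it as ssh_identity_pub. A minimal sketch of the same round-trip with the plain ceph CLI; the helper names and key-file path are illustrative, not cephadm's own code:]

    import subprocess

    def set_orchestrator_pub_key(pub_key_path: str = "ceph.pub") -> None:
        # Same dispatch as the 'cephadm set-pub-key' audit line above;
        # the key file path here is an assumption for the sketch.
        subprocess.run(
            ["ceph", "cephadm", "set-pub-key", "-i", pub_key_path],
            check=True,
        )

    def get_orchestrator_pub_key() -> str:
        # Same dispatch as the 'cephadm get-pub-key' audit line above.
        return subprocess.run(
            ["ceph", "cephadm", "get-pub-key"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()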
Oct 02 19:09:09 compute-0 podman[193628]: 2025-10-02 19:09:09.868895764 +0000 UTC m=+0.051668029 container died ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a3e55bf72b38c11103ea36425f99b7c75266e2d38d7b913e7c33a7bbddcba2-merged.mount: Deactivated successfully.
Oct 02 19:09:09 compute-0 podman[193628]: 2025-10-02 19:09:09.930642406 +0000 UTC m=+0.113414651 container remove ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff (image=quay.io/ceph/ceph:v18, name=zen_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:09 compute-0 systemd[1]: libpod-conmon-ffd0e183f5d502bbc510fdce1fde15455bfb6e8b125db548aa16cbe39b9be9ff.scope: Deactivated successfully.
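[Every ceph CLI call in this deployment runs in a throwaway container, which is why each random name (zen_black, boring_turing, ...) appears exactly once across a create, init, start, attach, died, remove sequence lasting under a second. A roughly equivalent one-shot invocation, assuming podman and the image named in the events above are present; this is a sketch, not the orchestrator's actual call:]

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v18"  # image named in the podman events above

    def run_one_shot(*args: str) -> str:
        # --rm removes the container on exit, matching the immediate
        # 'remove' event that follows each 'died' event in the log.
        out = subprocess.run(
            ["podman", "run", "--rm", IMAGE, *args],
            check=True, capture_output=True, text=True,
        )
        return out.stdout

    print(run_one_shot("ceph", "--version"))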
Oct 02 19:09:10 compute-0 podman[193645]: 2025-10-02 19:09:10.066657949 +0000 UTC m=+0.080020343 container create 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:09:10 compute-0 podman[193645]: 2025-10-02 19:09:10.040224264 +0000 UTC m=+0.053586748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:10 compute-0 systemd[1]: Started libpod-conmon-25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398.scope.
Oct 02 19:09:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae84483d2070736e2b8ea3f0e74ea39ec13901eda8a9aee380d133b64e92845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae84483d2070736e2b8ea3f0e74ea39ec13901eda8a9aee380d133b64e92845/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae84483d2070736e2b8ea3f0e74ea39ec13901eda8a9aee380d133b64e92845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
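[The kernel's "(0x7fffffff)" in the three xfs lines above is the 32-bit signed time_t ceiling; a quick check of where that limit lands, purely illustrative:]

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # value printed by the kernel above
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00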
Oct 02 19:09:10 compute-0 podman[193645]: 2025-10-02 19:09:10.246638757 +0000 UTC m=+0.260001231 container init 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:10 compute-0 podman[193645]: 2025-10-02 19:09:10.263151851 +0000 UTC m=+0.276514275 container start 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:10 compute-0 podman[193645]: 2025-10-02 19:09:10.269745694 +0000 UTC m=+0.283108168 container attach 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:10 compute-0 ceph-mon[191910]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:10 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:11 compute-0 sshd-session[193688]: Accepted publickey for ceph-admin from 192.168.122.100 port 58410 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:11 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 19:09:11 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 19:09:11 compute-0 systemd-logind[793]: New session 28 of user ceph-admin.
Oct 02 19:09:11 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 19:09:11 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 02 19:09:11 compute-0 systemd[193692]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:11 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:11 compute-0 sshd-session[193697]: Accepted publickey for ceph-admin from 192.168.122.100 port 54678 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:11 compute-0 systemd-logind[793]: New session 30 of user ceph-admin.
Oct 02 19:09:11 compute-0 systemd[193692]: Queued start job for default target Main User Target.
Oct 02 19:09:11 compute-0 systemd[193692]: Created slice User Application Slice.
Oct 02 19:09:11 compute-0 systemd[193692]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 19:09:11 compute-0 systemd[193692]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 19:09:11 compute-0 systemd[193692]: Reached target Paths.
Oct 02 19:09:11 compute-0 systemd[193692]: Reached target Timers.
Oct 02 19:09:11 compute-0 systemd[193692]: Starting D-Bus User Message Bus Socket...
Oct 02 19:09:11 compute-0 systemd[193692]: Starting Create User's Volatile Files and Directories...
Oct 02 19:09:11 compute-0 systemd[193692]: Finished Create User's Volatile Files and Directories.
Oct 02 19:09:11 compute-0 systemd[193692]: Listening on D-Bus User Message Bus Socket.
Oct 02 19:09:11 compute-0 systemd[193692]: Reached target Sockets.
Oct 02 19:09:11 compute-0 systemd[193692]: Reached target Basic System.
Oct 02 19:09:11 compute-0 systemd[193692]: Reached target Main User Target.
Oct 02 19:09:11 compute-0 systemd[193692]: Startup finished in 248ms.
Oct 02 19:09:11 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 02 19:09:11 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 02 19:09:11 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 02 19:09:11 compute-0 sshd-session[193688]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:11 compute-0 sshd-session[193697]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:11 compute-0 sudo[193711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:11 compute-0 sudo[193711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:11 compute-0 sudo[193711]: pam_unix(sudo:session): session closed for user root
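[The 'sudo /bin/true' entries that precede every real command in this section are connectivity probes: before doing work over SSH, the orchestrator confirms that passwordless sudo works for ceph-admin. A sketch of the pattern; the host and user come from the log, the helper name is ours:]

    import subprocess

    def probe_passwordless_sudo(host: str = "192.168.122.100",
                                user: str = "ceph-admin") -> bool:
        # A zero exit status from 'sudo /bin/true' proves both the SSH
        # session and the NOPASSWD sudo rule, with no side effects.
        r = subprocess.run(
            ["ssh", f"{user}@{host}", "sudo", "/bin/true"],
            capture_output=True,
        )
        return r.returncode == 0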
Oct 02 19:09:11 compute-0 ceph-mon[191910]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:11 compute-0 sudo[193736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:11 compute-0 sudo[193736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:11 compute-0 sudo[193736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:12 compute-0 sshd-session[193761]: Accepted publickey for ceph-admin from 192.168.122.100 port 54694 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:12 compute-0 systemd-logind[793]: New session 31 of user ceph-admin.
Oct 02 19:09:12 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 02 19:09:12 compute-0 sshd-session[193761]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:12 compute-0 sudo[193765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:12 compute-0 sudo[193765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:12 compute-0 sudo[193765]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:12 compute-0 sudo[193790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 19:09:12 compute-0 sudo[193790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:12 compute-0 sudo[193790]: pam_unix(sudo:session): session closed for user root
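[check-host is the first real cephadm subcommand run on the node: it validates prerequisites such as the expected hostname, a container engine, and time synchronization before the host is enrolled. The invocation from the sudo line above, reproduced for reference with the binary path, fsid, and flags taken verbatim from the log:]

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "--timeout", "895", "check-host", "--expect-hostname", "compute-0"],
        check=True,
    )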
Oct 02 19:09:12 compute-0 sshd-session[193815]: Accepted publickey for ceph-admin from 192.168.122.100 port 54696 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:12 compute-0 systemd-logind[793]: New session 32 of user ceph-admin.
Oct 02 19:09:12 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 02 19:09:12 compute-0 sshd-session[193815]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:13 compute-0 sudo[193819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:13 compute-0 sudo[193819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:13 compute-0 sudo[193819]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:13 compute-0 sudo[193844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 19:09:13 compute-0 sudo[193844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:13 compute-0 sudo[193844]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:13 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 02 19:09:13 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 02 19:09:13 compute-0 ceph-mon[191910]: Deploying cephadm binary to compute-0
Oct 02 19:09:13 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:13 compute-0 sshd-session[193869]: Accepted publickey for ceph-admin from 192.168.122.100 port 54700 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:13 compute-0 systemd-logind[793]: New session 33 of user ceph-admin.
Oct 02 19:09:13 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 02 19:09:13 compute-0 sshd-session[193869]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:13 compute-0 podman[193872]: 2025-10-02 19:09:13.649057918 +0000 UTC m=+0.134080804 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
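[The health_status=healthy events interleaved through this section come from podman's healthcheck timers; the config_data above wires the /openstack/healthcheck script into the container. The same check can be triggered by hand, as a sketch, with the container name from the log:]

    import subprocess

    # 'podman healthcheck run' exits 0 when the configured test passes.
    r = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if r.returncode == 0 else "unhealthy")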
Oct 02 19:09:13 compute-0 sudo[193885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:13 compute-0 sudo[193885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:13 compute-0 sudo[193885]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:13 compute-0 sudo[193921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:13 compute-0 sudo[193921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:13 compute-0 sudo[193921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:14 compute-0 sshd-session[193946]: Accepted publickey for ceph-admin from 192.168.122.100 port 54716 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:14 compute-0 systemd-logind[793]: New session 34 of user ceph-admin.
Oct 02 19:09:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:14 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct 02 19:09:14 compute-0 sshd-session[193946]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:14 compute-0 sudo[193950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:14 compute-0 sudo[193950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:14 compute-0 sudo[193950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:14 compute-0 sudo[193975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:14 compute-0 sudo[193975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:14 compute-0 sudo[193975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:14 compute-0 podman[193999]: 2025-10-02 19:09:14.63696107 +0000 UTC m=+0.127052799 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:09:14 compute-0 sshd-session[194019]: Accepted publickey for ceph-admin from 192.168.122.100 port 54724 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:14 compute-0 systemd-logind[793]: New session 35 of user ceph-admin.
Oct 02 19:09:14 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Oct 02 19:09:14 compute-0 sshd-session[194019]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:14 compute-0 sudo[194023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:14 compute-0 sudo[194023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:14 compute-0 sudo[194023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:15 compute-0 sudo[194048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 19:09:15 compute-0 sudo[194048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:15 compute-0 sudo[194048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:15 compute-0 sshd-session[194073]: Accepted publickey for ceph-admin from 192.168.122.100 port 54740 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:15 compute-0 systemd-logind[793]: New session 36 of user ceph-admin.
Oct 02 19:09:15 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Oct 02 19:09:15 compute-0 sshd-session[194073]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:15 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:15 compute-0 sudo[194077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:15 compute-0 sudo[194077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:15 compute-0 sudo[194077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:15 compute-0 sudo[194102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:15 compute-0 sudo[194102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:15 compute-0 sudo[194102]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:15 compute-0 sshd-session[194127]: Accepted publickey for ceph-admin from 192.168.122.100 port 54754 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:15 compute-0 systemd-logind[793]: New session 37 of user ceph-admin.
Oct 02 19:09:15 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Oct 02 19:09:15 compute-0 sshd-session[194127]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:16 compute-0 sudo[194131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:16 compute-0 sudo[194131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:16 compute-0 sudo[194131]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:16 compute-0 sudo[194156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 19:09:16 compute-0 sudo[194156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:16 compute-0 sudo[194156]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:16 compute-0 sshd-session[194181]: Accepted publickey for ceph-admin from 192.168.122.100 port 54756 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:16 compute-0 systemd-logind[793]: New session 38 of user ceph-admin.
Oct 02 19:09:16 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Oct 02 19:09:16 compute-0 sshd-session[194181]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:17 compute-0 sshd-session[194208]: Accepted publickey for ceph-admin from 192.168.122.100 port 54766 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:17 compute-0 systemd-logind[793]: New session 39 of user ceph-admin.
Oct 02 19:09:17 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Oct 02 19:09:17 compute-0 sshd-session[194208]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:17 compute-0 sudo[194212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:17 compute-0 sudo[194212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:17 compute-0 sudo[194212]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:17 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:17 compute-0 sudo[194237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 19:09:17 compute-0 sudo[194237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:17 compute-0 sudo[194237]: pam_unix(sudo:session): session closed for user root
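[The five sudo commands above (mkdir -p, touch ...new, chown, chmod 644, mv) are a staged, rename-into-place deployment of the cephadm binary: the file is fully written and permissioned under /tmp/cephadm-<fsid>/ before a single mv publishes it, so /var/lib/ceph never holds a half-written copy. A condensed sketch of the pattern, simplified to stage next to the destination; note os.replace is atomic only within one filesystem, whereas /bin/mv in the log also handles the cross-filesystem case:]

    import os

    def install_atomically(data: bytes, dest: str, mode: int = 0o644) -> None:
        staged = dest + ".new"                             # touch ....new
        os.makedirs(os.path.dirname(dest), exist_ok=True)  # mkdir -p
        with open(staged, "wb") as f:                      # write the payload
            f.write(data)
        os.chmod(staged, mode)                             # chmod 644
        os.replace(staged, dest)                           # mv into place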
Oct 02 19:09:17 compute-0 sshd-session[194262]: Accepted publickey for ceph-admin from 192.168.122.100 port 54774 ssh2: RSA SHA256:8UDIrmWQ3f2YlQUx/D4IcSJEMjMc9LCtdqeeOKt60sQ
Oct 02 19:09:17 compute-0 systemd-logind[793]: New session 40 of user ceph-admin.
Oct 02 19:09:17 compute-0 systemd[1]: Started Session 40 of User ceph-admin.
Oct 02 19:09:17 compute-0 sshd-session[194262]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 19:09:17 compute-0 podman[194265]: 2025-10-02 19:09:17.895687136 +0000 UTC m=+0.113148484 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Oct 02 19:09:17 compute-0 sudo[194276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:17 compute-0 sudo[194276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:17 compute-0 sudo[194276]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 sudo[194310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 19:09:18 compute-0 sudo[194310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:18 compute-0 sudo[194310]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:18 compute-0 ceph-mgr[192222]: [cephadm INFO root] Added host compute-0
Oct 02 19:09:18 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 19:09:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:09:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:18 compute-0 boring_turing[193662]: Added host 'compute-0' with addr '192.168.122.100'
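[With the binary in place and check-host green, the 'orch host add' dispatched at 19:09:11 completes here. The equivalent CLI form, with the hostname and address taken from the log:]

    import subprocess

    subprocess.run(
        ["ceph", "orch", "host", "add", "compute-0", "192.168.122.100"],
        check=True,
    )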
Oct 02 19:09:18 compute-0 systemd[1]: libpod-25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398.scope: Deactivated successfully.
Oct 02 19:09:18 compute-0 podman[193645]: 2025-10-02 19:09:18.476923895 +0000 UTC m=+8.490286329 container died 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae84483d2070736e2b8ea3f0e74ea39ec13901eda8a9aee380d133b64e92845-merged.mount: Deactivated successfully.
Oct 02 19:09:18 compute-0 sudo[194355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:18 compute-0 sudo[194355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:18 compute-0 podman[193645]: 2025-10-02 19:09:18.561189988 +0000 UTC m=+8.574552402 container remove 25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398 (image=quay.io/ceph/ceph:v18, name=boring_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:09:18 compute-0 sudo[194355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 systemd[1]: libpod-conmon-25d4a5b8962e343218651529a771d7560afa406799ccafd5016d593d3e231398.scope: Deactivated successfully.
Oct 02 19:09:18 compute-0 sudo[194391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:18 compute-0 podman[194392]: 2025-10-02 19:09:18.648117692 +0000 UTC m=+0.058435006 container create 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:09:18 compute-0 sudo[194391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:18 compute-0 sudo[194391]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 systemd[1]: Started libpod-conmon-3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c.scope.
Oct 02 19:09:18 compute-0 podman[194392]: 2025-10-02 19:09:18.623520206 +0000 UTC m=+0.033837560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11024c3658e9e38718320f4372edbc873c6ffdf3419dac60a032fe65edafb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11024c3658e9e38718320f4372edbc873c6ffdf3419dac60a032fe65edafb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11024c3658e9e38718320f4372edbc873c6ffdf3419dac60a032fe65edafb29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:18 compute-0 sudo[194429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:18 compute-0 sudo[194429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:18 compute-0 sudo[194429]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 podman[194392]: 2025-10-02 19:09:18.791415996 +0000 UTC m=+0.201733310 container init 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:09:18 compute-0 podman[194392]: 2025-10-02 19:09:18.802468107 +0000 UTC m=+0.212785431 container start 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:18 compute-0 podman[194392]: 2025-10-02 19:09:18.807120669 +0000 UTC m=+0.217437973 container attach 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:18 compute-0 sudo[194459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 02 19:09:18 compute-0 sudo[194459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.242117566 +0000 UTC m=+0.065076590 container create 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:19 compute-0 systemd[1]: Started libpod-conmon-2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251.scope.
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.219165803 +0000 UTC m=+0.042124857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.350479053 +0000 UTC m=+0.173438107 container init 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.365613311 +0000 UTC m=+0.188572355 container start 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.375063659 +0000 UTC m=+0.198022673 container attach 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:09:19 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:19 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:19 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 02 19:09:19 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 02 19:09:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 19:09:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:19 compute-0 competent_payne[194449]: Scheduled mon update...
Oct 02 19:09:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:19 compute-0 ceph-mon[191910]: Added host compute-0
Oct 02 19:09:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:09:19 compute-0 systemd[1]: libpod-3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c.scope: Deactivated successfully.
Oct 02 19:09:19 compute-0 conmon[194449]: conmon 3a34bc971ac042480058 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c.scope/container/memory.events
Oct 02 19:09:19 compute-0 podman[194392]: 2025-10-02 19:09:19.465200467 +0000 UTC m=+0.875517781 container died 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11024c3658e9e38718320f4372edbc873c6ffdf3419dac60a032fe65edafb29-merged.mount: Deactivated successfully.
Oct 02 19:09:19 compute-0 podman[194392]: 2025-10-02 19:09:19.562922004 +0000 UTC m=+0.973239328 container remove 3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c (image=quay.io/ceph/ceph:v18, name=competent_payne, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:09:19 compute-0 systemd[1]: libpod-conmon-3a34bc971ac042480058caa830715671ade47b1f7839b6adc12fb0b90ef9142c.scope: Deactivated successfully.
Oct 02 19:09:19 compute-0 podman[194565]: 2025-10-02 19:09:19.677502834 +0000 UTC m=+0.080360212 container create f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:09:19 compute-0 quirky_merkle[194546]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.711131867 +0000 UTC m=+0.534090871 container died 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:19 compute-0 systemd[1]: libpod-2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251.scope: Deactivated successfully.
Oct 02 19:09:19 compute-0 podman[194565]: 2025-10-02 19:09:19.642979917 +0000 UTC m=+0.045837355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a72305f994fac285b60736cd737628c762e918db8be5b12f47b55bbad01e7049-merged.mount: Deactivated successfully.
Oct 02 19:09:19 compute-0 podman[194529]: 2025-10-02 19:09:19.788317055 +0000 UTC m=+0.611276049 container remove 2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251 (image=quay.io/ceph/ceph:v18, name=quirky_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:09:19 compute-0 systemd[1]: Started libpod-conmon-f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064.scope.
Oct 02 19:09:19 compute-0 systemd[1]: libpod-conmon-2877e738ce4af4620b62142e3b6c7ddfc130ec736bd07b06334836b6edc9a251.scope: Deactivated successfully.
Oct 02 19:09:19 compute-0 sudo[194459]: pam_unix(sudo:session): session closed for user root
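[inspect-image is how cephadm pins the exact build it will deploy: the quirky_merkle line above printing "ceph version 18.2.7 ... reef (stable)" is that probe answering. The invocation from the sudo line, verbatim from the log:]

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    subprocess.run(
        ["sudo", "/bin/python3", CEPHADM,
         "--image", "quay.io/ceph/ceph:v18", "--timeout", "895",
         "inspect-image"],
        check=True,
    )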
Oct 02 19:09:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 02 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb9c0b0e5071f38b091ccd2c502af48ff09673e1be6efb177918017dd547b1d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb9c0b0e5071f38b091ccd2c502af48ff09673e1be6efb177918017dd547b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb9c0b0e5071f38b091ccd2c502af48ff09673e1be6efb177918017dd547b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:19 compute-0 podman[194565]: 2025-10-02 19:09:19.895250514 +0000 UTC m=+0.298107872 container init f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:19 compute-0 podman[194565]: 2025-10-02 19:09:19.912562539 +0000 UTC m=+0.315419897 container start f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:19 compute-0 podman[194565]: 2025-10-02 19:09:19.917891359 +0000 UTC m=+0.320748727 container attach f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:20 compute-0 sudo[194597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:20 compute-0 sudo[194597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:20 compute-0 sudo[194597]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 sudo[194623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:20 compute-0 sudo[194623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:20 compute-0 sudo[194623]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 sudo[194648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:20 compute-0 sudo[194648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:20 compute-0 sudo[194648]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 sudo[194674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 19:09:20 compute-0 sudo[194674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:20 compute-0 ceph-mon[191910]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:20 compute-0 ceph-mon[191910]: Saving service mon spec with placement count:5
Oct 02 19:09:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:20 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:20 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 02 19:09:20 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 02 19:09:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:20 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:20 compute-0 lucid_solomon[194593]: Scheduled mgr update...
Oct 02 19:09:20 compute-0 systemd[1]: libpod-f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064.scope: Deactivated successfully.
Oct 02 19:09:20 compute-0 podman[194565]: 2025-10-02 19:09:20.567845642 +0000 UTC m=+0.970703020 container died f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb9c0b0e5071f38b091ccd2c502af48ff09673e1be6efb177918017dd547b1d-merged.mount: Deactivated successfully.
Oct 02 19:09:20 compute-0 podman[194565]: 2025-10-02 19:09:20.648926992 +0000 UTC m=+1.051784370 container remove f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064 (image=quay.io/ceph/ceph:v18, name=lucid_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:20 compute-0 systemd[1]: libpod-conmon-f6d16ecf56794cbf02ff1f7cdf9a298b6e1014dfa344bc5e3ab2c05f5705d064.scope: Deactivated successfully.
Oct 02 19:09:20 compute-0 sudo[194674]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:20 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:20 compute-0 podman[194748]: 2025-10-02 19:09:20.780799097 +0000 UTC m=+0.088261810 container create 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:09:20 compute-0 podman[194748]: 2025-10-02 19:09:20.744992816 +0000 UTC m=+0.052455499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:20 compute-0 systemd[1]: Started libpod-conmon-24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7.scope.
Oct 02 19:09:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0616969a07f814e2ba5f01fdf4a1933f1e51ad7dba9f183a356980bd8d2865e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0616969a07f814e2ba5f01fdf4a1933f1e51ad7dba9f183a356980bd8d2865e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0616969a07f814e2ba5f01fdf4a1933f1e51ad7dba9f183a356980bd8d2865e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:20 compute-0 sudo[194762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:20 compute-0 sudo[194762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:20 compute-0 sudo[194762]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 podman[194748]: 2025-10-02 19:09:20.958202497 +0000 UTC m=+0.265665280 container init 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:20 compute-0 podman[194748]: 2025-10-02 19:09:20.968248231 +0000 UTC m=+0.275710914 container start 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:20 compute-0 podman[194748]: 2025-10-02 19:09:20.973716565 +0000 UTC m=+0.281179278 container attach 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:09:21 compute-0 sudo[194793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:21 compute-0 sudo[194793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:21 compute-0 sudo[194793]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:21 compute-0 sudo[194820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:21 compute-0 sudo[194820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:21 compute-0 sudo[194820]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:21 compute-0 sudo[194845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:09:21 compute-0 sudo[194845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:21 compute-0 ceph-mgr[192222]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 02 19:09:21 compute-0 ceph-mon[191910]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:21 compute-0 ceph-mon[191910]: Saving service mgr spec with placement count:2
Oct 02 19:09:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:21 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:21 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service crash spec with placement *
Oct 02 19:09:21 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 02 19:09:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 19:09:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:21 compute-0 nostalgic_easley[194783]: Scheduled crash update...
Oct 02 19:09:21 compute-0 systemd[1]: libpod-24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7.scope: Deactivated successfully.
Oct 02 19:09:21 compute-0 podman[194748]: 2025-10-02 19:09:21.6035246 +0000 UTC m=+0.910987273 container died 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0616969a07f814e2ba5f01fdf4a1933f1e51ad7dba9f183a356980bd8d2865e1-merged.mount: Deactivated successfully.
Oct 02 19:09:21 compute-0 podman[194748]: 2025-10-02 19:09:21.718233273 +0000 UTC m=+1.025695956 container remove 24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7 (image=quay.io/ceph/ceph:v18, name=nostalgic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:09:21 compute-0 systemd[1]: libpod-conmon-24edd149860d4356740a143c5fbbf12950c8889147b74cb898b36b34f59cb4a7.scope: Deactivated successfully.
Oct 02 19:09:21 compute-0 podman[194936]: 2025-10-02 19:09:21.83345028 +0000 UTC m=+0.074888208 container create 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:09:21 compute-0 podman[194936]: 2025-10-02 19:09:21.805876346 +0000 UTC m=+0.047314294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:21 compute-0 systemd[1]: Started libpod-conmon-1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4.scope.
Oct 02 19:09:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36595b0b1725686f5f242685f7688bd68593c35e022387dc70fbcb6df40d454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36595b0b1725686f5f242685f7688bd68593c35e022387dc70fbcb6df40d454/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36595b0b1725686f5f242685f7688bd68593c35e022387dc70fbcb6df40d454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:21 compute-0 podman[194936]: 2025-10-02 19:09:21.972531344 +0000 UTC m=+0.213969282 container init 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:09:21 compute-0 podman[194936]: 2025-10-02 19:09:21.989459308 +0000 UTC m=+0.230897206 container start 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:09:21 compute-0 podman[194936]: 2025-10-02 19:09:21.99483513 +0000 UTC m=+0.236273038 container attach 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:22 compute-0 podman[194989]: 2025-10-02 19:09:22.132165037 +0000 UTC m=+0.103010087 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:09:22 compute-0 podman[194989]: 2025-10-02 19:09:22.469975502 +0000 UTC m=+0.440820522 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:09:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 02 19:09:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1797117205' entity='client.admin' 
Oct 02 19:09:22 compute-0 ceph-mon[191910]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:22 compute-0 ceph-mon[191910]: Saving service crash spec with placement *
Oct 02 19:09:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1797117205' entity='client.admin' 
Oct 02 19:09:22 compute-0 systemd[1]: libpod-1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4.scope: Deactivated successfully.
Oct 02 19:09:22 compute-0 podman[194936]: 2025-10-02 19:09:22.610712279 +0000 UTC m=+0.852150237 container died 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36595b0b1725686f5f242685f7688bd68593c35e022387dc70fbcb6df40d454-merged.mount: Deactivated successfully.
Oct 02 19:09:22 compute-0 podman[194936]: 2025-10-02 19:09:22.694689675 +0000 UTC m=+0.936127573 container remove 1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4 (image=quay.io/ceph/ceph:v18, name=sleepy_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:09:22 compute-0 systemd[1]: libpod-conmon-1e1abfef5f7f4ff4c1735e5e68748c73f76efa348dd275fcf4b629ad3e8d07c4.scope: Deactivated successfully.
Oct 02 19:09:22 compute-0 sudo[194845]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:22 compute-0 podman[195066]: 2025-10-02 19:09:22.806930813 +0000 UTC m=+0.072627889 container create 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:09:22 compute-0 sudo[195073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:22 compute-0 sudo[195073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:22 compute-0 sudo[195073]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:22 compute-0 podman[195066]: 2025-10-02 19:09:22.777537921 +0000 UTC m=+0.043235067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:22 compute-0 systemd[1]: Started libpod-conmon-85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21.scope.
Oct 02 19:09:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d8c62e478cdae8964a53dcd8faefdfbb2d1d75b596c5dd91c8c98a65b268c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d8c62e478cdae8964a53dcd8faefdfbb2d1d75b596c5dd91c8c98a65b268c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d8c62e478cdae8964a53dcd8faefdfbb2d1d75b596c5dd91c8c98a65b268c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:22 compute-0 sudo[195105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:22 compute-0 sudo[195105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:22 compute-0 sudo[195105]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:22 compute-0 podman[195066]: 2025-10-02 19:09:22.94763857 +0000 UTC m=+0.213335686 container init 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:09:22 compute-0 podman[195066]: 2025-10-02 19:09:22.981174801 +0000 UTC m=+0.246871877 container start 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:22 compute-0 podman[195066]: 2025-10-02 19:09:22.993793272 +0000 UTC m=+0.259490398 container attach 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:09:23 compute-0 sudo[195136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:23 compute-0 sudo[195136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:23 compute-0 sudo[195136]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:23 compute-0 sudo[195162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:09:23 compute-0 sudo[195162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:23 compute-0 ceph-mgr[192222]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 02 19:09:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 19:09:23 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 195218 (sysctl)
Oct 02 19:09:23 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 02 19:09:23 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 02 19:09:23 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 02 19:09:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:23 compute-0 systemd[1]: libpod-85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21.scope: Deactivated successfully.
Oct 02 19:09:23 compute-0 podman[195226]: 2025-10-02 19:09:23.666859864 +0000 UTC m=+0.039912260 container died 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9d8c62e478cdae8964a53dcd8faefdfbb2d1d75b596c5dd91c8c98a65b268c8-merged.mount: Deactivated successfully.
Oct 02 19:09:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:23 compute-0 ceph-mon[191910]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 02 19:09:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:23 compute-0 podman[195226]: 2025-10-02 19:09:23.741391062 +0000 UTC m=+0.114443448 container remove 85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21 (image=quay.io/ceph/ceph:v18, name=frosty_ritchie, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:23 compute-0 systemd[1]: libpod-conmon-85bfa05026a97ceb9fe80d2490adbe7a104981ea434f373ca5a5a29a406f6d21.scope: Deactivated successfully.
Oct 02 19:09:23 compute-0 podman[195244]: 2025-10-02 19:09:23.874654623 +0000 UTC m=+0.077077656 container create 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:09:23 compute-0 sudo[195162]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:23 compute-0 systemd[1]: Started libpod-conmon-5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510.scope.
Oct 02 19:09:23 compute-0 podman[195244]: 2025-10-02 19:09:23.841464461 +0000 UTC m=+0.043887534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454b107257a8179bb1c404bd83857976b92f4d3adc0c54c11760a992c3bc6178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454b107257a8179bb1c404bd83857976b92f4d3adc0c54c11760a992c3bc6178/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454b107257a8179bb1c404bd83857976b92f4d3adc0c54c11760a992c3bc6178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:23 compute-0 podman[195244]: 2025-10-02 19:09:23.990735151 +0000 UTC m=+0.193158174 container init 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:24 compute-0 podman[195244]: 2025-10-02 19:09:24.009584456 +0000 UTC m=+0.212007469 container start 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:09:24 compute-0 podman[195244]: 2025-10-02 19:09:24.01581214 +0000 UTC m=+0.218235253 container attach 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:24 compute-0 sudo[195270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:24 compute-0 sudo[195270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:24 compute-0 sudo[195270]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:24 compute-0 sudo[195300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:24 compute-0 sudo[195300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:24 compute-0 sudo[195300]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 sudo[195325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:24 compute-0 sudo[195325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:24 compute-0 sudo[195325]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 sudo[195352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 19:09:24 compute-0 sudo[195352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:24 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:24 compute-0 ceph-mgr[192222]: [cephadm INFO root] Added label _admin to host compute-0
Oct 02 19:09:24 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 02 19:09:24 compute-0 beautiful_bassi[195271]: Added label _admin to host compute-0
Oct 02 19:09:24 compute-0 systemd[1]: libpod-5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510.scope: Deactivated successfully.
Oct 02 19:09:24 compute-0 podman[195244]: 2025-10-02 19:09:24.632067979 +0000 UTC m=+0.834491002 container died 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-454b107257a8179bb1c404bd83857976b92f4d3adc0c54c11760a992c3bc6178-merged.mount: Deactivated successfully.
Oct 02 19:09:24 compute-0 podman[195244]: 2025-10-02 19:09:24.704678096 +0000 UTC m=+0.907101109 container remove 5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510 (image=quay.io/ceph/ceph:v18, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:09:24 compute-0 systemd[1]: libpod-conmon-5a328de22c5d5f85e9124e6ab6389cf56080b5f25ce5c8479119d18dc5d66510.scope: Deactivated successfully.
Oct 02 19:09:24 compute-0 sudo[195352]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:24 compute-0 ceph-mon[191910]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:24 compute-0 ceph-mon[191910]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:24 compute-0 podman[195424]: 2025-10-02 19:09:24.801226053 +0000 UTC m=+0.062589626 container create bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:09:24 compute-0 systemd[1]: Started libpod-conmon-bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5.scope.
Oct 02 19:09:24 compute-0 sudo[195434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:24 compute-0 sudo[195434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:24 compute-0 podman[195424]: 2025-10-02 19:09:24.77980906 +0000 UTC m=+0.041172663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:24 compute-0 sudo[195434]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223f5eb908e44fe5f92c8b5022b1cb9cc2968ca078b7d5adf9412b46272bd9d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223f5eb908e44fe5f92c8b5022b1cb9cc2968ca078b7d5adf9412b46272bd9d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223f5eb908e44fe5f92c8b5022b1cb9cc2968ca078b7d5adf9412b46272bd9d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:24 compute-0 podman[195424]: 2025-10-02 19:09:24.923520265 +0000 UTC m=+0.184883878 container init bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:09:24 compute-0 podman[195424]: 2025-10-02 19:09:24.939072674 +0000 UTC m=+0.200436257 container start bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:24 compute-0 podman[195424]: 2025-10-02 19:09:24.945165114 +0000 UTC m=+0.206528787 container attach bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:25 compute-0 sudo[195468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:25 compute-0 sudo[195468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:25 compute-0 sudo[195468]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:25 compute-0 sudo[195495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:25 compute-0 sudo[195495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:25 compute-0 sudo[195495]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:25 compute-0 sudo[195520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- inventory --format=json-pretty --filter-for-batch
Oct 02 19:09:25 compute-0 sudo[195520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 02 19:09:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616870699' entity='client.admin' 
Oct 02 19:09:25 compute-0 systemd[1]: libpod-bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5.scope: Deactivated successfully.
Oct 02 19:09:25 compute-0 podman[195587]: 2025-10-02 19:09:25.67282922 +0000 UTC m=+0.063137980 container died bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-223f5eb908e44fe5f92c8b5022b1cb9cc2968ca078b7d5adf9412b46272bd9d9-merged.mount: Deactivated successfully.
Oct 02 19:09:25 compute-0 podman[195599]: 2025-10-02 19:09:25.763111561 +0000 UTC m=+0.110741230 container remove bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5 (image=quay.io/ceph/ceph:v18, name=crazy_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:09:25 compute-0 ceph-mon[191910]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:25 compute-0 ceph-mon[191910]: Added label _admin to host compute-0
Oct 02 19:09:25 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2616870699' entity='client.admin' 
Oct 02 19:09:25 compute-0 systemd[1]: libpod-conmon-bd2f49cce5cb03c2986016053d19dcbec36a86c576ad6a6e14985aaca08be0a5.scope: Deactivated successfully.
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.820495419 +0000 UTC m=+0.082758905 container create 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:25 compute-0 systemd[1]: Started libpod-conmon-8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad.scope.
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.796513569 +0000 UTC m=+0.058777065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:25 compute-0 podman[195625]: 2025-10-02 19:09:25.905285886 +0000 UTC m=+0.090246451 container create ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:09:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.943895961 +0000 UTC m=+0.206159487 container init 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:09:25 compute-0 podman[195625]: 2025-10-02 19:09:25.869541097 +0000 UTC m=+0.054501692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.966485064 +0000 UTC m=+0.228748550 container start 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:09:25 compute-0 systemd[1]: Started libpod-conmon-ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b.scope.
Oct 02 19:09:25 compute-0 gallant_wing[195646]: 167 167
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.973897429 +0000 UTC m=+0.236160975 container attach 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:09:25 compute-0 systemd[1]: libpod-8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad.scope: Deactivated successfully.
Oct 02 19:09:25 compute-0 podman[195614]: 2025-10-02 19:09:25.975063679 +0000 UTC m=+0.237327165 container died 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9382106837137e3206831ace311ab82a288c5aba4f42402e49fa3ff7688b2cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9382106837137e3206831ace311ab82a288c5aba4f42402e49fa3ff7688b2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9382106837137e3206831ace311ab82a288c5aba4f42402e49fa3ff7688b2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e625316c4e462659212401a4a9ed007b4d27756afa2aaa9db4b7be8bb3169e7-merged.mount: Deactivated successfully.
Oct 02 19:09:26 compute-0 podman[195625]: 2025-10-02 19:09:26.053828849 +0000 UTC m=+0.238789484 container init ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:09:26 compute-0 podman[195625]: 2025-10-02 19:09:26.063626006 +0000 UTC m=+0.248586571 container start ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:26 compute-0 podman[195614]: 2025-10-02 19:09:26.077254864 +0000 UTC m=+0.339518350 container remove 8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:09:26 compute-0 podman[195625]: 2025-10-02 19:09:26.089078345 +0000 UTC m=+0.274038930 container attach ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:09:26 compute-0 systemd[1]: libpod-conmon-8fff7296c162ecf16c90ccd4ee99fb06af31cb66672eb1c6da6195ea3a426bad.scope: Deactivated successfully.
Oct 02 19:09:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 02 19:09:27 compute-0 ceph-mon[191910]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1518340984' entity='client.admin' 
Oct 02 19:09:27 compute-0 sleepy_ride[195653]: set mgr/dashboard/cluster/status
Oct 02 19:09:27 compute-0 systemd[1]: libpod-ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b.scope: Deactivated successfully.
Oct 02 19:09:27 compute-0 podman[195625]: 2025-10-02 19:09:27.38390984 +0000 UTC m=+1.568870435 container died ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9382106837137e3206831ace311ab82a288c5aba4f42402e49fa3ff7688b2cd-merged.mount: Deactivated successfully.
Oct 02 19:09:27 compute-0 podman[195625]: 2025-10-02 19:09:27.604950346 +0000 UTC m=+1.789910941 container remove ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b (image=quay.io/ceph/ceph:v18, name=sleepy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:09:27 compute-0 systemd[1]: libpod-conmon-ba0fbd503fd9fb79da2fa9ac28aac80c85753701de68e147f2f6b8cc8d0c470b.scope: Deactivated successfully.
Oct 02 19:09:27 compute-0 sudo[190690]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:28 compute-0 podman[195709]: 2025-10-02 19:09:28.009515034 +0000 UTC m=+0.116355458 container create 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:09:28 compute-0 podman[195709]: 2025-10-02 19:09:27.95608727 +0000 UTC m=+0.062927744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:28 compute-0 systemd[1]: Started libpod-conmon-34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d.scope.
Oct 02 19:09:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c44c94989fedebc8c5b07e4e90ace612517dbfbdda5389e0593202fbd3e5372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c44c94989fedebc8c5b07e4e90ace612517dbfbdda5389e0593202fbd3e5372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c44c94989fedebc8c5b07e4e90ace612517dbfbdda5389e0593202fbd3e5372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c44c94989fedebc8c5b07e4e90ace612517dbfbdda5389e0593202fbd3e5372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 sudo[195751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbosxpoklblzviiszqgvhxepknlriawk ; /usr/bin/python3'
Oct 02 19:09:28 compute-0 sudo[195751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:28 compute-0 podman[195709]: 2025-10-02 19:09:28.220307921 +0000 UTC m=+0.327148335 container init 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:28 compute-0 podman[195709]: 2025-10-02 19:09:28.232790879 +0000 UTC m=+0.339631283 container start 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:09:28 compute-0 podman[195709]: 2025-10-02 19:09:28.263951668 +0000 UTC m=+0.370792092 container attach 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:28 compute-0 python3[195753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
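For readability, the podman invocation recorded in the Ansible entry above unwraps to the following one-shot ceph client call (every flag is verbatim from the log; only the line wrapping is added):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/cephadm/use_repo_digest false

The container is disposable (--rm) and shares the host network and IPC namespaces, so the ceph CLI inside it reaches the local mon exactly as a host-installed client would; the corresponding mon_command dispatch shows up in the ceph-mon audit entries that follow.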
Oct 02 19:09:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1518340984' entity='client.admin' 
Oct 02 19:09:28 compute-0 ceph-mon[191910]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:28 compute-0 podman[195756]: 2025-10-02 19:09:28.532074341 +0000 UTC m=+0.142444283 container create 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:09:28 compute-0 podman[195756]: 2025-10-02 19:09:28.460014958 +0000 UTC m=+0.070384990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:28 compute-0 systemd[1]: Started libpod-conmon-890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c.scope.
Oct 02 19:09:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ec586dd0adab3c82042ab90b073e2aa914ad7190e105330d47b7ae037cb0cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ec586dd0adab3c82042ab90b073e2aa914ad7190e105330d47b7ae037cb0cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:28 compute-0 podman[195756]: 2025-10-02 19:09:28.723204682 +0000 UTC m=+0.333574674 container init 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:09:28 compute-0 podman[195756]: 2025-10-02 19:09:28.733720539 +0000 UTC m=+0.344090491 container start 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:09:28 compute-0 podman[195756]: 2025-10-02 19:09:28.757615176 +0000 UTC m=+0.367985128 container attach 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:09:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 02 19:09:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206728578' entity='client.admin' 
Oct 02 19:09:29 compute-0 systemd[1]: libpod-890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c.scope: Deactivated successfully.
Oct 02 19:09:29 compute-0 podman[195756]: 2025-10-02 19:09:29.390216395 +0000 UTC m=+1.000586337 container died 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5ec586dd0adab3c82042ab90b073e2aa914ad7190e105330d47b7ae037cb0cd-merged.mount: Deactivated successfully.
Oct 02 19:09:29 compute-0 podman[195756]: 2025-10-02 19:09:29.46693054 +0000 UTC m=+1.077300482 container remove 890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c (image=quay.io/ceph/ceph:v18, name=great_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:09:29 compute-0 systemd[1]: libpod-conmon-890fb77a41d43ca54e58505a081f69f3b6da939007ff1a0bac16ffac555ade0c.scope: Deactivated successfully.
Oct 02 19:09:29 compute-0 sudo[195751]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:29 compute-0 podman[157186]: time="2025-10-02T19:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:09:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23699 "" "Go-http-client/1.1"
Oct 02 19:09:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Oct 02 19:09:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4206728578' entity='client.admin' 
Oct 02 19:09:30 compute-0 ceph-mon[191910]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:30 compute-0 sudo[197973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rihcymhnoufihtnugxaxztcacbriszou ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432169.9225624-33576-251874905845146/async_wrapper.py j170841206523 30 /home/zuul/.ansible/tmp/ansible-tmp-1759432169.9225624-33576-251874905845146/AnsiballZ_command.py _'
Oct 02 19:09:30 compute-0 sudo[197973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]: [
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:     {
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "available": false,
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "ceph_device": false,
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "lsm_data": {},
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "lvs": [],
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "path": "/dev/sr0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "rejected_reasons": [
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "Has a FileSystem",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "Insufficient space (<5GB)"
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         ],
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         "sys_api": {
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "actuators": null,
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "device_nodes": "sr0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "devname": "sr0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "human_readable_size": "482.00 KB",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "id_bus": "ata",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "model": "QEMU DVD-ROM",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "nr_requests": "2",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "parent": "/dev/sr0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "partitions": {},
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "path": "/dev/sr0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "removable": "1",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "rev": "2.5+",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "ro": "0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "rotational": "0",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "sas_address": "",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "sas_device_handle": "",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "scheduler_mode": "mq-deadline",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "sectors": 0,
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "sectorsize": "2048",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "size": 493568.0,
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "support_discard": "2048",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "type": "disk",
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:             "vendor": "QEMU"
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:         }
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]:     }
Oct 02 19:09:30 compute-0 ecstatic_golick[195742]: ]
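The JSON block above is the device inventory that the cephadm ceph-volume run (container ecstatic_golick) reports back to the orchestrator: the only device found, /dev/sr0, is the QEMU DVD-ROM, and it is rejected as an OSD candidate ("Has a FileSystem", "Insufficient space (<5GB)"). One quick way to summarize rejections from such output, assuming jq is available and the JSON has been saved to a file (inventory.json is a hypothetical name, not taken from this run):

    # List each device path with its rejection reasons
    jq -r '.[] | "\(.path): \(.rejected_reasons | join(", "))"' inventory.json
    # -> /dev/sr0: Has a FileSystem, Insufficient space (<5GB)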
Oct 02 19:09:30 compute-0 systemd[1]: libpod-34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d.scope: Deactivated successfully.
Oct 02 19:09:30 compute-0 podman[195709]: 2025-10-02 19:09:30.666950425 +0000 UTC m=+2.773790829 container died 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:09:30 compute-0 systemd[1]: libpod-34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d.scope: Consumed 2.480s CPU time.
Oct 02 19:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c44c94989fedebc8c5b07e4e90ace612517dbfbdda5389e0593202fbd3e5372-merged.mount: Deactivated successfully.
Oct 02 19:09:30 compute-0 podman[195709]: 2025-10-02 19:09:30.747624794 +0000 UTC m=+2.854465178 container remove 34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:30 compute-0 ansible-async_wrapper.py[198044]: Invoked with j170841206523 30 /home/zuul/.ansible/tmp/ansible-tmp-1759432169.9225624-33576-251874905845146/AnsiballZ_command.py _
Oct 02 19:09:30 compute-0 systemd[1]: libpod-conmon-34e756b805d48a3f39378433163149dd6b3edcb9a83de67b36acfe6ffa60ec0d.scope: Deactivated successfully.
Oct 02 19:09:30 compute-0 ansible-async_wrapper.py[198133]: Starting module and watcher
Oct 02 19:09:30 compute-0 ansible-async_wrapper.py[198133]: Start watching 198134 (30)
Oct 02 19:09:30 compute-0 ansible-async_wrapper.py[198134]: Start module (198134)
Oct 02 19:09:30 compute-0 sudo[195520]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:30 compute-0 ansible-async_wrapper.py[198044]: Return async_wrapper task started.
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:30 compute-0 sudo[197973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:09:30 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 02 19:09:30 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 02 19:09:30 compute-0 python3[198135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:30 compute-0 sudo[198136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:30 compute-0 sudo[198136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:30 compute-0 sudo[198136]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.019962128 +0000 UTC m=+0.081824480 container create 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:31 compute-0 systemd[1]: Started libpod-conmon-1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253.scope.
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:30.987855785 +0000 UTC m=+0.049718217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:31 compute-0 sudo[198171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 19:09:31 compute-0 sudo[198171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198171]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108717f2fc874b845fa2320669feee7b4c6f04dad1ed8cb8e55efa676ca4b9a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108717f2fc874b845fa2320669feee7b4c6f04dad1ed8cb8e55efa676ca4b9a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.145853486 +0000 UTC m=+0.207715908 container init 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.163784136 +0000 UTC m=+0.225646478 container start 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.168847739 +0000 UTC m=+0.230710161 container attach 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:31 compute-0 sudo[198202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:31 compute-0 sudo[198202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198202]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 sudo[198228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph
Oct 02 19:09:31 compute-0 sudo[198228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198228]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:31 compute-0 openstack_network_exporter[159337]: ERROR   19:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:09:31 compute-0 openstack_network_exporter[159337]: ERROR   19:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:09:31 compute-0 openstack_network_exporter[159337]: ERROR   19:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:09:31 compute-0 openstack_network_exporter[159337]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:09:31 compute-0 openstack_network_exporter[159337]: ERROR   19:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:09:31 compute-0 sudo[198253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:31 compute-0 sudo[198253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198253]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 sudo[198289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.conf.new
Oct 02 19:09:31 compute-0 sudo[198289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 auditd[704]: Audit daemon rotating log files
Oct 02 19:09:31 compute-0 sudo[198289]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:09:31 compute-0 admiring_tesla[198198]: 
Oct 02 19:09:31 compute-0 admiring_tesla[198198]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
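The two admiring_tesla lines above are the stdout of the orch status --format json probe dispatched a moment earlier: the cephadm backend is reachable, unpaused, and running 10 serve workers. A minimal sketch of how a playbook might gate on that result, reusing the same mounts and flags as the logged command (the jq test itself is an assumption, not something this run performs):

    # Exit 0 only when the orchestrator reports itself available
    status=$(podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch status --format json)
    echo "$status" | jq -e '.available == true' >/dev/null && echo "cephadm orchestrator ready"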
Oct 02 19:09:31 compute-0 sudo[198322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:31 compute-0 sudo[198322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198322]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 systemd[1]: libpod-1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253.scope: Deactivated successfully.
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.781173554 +0000 UTC m=+0.843035896 container died 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:09:31 compute-0 ceph-mon[191910]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 19:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-108717f2fc874b845fa2320669feee7b4c6f04dad1ed8cb8e55efa676ca4b9a3-merged.mount: Deactivated successfully.
Oct 02 19:09:31 compute-0 podman[198159]: 2025-10-02 19:09:31.875864212 +0000 UTC m=+0.937726594 container remove 1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253 (image=quay.io/ceph/ceph:v18, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:09:31 compute-0 systemd[1]: libpod-conmon-1f206245bc4f880af5254be461dc60cf3be7ed486b19438437d9e79706751253.scope: Deactivated successfully.
Oct 02 19:09:31 compute-0 sudo[198350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:31 compute-0 ansible-async_wrapper.py[198134]: Module complete (198134)
Oct 02 19:09:31 compute-0 sudo[198350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:31 compute-0 sudo[198350]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:32 compute-0 sudo[198407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198407]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.conf.new
Oct 02 19:09:32 compute-0 sudo[198432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nccsfmoevnbxabrfpimympytrcdfowjy ; /usr/bin/python3'
Oct 02 19:09:32 compute-0 sudo[198497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:32 compute-0 sudo[198506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:32 compute-0 sudo[198506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 python3[198505]: ansible-ansible.legacy.async_status Invoked with jid=j170841206523.198044 mode=status _async_dir=/root/.ansible_async
Oct 02 19:09:32 compute-0 sudo[198497]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.conf.new
Oct 02 19:09:32 compute-0 sudo[198531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198531]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:32 compute-0 sudo[198560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198560]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erphvpjorpsaekqcvowaabtprmcnszyy ; /usr/bin/python3'
Oct 02 19:09:32 compute-0 sudo[198648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:32 compute-0 sudo[198609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.conf.new
Oct 02 19:09:32 compute-0 sudo[198609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198609]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:32 compute-0 sudo[198655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198655]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 python3[198653]: ansible-ansible.legacy.async_status Invoked with jid=j170841206523.198044 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 19:09:32 compute-0 sudo[198648]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 sudo[198680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 19:09:32 compute-0 sudo[198680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198680]: pam_unix(sudo:session): session closed for user root
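The ceph-admin sudo trail from 19:09:31 onward is cephadm's standard file-distribution sequence for /etc/ceph/ceph.conf: the mgr stages the new file under /tmp/cephadm-<fsid>, adjusts ownership and mode, then moves it into place so the live path is never left half-written. Condensed as a shell sketch (paths verbatim from the log; the mgr transfers the actual file contents between these steps, presumably over its cephadm SSH channel, which the journal does not show):

    fsid=6019f664-a1c2-5955-8391-692cb79a59f9
    stage=/tmp/cephadm-$fsid/etc/ceph
    sudo mkdir -p "$stage" /etc/ceph
    sudo touch "$stage/ceph.conf.new"                 # placeholder for the incoming payload
    sudo chown -R ceph-admin "/tmp/cephadm-$fsid"     # let the cephadm user write the payload
    sudo chmod 644 "$stage/ceph.conf.new"
    sudo chown -R 0:0 "$stage/ceph.conf.new"          # hand ownership back to root
    sudo mv "$stage/ceph.conf.new" /etc/ceph/ceph.conf

The same staging sequence repeats immediately below for /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf, matching the two "Updating compute-0:..." cephadm log entries.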
Oct 02 19:09:32 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf
Oct 02 19:09:32 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf
Oct 02 19:09:32 compute-0 sudo[198705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:32 compute-0 sudo[198705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198705]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:32 compute-0 ceph-mon[191910]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:32 compute-0 ceph-mon[191910]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:09:32 compute-0 sudo[198730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config
Oct 02 19:09:32 compute-0 sudo[198730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:32 compute-0 sudo[198730]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 sudo[198755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:33 compute-0 sudo[198755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198755]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 sudo[198823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkwyzbvbazbntoouazxtwjjzuzuazxh ; /usr/bin/python3'
Oct 02 19:09:33 compute-0 sudo[198823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:33 compute-0 sudo[198786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config
Oct 02 19:09:33 compute-0 sudo[198786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198786]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 sudo[198831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:33 compute-0 sudo[198831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198831]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 python3[198829]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:33 compute-0 sudo[198823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:33 compute-0 sudo[198856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf.new
Oct 02 19:09:33 compute-0 sudo[198856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 sudo[198883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:33 compute-0 sudo[198883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:09:33 compute-0 sudo[198908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:33 compute-0 sudo[198908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198908]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 sudo[198977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmjggtzficwtrqlqucgppvhzcwkyxsbf ; /usr/bin/python3'
Oct 02 19:09:33 compute-0 sudo[198977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:33 compute-0 sudo[198937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:33 compute-0 sudo[198937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198937]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 ceph-mon[191910]: Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf
Oct 02 19:09:33 compute-0 sudo[198984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf.new
Oct 02 19:09:33 compute-0 sudo[198984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:33 compute-0 sudo[198984]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:33 compute-0 python3[198982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.048052356 +0000 UTC m=+0.091787103 container create ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.022276138 +0000 UTC m=+0.066010975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:34 compute-0 systemd[1]: Started libpod-conmon-ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56.scope.
Oct 02 19:09:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:34 compute-0 sudo[199043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efadb816deb3802ff665b3f971444ce19a7bcd10b24c48a990990f3dc0dfd436/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efadb816deb3802ff665b3f971444ce19a7bcd10b24c48a990990f3dc0dfd436/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efadb816deb3802ff665b3f971444ce19a7bcd10b24c48a990990f3dc0dfd436/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:34 compute-0 sudo[199043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 sudo[199043]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.199983847 +0000 UTC m=+0.243718644 container init ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.217103506 +0000 UTC m=+0.260838283 container start ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.223433393 +0000 UTC m=+0.267168170 container attach ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:09:34 compute-0 sudo[199077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf.new
Oct 02 19:09:34 compute-0 sudo[199077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 sudo[199077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 sudo[199102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:34 compute-0 sudo[199102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 sudo[199102]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 sudo[199127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf.new
Oct 02 19:09:34 compute-0 sudo[199127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 sudo[199127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 sudo[199171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:34 compute-0 sudo[199171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 sudo[199171]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:09:34 compute-0 great_satoshi[199068]: 
Oct 02 19:09:34 compute-0 great_satoshi[199068]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 19:09:34 compute-0 ceph-mon[191910]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:34 compute-0 systemd[1]: libpod-ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56.scope: Deactivated successfully.
Oct 02 19:09:34 compute-0 conmon[199068]: conmon ebd829c282b2477be3d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56.scope/container/memory.events
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.895042885 +0000 UTC m=+0.938777662 container died ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:09:34 compute-0 sudo[199198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf.new /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.conf
Oct 02 19:09:34 compute-0 sudo[199198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:34 compute-0 podman[199195]: 2025-10-02 19:09:34.917851694 +0000 UTC m=+0.111944592 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:09:34 compute-0 podman[199196]: 2025-10-02 19:09:34.918304136 +0000 UTC m=+0.112718082 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:09:34 compute-0 sudo[199198]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:34 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 19:09:34 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 19:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-efadb816deb3802ff665b3f971444ce19a7bcd10b24c48a990990f3dc0dfd436-merged.mount: Deactivated successfully.
Oct 02 19:09:34 compute-0 podman[199010]: 2025-10-02 19:09:34.974063351 +0000 UTC m=+1.017798088 container remove ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56 (image=quay.io/ceph/ceph:v18, name=great_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:34 compute-0 systemd[1]: libpod-conmon-ebd829c282b2477be3d654dce28d144a33f3b0cd7eafe4fac4e24ec22a24bd56.scope: Deactivated successfully.
Oct 02 19:09:34 compute-0 sudo[198977]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 sudo[199275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:35 compute-0 sudo[199275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199275]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 sudo[199301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 19:09:35 compute-0 sudo[199301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 sudo[199326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:35 compute-0 sudo[199326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 sudo[199351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph
Oct 02 19:09:35 compute-0 sudo[199351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199351]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:35 compute-0 sudo[199400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuqsxxxgqootqimrnkfjzdtbqujnkarb ; /usr/bin/python3'
Oct 02 19:09:35 compute-0 sudo[199400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:35 compute-0 sudo[199399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:35 compute-0 sudo[199399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199399]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 python3[199408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:35 compute-0 sudo[199427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.client.admin.keyring.new
Oct 02 19:09:35 compute-0 sudo[199427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199427]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 podman[199440]: 2025-10-02 19:09:35.67977403 +0000 UTC m=+0.096927347 container create 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:35 compute-0 podman[199440]: 2025-10-02 19:09:35.640823017 +0000 UTC m=+0.057976384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:35 compute-0 systemd[1]: Started libpod-conmon-0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf.scope.
Oct 02 19:09:35 compute-0 sudo[199465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:35 compute-0 sudo[199465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 ansible-async_wrapper.py[198133]: Done in kid B.
Oct 02 19:09:35 compute-0 sudo[199465]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc805ee35395757cb6786d83e5d1482a6fb25d7a6eeb041cf2ff49915bd8499/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc805ee35395757cb6786d83e5d1482a6fb25d7a6eeb041cf2ff49915bd8499/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc805ee35395757cb6786d83e5d1482a6fb25d7a6eeb041cf2ff49915bd8499/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:35 compute-0 podman[199440]: 2025-10-02 19:09:35.848307917 +0000 UTC m=+0.265461284 container init 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:35 compute-0 podman[199440]: 2025-10-02 19:09:35.868695823 +0000 UTC m=+0.285849140 container start 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:09:35 compute-0 podman[199440]: 2025-10-02 19:09:35.875462521 +0000 UTC m=+0.292615838 container attach 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:35 compute-0 ceph-mon[191910]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:09:35 compute-0 ceph-mon[191910]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 02 19:09:35 compute-0 sudo[199495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:35 compute-0 sudo[199495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:35 compute-0 sudo[199495]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:36 compute-0 sudo[199521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199521]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.client.admin.keyring.new
Oct 02 19:09:36 compute-0 sudo[199546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199546]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:36 compute-0 sudo[199613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199613]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 02 19:09:36 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/288006269' entity='client.admin' 
Oct 02 19:09:36 compute-0 sudo[199638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.client.admin.keyring.new
Oct 02 19:09:36 compute-0 sudo[199638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 systemd[1]: libpod-0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf.scope: Deactivated successfully.
Oct 02 19:09:36 compute-0 podman[199440]: 2025-10-02 19:09:36.520126156 +0000 UTC m=+0.937279473 container died 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:09:36 compute-0 sudo[199638]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc805ee35395757cb6786d83e5d1482a6fb25d7a6eeb041cf2ff49915bd8499-merged.mount: Deactivated successfully.
Oct 02 19:09:36 compute-0 podman[199440]: 2025-10-02 19:09:36.611153527 +0000 UTC m=+1.028306804 container remove 0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf (image=quay.io/ceph/ceph:v18, name=romantic_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:09:36 compute-0 sudo[199671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:36 compute-0 systemd[1]: libpod-conmon-0f56252f92c9c06a6ba6babc91985284caf53bec92ac349744fdf110a7f7ffaf.scope: Deactivated successfully.
Oct 02 19:09:36 compute-0 sudo[199671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199400]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.client.admin.keyring.new
Oct 02 19:09:36 compute-0 sudo[199701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199701]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 sudo[199726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:36 compute-0 sudo[199726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:36 compute-0 sudo[199726]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:36 compute-0 ceph-mon[191910]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/288006269' entity='client.admin' 
Oct 02 19:09:36 compute-0 sudo[199773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joosselejowspgirwkapshzvooaponcw ; /usr/bin/python3'
Oct 02 19:09:36 compute-0 sudo[199773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:36 compute-0 sudo[199776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 02 19:09:37 compute-0 sudo[199776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199776]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring
Oct 02 19:09:37 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring
Oct 02 19:09:37 compute-0 sudo[199802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:37 compute-0 python3[199778]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:37 compute-0 sudo[199802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199802]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.183286148 +0000 UTC m=+0.063245313 container create 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:09:37 compute-0 sudo[199828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config
Oct 02 19:09:37 compute-0 sudo[199828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199828]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 systemd[1]: Started libpod-conmon-5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8.scope.
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.153242478 +0000 UTC m=+0.033201733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f1fbe884ec09a155a049ebe30d55ee82ca4130cf3aff4bd4740f26a7e6cecc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f1fbe884ec09a155a049ebe30d55ee82ca4130cf3aff4bd4740f26a7e6cecc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f1fbe884ec09a155a049ebe30d55ee82ca4130cf3aff4bd4740f26a7e6cecc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:37 compute-0 sudo[199865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:37 compute-0 sudo[199865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199865]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.329900289 +0000 UTC m=+0.209859464 container init 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.3459297 +0000 UTC m=+0.225888875 container start 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.352925314 +0000 UTC m=+0.232884479 container attach 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:37 compute-0 sudo[199895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config
Oct 02 19:09:37 compute-0 sudo[199895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199895]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 sudo[199921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:37 compute-0 sudo[199921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 sudo[199946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring.new
Oct 02 19:09:37 compute-0 sudo[199946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199946]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 sudo[199973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:37 compute-0 sudo[199973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[199973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 sudo[200015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:37 compute-0 sudo[200015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[200015]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 ceph-mon[191910]: Updating compute-0:/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring
Oct 02 19:09:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 02 19:09:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/985341800' entity='client.admin' 
Oct 02 19:09:37 compute-0 sudo[200040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:37 compute-0 sudo[200040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:37 compute-0 sudo[200040]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:37 compute-0 systemd[1]: libpod-5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8.scope: Deactivated successfully.
Oct 02 19:09:37 compute-0 podman[199826]: 2025-10-02 19:09:37.950034891 +0000 UTC m=+0.829994066 container died 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f1fbe884ec09a155a049ebe30d55ee82ca4130cf3aff4bd4740f26a7e6cecc-merged.mount: Deactivated successfully.
Oct 02 19:09:38 compute-0 podman[199826]: 2025-10-02 19:09:38.012893102 +0000 UTC m=+0.892852287 container remove 5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8 (image=quay.io/ceph/ceph:v18, name=intelligent_vaughan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:09:38 compute-0 systemd[1]: libpod-conmon-5ded982cea8f4eb8283021dfef12d9cd032705ef4bb3316b3fe758dfc55c5cd8.scope: Deactivated successfully.
Oct 02 19:09:38 compute-0 sudo[199773]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 sudo[200067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring.new
Oct 02 19:09:38 compute-0 sudo[200067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200067]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 sudo[200126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:38 compute-0 sudo[200126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200126]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 sudo[200181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-radmdvjotzaaykxoysrumkolktsfpfed ; /usr/bin/python3'
Oct 02 19:09:38 compute-0 sudo[200181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:38 compute-0 sudo[200169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring.new
Oct 02 19:09:38 compute-0 sudo[200169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200169]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 sudo[200202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:38 compute-0 sudo[200202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200202]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 python3[200196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:38 compute-0 sudo[200227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring.new
Oct 02 19:09:38 compute-0 sudo[200227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200227]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 podman[200233]: 2025-10-02 19:09:38.594062919 +0000 UTC m=+0.085861236 container create f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:38 compute-0 systemd[1]: Started libpod-conmon-f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c.scope.
Oct 02 19:09:38 compute-0 sudo[200262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:38 compute-0 sudo[200262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200262]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 podman[200233]: 2025-10-02 19:09:38.568007025 +0000 UTC m=+0.059805402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ff3fee200c712dafb5a2115964807428f8fb864aacfd98442947fde86499f7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ff3fee200c712dafb5a2115964807428f8fb864aacfd98442947fde86499f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ff3fee200c712dafb5a2115964807428f8fb864aacfd98442947fde86499f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:38 compute-0 podman[200233]: 2025-10-02 19:09:38.76614831 +0000 UTC m=+0.257946647 container init f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:09:38 compute-0 podman[200233]: 2025-10-02 19:09:38.780046375 +0000 UTC m=+0.271844682 container start f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:09:38 compute-0 sudo[200294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-6019f664-a1c2-5955-8391-692cb79a59f9/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring.new /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/config/ceph.client.admin.keyring
Oct 02 19:09:38 compute-0 podman[200233]: 2025-10-02 19:09:38.7855374 +0000 UTC m=+0.277335707 container attach f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:38 compute-0 sudo[200294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200294]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 26967903-96a9-480f-aa20-ac40a16b352f (Updating crash deployment (+1 -> 1))
Oct 02 19:09:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 19:09:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:09:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:38 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 02 19:09:38 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 02 19:09:38 compute-0 ceph-mon[191910]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/985341800' entity='client.admin' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 19:09:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:38 compute-0 sudo[200320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:38 compute-0 sudo[200320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:38 compute-0 sudo[200320]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:39 compute-0 sudo[200345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:39 compute-0 sudo[200345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:39 compute-0 sudo[200345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:39 compute-0 sudo[200401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:39 compute-0 sudo[200401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:39 compute-0 sudo[200401]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:39 compute-0 podman[200386]: 2025-10-02 19:09:39.235593573 +0000 UTC m=+0.096260660 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Oct 02 19:09:39 compute-0 podman[200389]: 2025-10-02 19:09:39.271587178 +0000 UTC m=+0.125457337 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 02 19:09:39 compute-0 sudo[200452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:39 compute-0 sudo[200452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 02 19:09:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849255117' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 19:09:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:39 compute-0 podman[200520]: 2025-10-02 19:09:39.918575786 +0000 UTC m=+0.097660927 container create 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:09:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 02 19:09:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:09:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/849255117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 19:09:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 02 19:09:39 compute-0 dazzling_lehmann[200290]: set require_min_compat_client to mimic
Oct 02 19:09:39 compute-0 ceph-mon[191910]: Deploying daemon crash.compute-0 on compute-0
Oct 02 19:09:39 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/849255117' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 02 19:09:39 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 02 19:09:39 compute-0 podman[200520]: 2025-10-02 19:09:39.877224269 +0000 UTC m=+0.056309450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:39 compute-0 systemd[1]: libpod-f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c.scope: Deactivated successfully.
Oct 02 19:09:39 compute-0 podman[200233]: 2025-10-02 19:09:39.98650307 +0000 UTC m=+1.478301377 container died f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:40 compute-0 systemd[1]: Started libpod-conmon-3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610.scope.
Oct 02 19:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-80ff3fee200c712dafb5a2115964807428f8fb864aacfd98442947fde86499f7-merged.mount: Deactivated successfully.
Oct 02 19:09:40 compute-0 podman[200520]: 2025-10-02 19:09:40.097112616 +0000 UTC m=+0.276197767 container init 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:40 compute-0 podman[200520]: 2025-10-02 19:09:40.114172654 +0000 UTC m=+0.293257785 container start 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:09:40 compute-0 podman[200233]: 2025-10-02 19:09:40.118858067 +0000 UTC m=+1.610656384 container remove f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:40 compute-0 sad_germain[200544]: 167 167
Oct 02 19:09:40 compute-0 systemd[1]: libpod-3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610.scope: Deactivated successfully.
Oct 02 19:09:40 compute-0 podman[200520]: 2025-10-02 19:09:40.129885107 +0000 UTC m=+0.308970258 container attach 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:40 compute-0 podman[200520]: 2025-10-02 19:09:40.130686928 +0000 UTC m=+0.309772059 container died 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:40 compute-0 systemd[1]: libpod-conmon-f99334533ebaed9e51a5f94f0e7edb3cdbf17b8809d31bfce883c9bf83a2554c.scope: Deactivated successfully.
Oct 02 19:09:40 compute-0 sudo[200181]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed92ac0e24b4b21aa2fcb9946b30b5025fb23f205ba27740a97547443f17595c-merged.mount: Deactivated successfully.
Oct 02 19:09:40 compute-0 podman[200520]: 2025-10-02 19:09:40.204275701 +0000 UTC m=+0.383360812 container remove 3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:09:40 compute-0 systemd[1]: libpod-conmon-3e30c5976109052dc30cee0b2f461f743de09a94ead0e0f1b7dc99ce95f5d610.scope: Deactivated successfully.
Oct 02 19:09:40 compute-0 systemd[1]: Reloading.
Oct 02 19:09:40 compute-0 systemd-rc-local-generator[200593]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:40 compute-0 systemd-sysv-generator[200596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:40 compute-0 sudo[200627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssssturvdczigdljfpulcodpzsoieqen ; /usr/bin/python3'
Oct 02 19:09:40 compute-0 sudo[200627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:40 compute-0 systemd[1]: Reloading.
Oct 02 19:09:40 compute-0 systemd-rc-local-generator[200653]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:40 compute-0 systemd-sysv-generator[200656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:40 compute-0 ceph-mon[191910]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/849255117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 02 19:09:40 compute-0 ceph-mon[191910]: osdmap e3: 0 total, 0 up, 0 in
Oct 02 19:09:40 compute-0 python3[200635]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:41 compute-0 podman[200670]: 2025-10-02 19:09:41.087205096 +0000 UTC m=+0.099562296 container create 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:09:41 compute-0 podman[200670]: 2025-10-02 19:09:41.031320648 +0000 UTC m=+0.043677898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:41 compute-0 systemd[1]: Started libpod-conmon-3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e.scope.
Oct 02 19:09:41 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216c618aabbf2b8f75f6083b2c1cd2f679bd820734b3b21012b7c8e55e100cfa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216c618aabbf2b8f75f6083b2c1cd2f679bd820734b3b21012b7c8e55e100cfa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/216c618aabbf2b8f75f6083b2c1cd2f679bd820734b3b21012b7c8e55e100cfa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 podman[200670]: 2025-10-02 19:09:41.249963412 +0000 UTC m=+0.262320632 container init 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:09:41 compute-0 podman[200670]: 2025-10-02 19:09:41.270307116 +0000 UTC m=+0.282664276 container start 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:41 compute-0 podman[200670]: 2025-10-02 19:09:41.277027823 +0000 UTC m=+0.289385103 container attach 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:41 compute-0 podman[200733]: 2025-10-02 19:09:41.644053875 +0000 UTC m=+0.106714965 container create 56c88518a73e83c048700a04f8f0580a426156c1ca6149396b804f52eadfa05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:41 compute-0 podman[200733]: 2025-10-02 19:09:41.582069456 +0000 UTC m=+0.044730586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bde64f257f0da9bf48436b232bc8329e46d51b26e5a4622e1d856fbc79231/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bde64f257f0da9bf48436b232bc8329e46d51b26e5a4622e1d856fbc79231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bde64f257f0da9bf48436b232bc8329e46d51b26e5a4622e1d856fbc79231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bde64f257f0da9bf48436b232bc8329e46d51b26e5a4622e1d856fbc79231/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:41 compute-0 podman[200733]: 2025-10-02 19:09:41.739994835 +0000 UTC m=+0.202655935 container init 56c88518a73e83c048700a04f8f0580a426156c1ca6149396b804f52eadfa05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:09:41 compute-0 podman[200733]: 2025-10-02 19:09:41.750799669 +0000 UTC m=+0.213460749 container start 56c88518a73e83c048700a04f8f0580a426156c1ca6149396b804f52eadfa05c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:09:41 compute-0 bash[200733]: 56c88518a73e83c048700a04f8f0580a426156c1ca6149396b804f52eadfa05c
Oct 02 19:09:41 compute-0 systemd[1]: Started Ceph crash.compute-0 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:09:41 compute-0 sudo[200452]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 26967903-96a9-480f-aa20-ac40a16b352f (Updating crash deployment (+1 -> 1))
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 26967903-96a9-480f-aa20-ac40a16b352f (Updating crash deployment (+1 -> 1)) in 3 seconds
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 84b69c6e-e978-4c4e-b6aa-c13d2159fdd5 does not exist
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 62f0dfeb-e7b3-4be8-9456-f54a5eb78e7d (Updating mgr deployment (+1 -> 2))
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ztntmm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ztntmm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ztntmm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 19:09:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:09:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ztntmm on compute-0
Oct 02 19:09:41 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ztntmm on compute-0
Oct 02 19:09:41 compute-0 sudo[200770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:41 compute-0 sudo[200770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:41 compute-0 sudo[200770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:41 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 02 19:09:42 compute-0 sudo[200787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:42 compute-0 sudo[200787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200787]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 sudo[200820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:42 compute-0 sudo[200820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200820]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 sudo[200831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:42 compute-0 sudo[200831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200831]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 sudo[200870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:42 compute-0 sudo[200870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200870]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 sudo[200889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:42 compute-0 sudo[200889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200889]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.188+0000 7f40318b2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.188+0000 7f40318b2640 -1 AuthRegistry(0x7f402c066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.190+0000 7f40318b2640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.190+0000 7f40318b2640 -1 AuthRegistry(0x7f40318b1000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.191+0000 7f402affd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: 2025-10-02T19:09:42.191+0000 7f40318b2640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 02 19:09:42 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-crash-compute-0[200764]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 02 19:09:42 compute-0 sudo[200918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 02 19:09:42 compute-0 sudo[200918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:09:42 compute-0 sudo[200953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:42 compute-0 sudo[200918]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: [cephadm INFO root] Added host compute-0
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 02 19:09:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 02 19:09:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 admiring_mccarthy[200687]: Added host 'compute-0' with addr '192.168.122.100'
Oct 02 19:09:42 compute-0 admiring_mccarthy[200687]: Scheduled mon update...
Oct 02 19:09:42 compute-0 admiring_mccarthy[200687]: Scheduled mgr update...
Oct 02 19:09:42 compute-0 admiring_mccarthy[200687]: Scheduled osd.default_drive_group update...
Oct 02 19:09:42 compute-0 systemd[1]: libpod-3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e.scope: Deactivated successfully.
Oct 02 19:09:42 compute-0 podman[200670]: 2025-10-02 19:09:42.662923559 +0000 UTC m=+1.675280749 container died 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-216c618aabbf2b8f75f6083b2c1cd2f679bd820734b3b21012b7c8e55e100cfa-merged.mount: Deactivated successfully.
Oct 02 19:09:42 compute-0 podman[200670]: 2025-10-02 19:09:42.750615183 +0000 UTC m=+1.762972353 container remove 3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e (image=quay.io/ceph/ceph:v18, name=admiring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:09:42 compute-0 systemd[1]: libpod-conmon-3b75c0d85201f63f5ae45bbfe084db58e7fc40879beb6f63715881776831b79e.scope: Deactivated successfully.
Oct 02 19:09:42 compute-0 sudo[200627]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.806342117 +0000 UTC m=+0.106477408 container create bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:09:42 compute-0 ceph-mon[191910]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ztntmm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ztntmm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:42 compute-0 ceph-mon[191910]: Deploying daemon mgr.compute-0.ztntmm on compute-0
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.779685877 +0000 UTC m=+0.079821178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:42 compute-0 systemd[1]: Started libpod-conmon-bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2.scope.
Oct 02 19:09:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.940146162 +0000 UTC m=+0.240281493 container init bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.949669592 +0000 UTC m=+0.249804843 container start bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.953942515 +0000 UTC m=+0.254077856 container attach bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:42 compute-0 dreamy_chebyshev[201068]: 167 167
Oct 02 19:09:42 compute-0 systemd[1]: libpod-bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2.scope: Deactivated successfully.
Oct 02 19:09:42 compute-0 podman[201040]: 2025-10-02 19:09:42.962823298 +0000 UTC m=+0.262958589 container died bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5177ef50a81f42a2024608e2c5c6b3047a9337f946fcaa0ab1de4bf112ebca2b-merged.mount: Deactivated successfully.
Oct 02 19:09:43 compute-0 podman[201040]: 2025-10-02 19:09:43.045307085 +0000 UTC m=+0.345442376 container remove bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:09:43 compute-0 systemd[1]: libpod-conmon-bfe693c27032b463adc1ba120e4a3fbb0ee3e35b7c0b5dbbc84e4f7559fceaa2.scope: Deactivated successfully.
Oct 02 19:09:43 compute-0 sudo[201108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjlmssxyzsxozgnwsbvyhoaslrvtwntx ; /usr/bin/python3'
Oct 02 19:09:43 compute-0 sudo[201108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:43 compute-0 systemd[1]: Reloading.
Oct 02 19:09:43 compute-0 python3[201112]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:43 compute-0 systemd-rc-local-generator[201139]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:43 compute-0 systemd-sysv-generator[201143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:43 compute-0 podman[201146]: 2025-10-02 19:09:43.370911128 +0000 UTC m=+0.048325451 container create 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:09:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:43 compute-0 podman[201146]: 2025-10-02 19:09:43.352090364 +0000 UTC m=+0.029504687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:09:43 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 1 completed events
Oct 02 19:09:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:09:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:43 compute-0 systemd[1]: Started libpod-conmon-1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa.scope.
Oct 02 19:09:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5900090c21001bc4bad98ed4c58894dcc756e4ca2ede454e4073d2c1b24c2e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5900090c21001bc4bad98ed4c58894dcc756e4ca2ede454e4073d2c1b24c2e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5900090c21001bc4bad98ed4c58894dcc756e4ca2ede454e4073d2c1b24c2e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:43 compute-0 systemd[1]: Reloading.
Oct 02 19:09:43 compute-0 podman[201146]: 2025-10-02 19:09:43.675077708 +0000 UTC m=+0.352492091 container init 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:09:43 compute-0 podman[201146]: 2025-10-02 19:09:43.697056436 +0000 UTC m=+0.374470739 container start 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:09:43 compute-0 podman[201146]: 2025-10-02 19:09:43.703161346 +0000 UTC m=+0.380575729 container attach 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:09:43 compute-0 systemd-rc-local-generator[201200]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:43 compute-0 systemd-sysv-generator[201203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:43 compute-0 ceph-mon[191910]: Added host compute-0
Oct 02 19:09:43 compute-0 ceph-mon[191910]: Saving service mon spec with placement compute-0
Oct 02 19:09:43 compute-0 ceph-mon[191910]: Saving service mgr spec with placement compute-0
Oct 02 19:09:43 compute-0 ceph-mon[191910]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 02 19:09:43 compute-0 ceph-mon[191910]: Saving service osd.default_drive_group spec with placement compute-0
Oct 02 19:09:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:44 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ztntmm for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:09:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:44 compute-0 podman[201230]: 2025-10-02 19:09:44.210439432 +0000 UTC m=+0.097442970 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:09:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 19:09:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776954319' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:09:44 compute-0 amazing_kepler[201166]: {"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":90,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-02T19:08:09.836492+0000","services":{}},"progress_events":{"62f0dfeb-e7b3-4be8-9456-f54a5eb78e7d":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 02 19:09:44 compute-0 systemd[1]: libpod-1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa.scope: Deactivated successfully.
Oct 02 19:09:44 compute-0 podman[201294]: 2025-10-02 19:09:44.475213298 +0000 UTC m=+0.108372758 container create c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:09:44 compute-0 podman[201146]: 2025-10-02 19:09:44.485631452 +0000 UTC m=+1.163045785 container died 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:09:44 compute-0 podman[201294]: 2025-10-02 19:09:44.40525932 +0000 UTC m=+0.038418780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5900090c21001bc4bad98ed4c58894dcc756e4ca2ede454e4073d2c1b24c2e4-merged.mount: Deactivated successfully.
Oct 02 19:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b8f5a7d4962844239ef1f3e330a2bead2b769066a1a89c655c72642bd5a30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b8f5a7d4962844239ef1f3e330a2bead2b769066a1a89c655c72642bd5a30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b8f5a7d4962844239ef1f3e330a2bead2b769066a1a89c655c72642bd5a30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467b8f5a7d4962844239ef1f3e330a2bead2b769066a1a89c655c72642bd5a30/merged/var/lib/ceph/mgr/ceph-compute-0.ztntmm supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:44 compute-0 podman[201306]: 2025-10-02 19:09:44.798293405 +0000 UTC m=+0.352567533 container remove 1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa (image=quay.io/ceph/ceph:v18, name=amazing_kepler, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:09:44 compute-0 systemd[1]: libpod-conmon-1c0771122829afd79fae0d2479e5cfbfd53d6793104a6eba940485a7417cabfa.scope: Deactivated successfully.
Oct 02 19:09:44 compute-0 podman[201294]: 2025-10-02 19:09:44.816949035 +0000 UTC m=+0.450108505 container init c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:09:44 compute-0 podman[201294]: 2025-10-02 19:09:44.824619947 +0000 UTC m=+0.457779397 container start c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:09:44 compute-0 bash[201294]: c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6
Oct 02 19:09:44 compute-0 systemd[1]: Started Ceph mgr.compute-0.ztntmm for 6019f664-a1c2-5955-8391-692cb79a59f9.
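
The unit started here follows cephadm's templated naming, ceph-<fsid>@<type>.<id>.service, which is why the same daemon shows up below as ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.ztntmm.service. A trivial sketch of that composition (the helper name is made up):

    def cephadm_unit_name(fsid: str, daemon: str) -> str:
        # cephadm's systemd template: ceph-<fsid>@<type>.<id>.service
        return f"ceph-{fsid}@{daemon}.service"

    print(cephadm_unit_name("6019f664-a1c2-5955-8391-692cb79a59f9",
                            "mgr.compute-0.ztntmm"))
    # ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.ztntmm.service
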
Oct 02 19:09:44 compute-0 sudo[201108]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:44 compute-0 podman[201326]: 2025-10-02 19:09:44.870766369 +0000 UTC m=+0.162556951 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:09:44 compute-0 ceph-mgr[201340]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:09:44 compute-0 ceph-mgr[201340]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 19:09:44 compute-0 ceph-mgr[201340]: pidfile_write: ignore empty --pid-file
Oct 02 19:09:44 compute-0 ceph-mon[191910]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/776954319' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:09:44 compute-0 sudo[200953]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:45 compute-0 ceph-mgr[201340]: mgr[py] Loading python module 'alerts'
Oct 02 19:09:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:45 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 62f0dfeb-e7b3-4be8-9456-f54a5eb78e7d (Updating mgr deployment (+1 -> 2))
Oct 02 19:09:45 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 62f0dfeb-e7b3-4be8-9456-f54a5eb78e7d (Updating mgr deployment (+1 -> 2)) in 3 seconds
Oct 02 19:09:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:45 compute-0 sudo[201372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:45 compute-0 sudo[201372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:45 compute-0 sudo[201372]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:45 compute-0 sudo[201397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:09:45 compute-0 sudo[201397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:45 compute-0 ceph-mgr[201340]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:09:45 compute-0 ceph-mgr[201340]: mgr[py] Loading python module 'balancer'
Oct 02 19:09:45 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm[201323]: 2025-10-02T19:09:45.404+0000 7f6d6e016140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 19:09:45 compute-0 sudo[201397]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 sudo[201422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:45 compute-0 sudo[201422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:45 compute-0 sudo[201422]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 sudo[201447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:45 compute-0 sudo[201447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:45 compute-0 sudo[201447]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 ceph-mgr[201340]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:09:45 compute-0 ceph-mgr[201340]: mgr[py] Loading python module 'cephadm'
Oct 02 19:09:45 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm[201323]: 2025-10-02T19:09:45.699+0000 7f6d6e016140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 19:09:45 compute-0 sudo[201472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:45 compute-0 sudo[201472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:45 compute-0 sudo[201472]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:45 compute-0 sudo[201497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
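
The sudo COMMAND above is the mgr's serve loop refreshing its inventory: it runs the cephadm binary it copied under /var/lib/ceph/<fsid>/ with `ls`, which prints a JSON array describing every daemon on the host. A sketch of the same query, assuming a cephadm CLI on PATH and passwordless sudo as in the ceph-admin sessions logged here:

    import json
    import subprocess

    # Equivalent of the audited COMMAND above, without the pinned
    # --image digest and --timeout that cephadm's mgr adds.
    out = subprocess.check_output(["sudo", "cephadm", "ls"], text=True)
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))
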
Oct 02 19:09:45 compute-0 sudo[201497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:46 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:46 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:46 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:46 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:46 compute-0 podman[201588]: 2025-10-02 19:09:46.742201311 +0000 UTC m=+0.154031968 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:09:46 compute-0 podman[201588]: 2025-10-02 19:09:46.871316673 +0000 UTC m=+0.283147330 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:09:47 compute-0 ceph-mon[191910]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:47 compute-0 sudo[201497]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0349a289-9560-42ba-aae1-da5aaca8c35e does not exist
Oct 02 19:09:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 02 19:09:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:47 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 301f0720-6329-4963-9a1a-d145416390e1 (Updating mgr deployment (-1 -> 1))
Oct 02 19:09:47 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ztntmm from compute-0 -- ports [8765]
Oct 02 19:09:47 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ztntmm from compute-0 -- ports [8765]
Oct 02 19:09:47 compute-0 sudo[201678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:47 compute-0 sudo[201678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:47 compute-0 sudo[201678]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:47 compute-0 ceph-mgr[201340]: mgr[py] Loading python module 'crash'
Oct 02 19:09:47 compute-0 sudo[201703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:47 compute-0 sudo[201703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:47 compute-0 sudo[201703]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:48 compute-0 sudo[201728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:48 compute-0 sudo[201728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:48 compute-0 sudo[201728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:48 compute-0 sudo[201758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --name mgr.compute-0.ztntmm --force --tcp-ports 8765
Oct 02 19:09:48 compute-0 sudo[201758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:48 compute-0 podman[201752]: 2025-10-02 19:09:48.140836933 +0000 UTC m=+0.117557549 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:09:48 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm[201323]: 2025-10-02T19:09:48.155+0000 7f6d6e016140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:09:48 compute-0 ceph-mgr[201340]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 19:09:48 compute-0 ceph-mgr[201340]: mgr[py] Loading python module 'dashboard'
Oct 02 19:09:48 compute-0 ceph-mon[191910]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 ceph-mon[191910]: Removing daemon mgr.compute-0.ztntmm from compute-0 -- ports [8765]
Oct 02 19:09:48 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.ztntmm for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:09:48 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 2 completed events
Oct 02 19:09:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:09:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:48 compute-0 podman[201862]: 2025-10-02 19:09:48.889511811 +0000 UTC m=+0.109235771 container died c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-467b8f5a7d4962844239ef1f3e330a2bead2b769066a1a89c655c72642bd5a30-merged.mount: Deactivated successfully.
Oct 02 19:09:48 compute-0 podman[201862]: 2025-10-02 19:09:48.955749511 +0000 UTC m=+0.175473501 container remove c9dd5c9f48dd74ccc3cadc5532c811cd23938dcf9e0f9132ef3e700d5eba99b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:09:48 compute-0 bash[201862]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-ztntmm
Oct 02 19:09:48 compute-0 systemd[1]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.ztntmm.service: Main process exited, code=exited, status=143/n/a
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:49 compute-0 systemd[1]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.ztntmm.service: Failed with result 'exit-code'.
Oct 02 19:09:49 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.ztntmm for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:09:49 compute-0 systemd[1]: ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.ztntmm.service: Consumed 5.532s CPU time.
Oct 02 19:09:49 compute-0 systemd[1]: Reloading.
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:49 compute-0 systemd-sysv-generator[201944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:49 compute-0 systemd-rc-local-generator[201941]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:49 compute-0 sudo[201758]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ztntmm
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ztntmm
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ztntmm"} v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.ztntmm"}]: dispatch
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ztntmm"}]': finished
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 301f0720-6329-4963-9a1a-d145416390e1 (Updating mgr deployment (-1 -> 1))
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 301f0720-6329-4963-9a1a-d145416390e1 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ccfe0115-7ae3-40c2-a23f-c01df1cf733a does not exist
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:09:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:09:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:49 compute-0 sudo[201954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:49 compute-0 sudo[201954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:49 compute-0 sudo[201954]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:50 compute-0 sudo[201979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:09:50 compute-0 sudo[201979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:50 compute-0 sudo[201979]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:50 compute-0 sudo[202004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:09:50 compute-0 sudo[202004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:09:50 compute-0 sudo[202004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:50 compute-0 sudo[202029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:09:50 compute-0 sudo[202029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
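
Here the osd.default_drive_group spec is applied: three pre-created logical volumes are handed to `ceph-volume lvm batch`, with --no-systemd because cephadm manages the units itself (as above) and --yes to skip the interactive report. A stripped-down sketch of the same call, minus the --fsid/--config-json plumbing cephadm wires in, to be run as root:

    import subprocess

    # One bluestore OSD per LV, exactly the device list logged above.
    lvs = ["/dev/ceph_vg0/ceph_lv0",
           "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--yes", "--no-systemd"],
        check=True,
    )

ceph-volume's own progress then appears below as output from the short-lived container podman named thirsty_lehmann.
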
Oct 02 19:09:50 compute-0 ceph-mon[191910]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:50 compute-0 ceph-mon[191910]: Removing key for mgr.compute-0.ztntmm
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.ztntmm"}]: dispatch
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ztntmm"}]': finished
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:09:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:09:50 compute-0 podman[202089]: 2025-10-02 19:09:50.869591477 +0000 UTC m=+0.042953070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:50 compute-0 podman[202089]: 2025-10-02 19:09:50.967988402 +0000 UTC m=+0.141350015 container create 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:51 compute-0 systemd[1]: Started libpod-conmon-7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662.scope.
Oct 02 19:09:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:51 compute-0 podman[202089]: 2025-10-02 19:09:51.210942944 +0000 UTC m=+0.384304607 container init 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:09:51 compute-0 podman[202089]: 2025-10-02 19:09:51.229317717 +0000 UTC m=+0.402679330 container start 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:09:51 compute-0 podman[202089]: 2025-10-02 19:09:51.235634553 +0000 UTC m=+0.408996216 container attach 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:09:51 compute-0 charming_bouman[202105]: 167 167
Oct 02 19:09:51 compute-0 systemd[1]: libpod-7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662.scope: Deactivated successfully.
Oct 02 19:09:51 compute-0 podman[202089]: 2025-10-02 19:09:51.243868329 +0000 UTC m=+0.417229932 container died 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5667e64a3b65298c6c8598af383d8d3386fc2756b107004c6714de38d427d51d-merged.mount: Deactivated successfully.
Oct 02 19:09:51 compute-0 podman[202089]: 2025-10-02 19:09:51.312165483 +0000 UTC m=+0.485527066 container remove 7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bouman, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:09:51 compute-0 systemd[1]: libpod-conmon-7eac89ee54e726eb080693973bc3f6b151c1f2ff4e5f187cdb76d3ecb03a1662.scope: Deactivated successfully.
Oct 02 19:09:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:51 compute-0 podman[202128]: 2025-10-02 19:09:51.587000573 +0000 UTC m=+0.100040999 container create 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:09:51 compute-0 podman[202128]: 2025-10-02 19:09:51.552717603 +0000 UTC m=+0.065758069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:09:51 compute-0 systemd[1]: Started libpod-conmon-4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91.scope.
Oct 02 19:09:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:51 compute-0 podman[202128]: 2025-10-02 19:09:51.76427171 +0000 UTC m=+0.277312196 container init 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:09:51 compute-0 podman[202128]: 2025-10-02 19:09:51.79813644 +0000 UTC m=+0.311176836 container start 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:09:51 compute-0 podman[202128]: 2025-10-02 19:09:51.804352323 +0000 UTC m=+0.317392799 container attach 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:09:52 compute-0 ceph-mon[191910]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: --> relative data size: 1.0
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48
Oct 02 19:09:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:53 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 3 completed events
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:09:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"} v 0) v1
Oct 02 19:09:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354939245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}]: dispatch
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:09:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354939245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}]': finished
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 02 19:09:53 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 02 19:09:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:09:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:09:53 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:09:53 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 19:09:54 compute-0 lvm[202209]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 19:09:54 compute-0 lvm[202209]: VG ceph_vg0 finished
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 02 19:09:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 19:09:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530022976' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]:  stderr: got monmap epoch 1
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: --> Creating keyring file for osd.0
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 02 19:09:54 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48 --setuser ceph --setgroup ceph
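
Note the --osd-uuid here: it is the same dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48 registered with `osd new` a moment earlier, which is how activation later matches this LV back to osd.0. The essential mkfs call, trimmed of the monmap/keyfile/osdspec-affinity plumbing shown above, would look like:

    import subprocess

    osd_id = "0"
    osd_uuid = "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"  # from "osd new" above

    # Stamps the uuid into the bluestore label on the LV so the osd id
    # and the device stay paired across restarts.
    subprocess.run(
        ["ceph-osd", "--cluster", "ceph",
         "--osd-objectstore", "bluestore", "--mkfs",
         "-i", osd_id,
         "--osd-data", f"/var/lib/ceph/osd/ceph-{osd_id}/",
         "--osd-uuid", osd_uuid,
         "--setuser", "ceph", "--setgroup", "ceph"],
        check=True,
    )
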
Oct 02 19:09:54 compute-0 ceph-mon[191910]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:09:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/354939245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}]: dispatch
Oct 02 19:09:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/354939245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}]': finished
Oct 02 19:09:54 compute-0 ceph-mon[191910]: osdmap e4: 1 total, 0 up, 1 in
Oct 02 19:09:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:09:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2530022976' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:09:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:55 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 19:09:55 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 19:09:55 compute-0 ceph-mon[191910]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 19:09:55 compute-0 ceph-mon[191910]: Cluster is now healthy
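With osd.0 recorded in the map, the TOO_FEW_OSDS check (OSD count 0 < osd_pool_default_size 1) clears and the cluster reports healthy. A quick confirmation from any node with an admin keyring:

    # One-shot summary: health, mon quorum, mgr, osdmap and pgmap counters
    ceph -s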
Oct 02 19:09:56 compute-0 ceph-mon[191910]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:54.620+0000 7feabac10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:54.621+0000 7feabac10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:54.621+0000 7feabac10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:54.621+0000 7feabac10740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
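The repeated _read_bdev_label / _read_fsid stderr above is ceph-osd --mkfs probing a logical volume that has no BlueStore label yet; on a freshly created LV this is expected noise, and mkfs goes on to write the label itself. A hedged way to confirm the label afterwards, run in the same ceph container:

    # Dump the BlueStore label (osd_uuid, size, birth time) that mkfs wrote to the device
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0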
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 02 19:09:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
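Everything thirsty_lehmann ran for osd.0 (gen-print-key, tmpfs mount, osd new, mkfs, prime-osd-dir, symlink and chown, activate) is the expansion of a single ceph-volume call. A minimal sketch of the equivalent invocation, assuming the same pre-created VG/LV:

    # Prepare and activate a BlueStore OSD on an existing logical volume in one step
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0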
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:09:57 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 82844b2c-c78f-4ec2-a159-b058e47d1cbd
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"} v 0) v1
Oct 02 19:09:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1948687973' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}]: dispatch
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:09:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1948687973' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}]': finished
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 02 19:09:57 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:09:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:09:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:09:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:09:57 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:09:57 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:09:58 compute-0 lvm[203170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 19:09:58 compute-0 lvm[203170]: VG ceph_vg1 finished
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 02 19:09:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 19:09:58 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311154014' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]:  stderr: got monmap epoch 1
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: --> Creating keyring file for osd.1
Oct 02 19:09:58 compute-0 ceph-mon[191910]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1948687973' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}]: dispatch
Oct 02 19:09:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1948687973' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}]': finished
Oct 02 19:09:58 compute-0 ceph-mon[191910]: osdmap e5: 2 total, 0 up, 2 in
Oct 02 19:09:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:09:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:09:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2311154014' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 02 19:09:58 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 82844b2c-c78f-4ec2-a159-b058e47d1cbd --setuser ceph --setgroup ceph
Oct 02 19:09:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:09:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:09:59 compute-0 podman[157186]: time="2025-10-02T19:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:09:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25439 "" "Go-http-client/1.1"
Oct 02 19:09:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4851 "" "Go-http-client/1.1"
Oct 02 19:10:00 compute-0 ceph-mon[191910]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:01 compute-0 openstack_network_exporter[159337]: ERROR   19:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:10:01 compute-0 openstack_network_exporter[159337]: ERROR   19:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:10:01 compute-0 openstack_network_exporter[159337]: ERROR   19:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:10:01 compute-0 openstack_network_exporter[159337]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:10:01 compute-0 openstack_network_exporter[159337]: ERROR   19:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
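The exporter errors mean no OVS/OVN control sockets were found: the ovn-northd probes are expected to fail on a compute node (northd runs on the control plane), while the ovsdb-server and datapath failures suggest openvswitch is either not running yet or exposing its sockets somewhere other than the paths the exporter container mounts. A hedged check against those mount paths:

    # Control sockets the exporter probes for, per its volume mounts
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null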
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:58.849+0000 7f3e8ac91740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:58.850+0000 7f3e8ac91740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:58.850+0000 7f3e8ac91740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:09:58.850+0000 7f3e8ac91740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:10:01 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new afe0acfe-daf6-4901-80df-bc50bc9ae508
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"} v 0) v1
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1456908578' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1456908578' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}]': finished
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:02 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:02 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:02 compute-0 lvm[204138]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 19:10:02 compute-0 lvm[204138]: VG ceph_vg2 finished
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:02 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 02 19:10:02 compute-0 ceph-mon[191910]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1456908578' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1456908578' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}]': finished
Oct 02 19:10:02 compute-0 ceph-mon[191910]: osdmap e6: 3 total, 0 up, 3 in
Oct 02 19:10:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
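At osdmap e6 all three OSDs are created but none has booted (3 total, 0 up, 3 in), which is why the mgr's metadata queries keep returning ENOENT. The usual way to watch them come up:

    # CRUSH hierarchy with per-OSD up/down and in/out state
    ceph osd tree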
Oct 02 19:10:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 02 19:10:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52639054' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:10:03 compute-0 thirsty_lehmann[202145]:  stderr: got monmap epoch 1
Oct 02 19:10:03 compute-0 thirsty_lehmann[202145]: --> Creating keyring file for osd.2
Oct 02 19:10:03 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 02 19:10:03 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 02 19:10:03 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid afe0acfe-daf6-4901-80df-bc50bc9ae508 --setuser ceph --setgroup ceph
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:10:03
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] No pools available
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:03 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/52639054' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 19:10:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:04 compute-0 ceph-mon[191910]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:05 compute-0 podman[204825]: 2025-10-02 19:10:05.246187917 +0000 UTC m=+0.098962401 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:10:05 compute-0 podman[204786]: 2025-10-02 19:10:05.258460509 +0000 UTC m=+0.112594118 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 19:10:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:10:03.281+0000 7f6670665740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:10:03.282+0000 7f6670665740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:10:03.282+0000 7f6670665740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]:  stderr: 2025-10-02T19:10:03.282+0000 7f6670665740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 02 19:10:05 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 02 19:10:06 compute-0 thirsty_lehmann[202145]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 02 19:10:06 compute-0 systemd[1]: libpod-4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91.scope: Deactivated successfully.
Oct 02 19:10:06 compute-0 systemd[1]: libpod-4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91.scope: Consumed 8.738s CPU time.
Oct 02 19:10:06 compute-0 podman[205118]: 2025-10-02 19:10:06.19642085 +0000 UTC m=+0.046376720 container died 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b43dcf45d96aa132f4c405d4fd34536045822fe1cc7f0cd2e8b6b7f4e68abd-merged.mount: Deactivated successfully.
Oct 02 19:10:06 compute-0 podman[205118]: 2025-10-02 19:10:06.305980878 +0000 UTC m=+0.155936748 container remove 4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:10:06 compute-0 systemd[1]: libpod-conmon-4c39563477d75b1d40488a9ee5dafd537cd4845b9f1eefc8cce2d22ed97abe91.scope: Deactivated successfully.
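thirsty_lehmann was never a service: cephadm executes each ceph-volume batch in a throwaway container, and podman assigns a random name (thirsty_lehmann, funny_elgamal, strange_moore) when none is given, so the died/remove events above are simply that helper exiting after its 8.7 s of CPU time. A hedged way to see such short-lived helpers on the host:

    # Recently exited containers, including cephadm's auto-named helpers
    podman ps -a --last 5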
Oct 02 19:10:06 compute-0 sudo[202029]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:06 compute-0 sudo[205133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:06 compute-0 sudo[205133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:06 compute-0 sudo[205133]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:06 compute-0 sudo[205158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:06 compute-0 sudo[205158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:06 compute-0 sudo[205158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:06 compute-0 sudo[205183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:06 compute-0 ceph-mon[191910]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:06 compute-0 sudo[205183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:06 compute-0 sudo[205183]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:06 compute-0 sudo[205208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:10:06 compute-0 sudo[205208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.41450451 +0000 UTC m=+0.066833005 container create 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.38132335 +0000 UTC m=+0.033651885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:07 compute-0 systemd[1]: Started libpod-conmon-36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966.scope.
Oct 02 19:10:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.548878607 +0000 UTC m=+0.201207082 container init 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.5610679 +0000 UTC m=+0.213396355 container start 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.565739234 +0000 UTC m=+0.218067679 container attach 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:07 compute-0 funny_elgamal[205284]: 167 167
Oct 02 19:10:07 compute-0 systemd[1]: libpod-36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966.scope: Deactivated successfully.
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.575633807 +0000 UTC m=+0.227962282 container died 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfe1c0fdd93ceda39b400ada02177f3508a1e68b7232534f421b6623031e038c-merged.mount: Deactivated successfully.
Oct 02 19:10:07 compute-0 podman[205268]: 2025-10-02 19:10:07.628690825 +0000 UTC m=+0.281019270 container remove 36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 02 19:10:07 compute-0 systemd[1]: libpod-conmon-36f296a4e3c0d40f801003e4d3a5fc71fbbb176baf49acc8b5d0ab41e0036966.scope: Deactivated successfully.
Oct 02 19:10:07 compute-0 podman[205309]: 2025-10-02 19:10:07.862362606 +0000 UTC m=+0.080728523 container create 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:07 compute-0 podman[205309]: 2025-10-02 19:10:07.832636937 +0000 UTC m=+0.051002904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:07 compute-0 systemd[1]: Started libpod-conmon-73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e.scope.
Oct 02 19:10:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d32fd074443abfabf891c4c9523a1712131f9e5f902a6c8139e5b7bc65af43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d32fd074443abfabf891c4c9523a1712131f9e5f902a6c8139e5b7bc65af43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d32fd074443abfabf891c4c9523a1712131f9e5f902a6c8139e5b7bc65af43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d32fd074443abfabf891c4c9523a1712131f9e5f902a6c8139e5b7bc65af43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
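The kernel's "supports timestamps until 2038" lines flag XFS filesystems whose inodes were created without the bigtime feature; they are informational, not errors. A hedged check, assuming an xfsprogs recent enough to report the flag:

    # bigtime=1 means inode timestamps extend beyond 2038
    xfs_info / | grep -o 'bigtime=[01]'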
Oct 02 19:10:08 compute-0 podman[205309]: 2025-10-02 19:10:08.09119605 +0000 UTC m=+0.309561997 container init 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:08 compute-0 podman[205309]: 2025-10-02 19:10:08.1115515 +0000 UTC m=+0.329917407 container start 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:08 compute-0 podman[205309]: 2025-10-02 19:10:08.136354238 +0000 UTC m=+0.354720155 container attach 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:08 compute-0 ceph-mon[191910]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:08 compute-0 strange_moore[205324]: {
Oct 02 19:10:08 compute-0 strange_moore[205324]:     "0": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:         {
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "devices": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "/dev/loop3"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             ],
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_name": "ceph_lv0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_size": "21470642176",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "name": "ceph_lv0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "tags": {
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.crush_device_class": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.encrypted": "0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_id": "0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.vdo": "0"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             },
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "vg_name": "ceph_vg0"
Oct 02 19:10:08 compute-0 strange_moore[205324]:         }
Oct 02 19:10:08 compute-0 strange_moore[205324]:     ],
Oct 02 19:10:08 compute-0 strange_moore[205324]:     "1": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:         {
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "devices": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "/dev/loop4"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             ],
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_name": "ceph_lv1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_size": "21470642176",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "name": "ceph_lv1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "tags": {
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.crush_device_class": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.encrypted": "0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_id": "1",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.vdo": "0"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             },
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "vg_name": "ceph_vg1"
Oct 02 19:10:08 compute-0 strange_moore[205324]:         }
Oct 02 19:10:08 compute-0 strange_moore[205324]:     ],
Oct 02 19:10:08 compute-0 strange_moore[205324]:     "2": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:         {
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "devices": [
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "/dev/loop5"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             ],
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_name": "ceph_lv2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_size": "21470642176",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "name": "ceph_lv2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "tags": {
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.crush_device_class": "",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.encrypted": "0",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osd_id": "2",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:                 "ceph.vdo": "0"
Oct 02 19:10:08 compute-0 strange_moore[205324]:             },
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "type": "block",
Oct 02 19:10:08 compute-0 strange_moore[205324]:             "vg_name": "ceph_vg2"
Oct 02 19:10:08 compute-0 strange_moore[205324]:         }
Oct 02 19:10:08 compute-0 strange_moore[205324]:     ]
Oct 02 19:10:08 compute-0 strange_moore[205324]: }
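The JSON emitted by strange_moore is ceph-volume lvm list --format json, keyed by OSD id, with the authoritative ceph.* tags stored on each LV. A hedged jq sketch for mapping OSD ids to their fsids, run inside the ceph container and assuming jq is available:

    # Print "osd.<id> <osd_fsid>" for every locally listed OSD
    ceph-volume lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].tags["ceph.osd_fsid"])"'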
Oct 02 19:10:08 compute-0 systemd[1]: libpod-73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e.scope: Deactivated successfully.
Oct 02 19:10:08 compute-0 podman[205309]: 2025-10-02 19:10:08.964487287 +0000 UTC m=+1.182853254 container died 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d32fd074443abfabf891c4c9523a1712131f9e5f902a6c8139e5b7bc65af43-merged.mount: Deactivated successfully.
Oct 02 19:10:09 compute-0 podman[205309]: 2025-10-02 19:10:09.070281595 +0000 UTC m=+1.288647512 container remove 73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:09 compute-0 systemd[1]: libpod-conmon-73ca9780800874bb8537f8d46f1ed15d2f150f70aa90bafe36f5049df0696f8e.scope: Deactivated successfully.
Oct 02 19:10:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:09 compute-0 sudo[205208]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 02 19:10:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 19:10:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:09 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:09 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 02 19:10:09 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
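From here cephadm wraps osd.0 in a systemd unit named after the cluster fsid (the fsid appears in the cephadm call logged above). A hedged status check on the host, using cephadm's ceph-<fsid>@<daemon>.service naming:

    # cephadm-managed OSD unit for this cluster
    systemctl status ceph-6019f664-a1c2-5955-8391-692cb79a59f9@osd.0.service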
Oct 02 19:10:09 compute-0 sudo[205346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:09 compute-0 sudo[205346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:09 compute-0 sudo[205346]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:09 compute-0 sudo[205382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:09 compute-0 sudo[205382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:09 compute-0 sudo[205382]: pam_unix(sudo:session): session closed for user root
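[annotation] The paired sudo records above (COMMAND=/bin/true, then COMMAND=/bin/which python3) are the connection checks the cephadm mgr module runs over ssh as ceph-admin before doing real work on the host: confirm passwordless sudo works, then locate a python3 interpreter to execute its payload with. A rough stand-alone equivalent, with host and user taken from the log and everything else assumed:

    import subprocess

    host, user = "compute-0", "ceph-admin"   # from the sudo lines above
    # Probe 1: passwordless sudo available? (COMMAND=/bin/true)
    subprocess.run(["ssh", f"{user}@{host}", "sudo", "true"], check=True)
    # Probe 2: where is python3? (COMMAND=/bin/which python3)
    py = subprocess.check_output(
        ["ssh", f"{user}@{host}", "sudo", "which", "python3"], text=True).strip()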
Oct 02 19:10:09 compute-0 podman[205370]: 2025-10-02 19:10:09.482032913 +0000 UTC m=+0.136428132 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Oct 02 19:10:09 compute-0 podman[205371]: 2025-10-02 19:10:09.518812559 +0000 UTC m=+0.172725425 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:10:09 compute-0 sudo[205437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:09 compute-0 sudo[205437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:09 compute-0 sudo[205437]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:09 compute-0 sudo[205465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:10:09 compute-0 sudo[205465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 19:10:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:09 compute-0 ceph-mon[191910]: Deploying daemon osd.0 on compute-0
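[annotation] Deployment of osd.0 follows a fixed pattern that repeats for osd.1 later in the log: the mgr fetches the daemon's keyring and a minimal ceph.conf from the mon, then invokes the cephadm script it copied to the host (the sudo "_orch deploy" line above). The two mon commands are ordinary CLI calls; a minimal sketch, assuming admin credentials are present on the host:

    import subprocess

    def ceph(*args):
        # Assumes /etc/ceph/ceph.conf and an admin keyring are in place.
        return subprocess.check_output(["ceph", *args], text=True)

    keyring = ceph("auth", "get", "osd.0")                  # handed to the new daemon
    minimal_conf = ceph("config", "generate-minimal-conf")  # becomes the container's ceph.conf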
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.252779168 +0000 UTC m=+0.088026547 container create 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.220892562 +0000 UTC m=+0.056140001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:10 compute-0 systemd[1]: Started libpod-conmon-3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4.scope.
Oct 02 19:10:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.410230947 +0000 UTC m=+0.245478336 container init 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.430010522 +0000 UTC m=+0.265257881 container start 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.435148569 +0000 UTC m=+0.270395958 container attach 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:10 compute-0 upbeat_pascal[205543]: 167 167
Oct 02 19:10:10 compute-0 systemd[1]: libpod-3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4.scope: Deactivated successfully.
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.441352903 +0000 UTC m=+0.276600262 container died 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-848938f5f78c64b5c6f739ac4a507c5a26e5ff424426d66de8ef0e3834a98d35-merged.mount: Deactivated successfully.
Oct 02 19:10:10 compute-0 podman[205527]: 2025-10-02 19:10:10.507892729 +0000 UTC m=+0.343140078 container remove 3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:10 compute-0 systemd[1]: libpod-conmon-3c076cb76116ee03ee50295e2fd6df34f53d18a851aa7ee45a5b5478f00793b4.scope: Deactivated successfully.
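[annotation] The throwaway container above (upbeat_pascal) existed only to print "167 167": the uid and gid of the ceph user inside the image, which cephadm appears to capture so it can chown host-side data and log directories to match. Something like the following podman call reproduces it; the stat target path is an assumption, the image digest is the one logged:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],   # assumed probe path
        text=True)
    uid, gid = map(int, out.split())        # -> 167, 167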
Oct 02 19:10:10 compute-0 ceph-mon[191910]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:10 compute-0 podman[205573]: 2025-10-02 19:10:10.873926283 +0000 UTC m=+0.073263406 container create d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:10:10 compute-0 systemd[1]: Started libpod-conmon-d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2.scope.
Oct 02 19:10:10 compute-0 podman[205573]: 2025-10-02 19:10:10.852186376 +0000 UTC m=+0.051523499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:11 compute-0 podman[205573]: 2025-10-02 19:10:11.018797688 +0000 UTC m=+0.218134821 container init d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:11 compute-0 podman[205573]: 2025-10-02 19:10:11.03169559 +0000 UTC m=+0.231032683 container start d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:11 compute-0 podman[205573]: 2025-10-02 19:10:11.037220657 +0000 UTC m=+0.236557830 container attach d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:11 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test[205589]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 19:10:11 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test[205589]:                             [--no-systemd] [--no-tmpfs]
Oct 02 19:10:11 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test[205589]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 02 19:10:11 compute-0 systemd[1]: libpod-d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2.scope: Deactivated successfully.
Oct 02 19:10:11 compute-0 podman[205573]: 2025-10-02 19:10:11.718659582 +0000 UTC m=+0.917996715 container died d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 02 19:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-308be1d539121efb0c8a8c86faece097c38aa457a6eddbad3179f9c2dd98e845-merged.mount: Deactivated successfully.
Oct 02 19:10:11 compute-0 podman[205573]: 2025-10-02 19:10:11.818571594 +0000 UTC m=+1.017908677 container remove d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:11 compute-0 systemd[1]: libpod-conmon-d8e1c2b198f8bcdf9ab6a1546a98caefea545695323b96023c5c596aade1c6b2.scope: Deactivated successfully.
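[annotation] The usage text and the "unrecognized arguments: --bad-option" error above are not a failure: the container is named osd-0-activate-test, and passing a deliberately bogus flag is a cheap way to make ceph-volume print its activate usage so the caller can check which flags (here --no-systemd and --no-tmpfs, both visible in the usage lines) this build supports before the real activation run. A sketch of that probe against the same image:

    import subprocess

    def activate_supports_no_systemd(image):
        # argparse exits non-zero and prints usage on stderr, as logged above.
        proc = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "ceph-volume", image,
             "activate", "--bad-option"],
            capture_output=True, text=True)
        return "--no-systemd" in proc.stderr and "--no-tmpfs" in proc.stderr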
Oct 02 19:10:12 compute-0 systemd[1]: Reloading.
Oct 02 19:10:12 compute-0 systemd-rc-local-generator[205649]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:12 compute-0 systemd-sysv-generator[205653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:12 compute-0 systemd[1]: Reloading.
Oct 02 19:10:12 compute-0 ceph-mon[191910]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:12 compute-0 systemd-rc-local-generator[205685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:12 compute-0 systemd-sysv-generator[205690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:13 compute-0 systemd[1]: Starting Ceph osd.0 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:10:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:13 compute-0 podman[205746]: 2025-10-02 19:10:13.57652027 +0000 UTC m=+0.073462851 container create c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 19:10:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:13 compute-0 podman[205746]: 2025-10-02 19:10:13.547548091 +0000 UTC m=+0.044490742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:13 compute-0 podman[205746]: 2025-10-02 19:10:13.670062843 +0000 UTC m=+0.167005524 container init c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:10:13 compute-0 podman[205746]: 2025-10-02 19:10:13.693368421 +0000 UTC m=+0.190311002 container start c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 19:10:13 compute-0 podman[205746]: 2025-10-02 19:10:13.698876347 +0000 UTC m=+0.195818938 container attach c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:14 compute-0 podman[205777]: 2025-10-02 19:10:14.708339338 +0000 UTC m=+0.131339287 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:10:14 compute-0 ceph-mon[191910]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:10:14 compute-0 bash[205746]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 19:10:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate[205761]: --> ceph-volume raw activate successful for osd ID: 0
Oct 02 19:10:14 compute-0 bash[205746]: --> ceph-volume raw activate successful for osd ID: 0
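[annotation] With the probe done, the real activate container replays ceph-volume's raw activation sequence, each step logged twice (once per captured output stream): chown the OSD directory, rebuild its contents from the bluestore label with ceph-bluestore-tool prime-osd-dir, hand the logical volume to the ceph user, and symlink it in as "block". Condensed, with device and paths exactly as logged:

    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-0"
    DEV = "/dev/mapper/ceph_vg0-ceph_lv0"

    for cmd in (
        ["chown", "-R", "ceph:ceph", OSD_DIR],
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", OSD_DIR, "--no-mon-config", "--dev", DEV],
        ["chown", "-h", "ceph:ceph", DEV],
        ["chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["ln", "-s", DEV, f"{OSD_DIR}/block"],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ):
        subprocess.run(cmd, check=True)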
Oct 02 19:10:14 compute-0 podman[205746]: 2025-10-02 19:10:14.994287677 +0000 UTC m=+1.491230258 container died c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:14 compute-0 systemd[1]: libpod-c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4.scope: Deactivated successfully.
Oct 02 19:10:14 compute-0 systemd[1]: libpod-c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4.scope: Consumed 1.301s CPU time.
Oct 02 19:10:15 compute-0 sudo[205942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhxqzkazhhbrpubvxxazvpzhkmecmnr ; /usr/bin/python3'
Oct 02 19:10:15 compute-0 sudo[205942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca3fcd033f6fa635ec066b7b23faa1330ba389f83e591e30cc40dcf4721a1f3-merged.mount: Deactivated successfully.
Oct 02 19:10:15 compute-0 podman[205746]: 2025-10-02 19:10:15.091543818 +0000 UTC m=+1.588486409 container remove c4a7f30823df12714cd4cf0d257759819ac3cc4b35feb9e4f3d6fcb210dae2d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:15 compute-0 podman[205945]: 2025-10-02 19:10:15.126870895 +0000 UTC m=+0.110674178 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct 02 19:10:15 compute-0 python3[205952]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:15 compute-0 podman[205987]: 2025-10-02 19:10:15.240132812 +0000 UTC m=+0.058311859 container create 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:10:15 compute-0 systemd[1]: Started libpod-conmon-46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60.scope.
Oct 02 19:10:15 compute-0 podman[205987]: 2025-10-02 19:10:15.223214363 +0000 UTC m=+0.041393420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63997c15ec4acd63913e9f5043fcc0a943296f7af1740d4cdcfb5109945b3443/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63997c15ec4acd63913e9f5043fcc0a943296f7af1740d4cdcfb5109945b3443/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63997c15ec4acd63913e9f5043fcc0a943296f7af1740d4cdcfb5109945b3443/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 podman[205987]: 2025-10-02 19:10:15.389806874 +0000 UTC m=+0.207986011 container init 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:10:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:15 compute-0 podman[205987]: 2025-10-02 19:10:15.406034535 +0000 UTC m=+0.224213572 container start 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:15 compute-0 podman[205987]: 2025-10-02 19:10:15.412716263 +0000 UTC m=+0.230895400 container attach 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:10:15 compute-0 podman[206034]: 2025-10-02 19:10:15.50868774 +0000 UTC m=+0.079564353 container create 67a0f30cd91e28105d329ef1caf27197c482a3dc941e5257cbff8f8666f898c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:15 compute-0 podman[206034]: 2025-10-02 19:10:15.478163369 +0000 UTC m=+0.049040012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f495ef9157d1c45f167605b902910fff80d6d1e2a346158951ed498d06a9659d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f495ef9157d1c45f167605b902910fff80d6d1e2a346158951ed498d06a9659d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f495ef9157d1c45f167605b902910fff80d6d1e2a346158951ed498d06a9659d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f495ef9157d1c45f167605b902910fff80d6d1e2a346158951ed498d06a9659d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f495ef9157d1c45f167605b902910fff80d6d1e2a346158951ed498d06a9659d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:15 compute-0 podman[206034]: 2025-10-02 19:10:15.646532058 +0000 UTC m=+0.217408731 container init 67a0f30cd91e28105d329ef1caf27197c482a3dc941e5257cbff8f8666f898c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 02 19:10:15 compute-0 podman[206034]: 2025-10-02 19:10:15.654591582 +0000 UTC m=+0.225468215 container start 67a0f30cd91e28105d329ef1caf27197c482a3dc941e5257cbff8f8666f898c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:10:15 compute-0 bash[206034]: 67a0f30cd91e28105d329ef1caf27197c482a3dc941e5257cbff8f8666f898c7
Oct 02 19:10:15 compute-0 systemd[1]: Started Ceph osd.0 for 6019f664-a1c2-5955-8391-692cb79a59f9.
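[annotation] "Started Ceph osd.0 for 6019f664-..." is the Description of cephadm's per-cluster systemd template instance; the two "systemd[1]: Reloading." entries a few seconds earlier were the daemon-reloads after its unit files were written. Assuming cephadm's usual ceph-<fsid>@<daemon>.service naming, the daemon can be inspected on the host like so:

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    # Unit name format is an assumption based on the cephadm convention:
    subprocess.run(["systemctl", "status", f"ceph-{FSID}@osd.0.service"])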
Oct 02 19:10:15 compute-0 ceph-osd[206053]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:10:15 compute-0 ceph-osd[206053]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 19:10:15 compute-0 ceph-osd[206053]: pidfile_write: ignore empty --pid-file
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4153b1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4153b1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4153b1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4161eb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4161eb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4161eb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 02 19:10:15 compute-0 ceph-osd[206053]: bdev(0x55b4161eb800 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 19:10:15 compute-0 sudo[205465]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 02 19:10:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 19:10:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:15 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 02 19:10:15 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 02 19:10:15 compute-0 sudo[206085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:15 compute-0 sudo[206085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:15 compute-0 sudo[206085]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b4153b1800 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 19:10:16 compute-0 sudo[206110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:16 compute-0 sudo[206110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:16 compute-0 sudo[206110]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 19:10:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204002393' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:10:16 compute-0 jolly_mcnulty[206026]: 
Oct 02 19:10:16 compute-0 jolly_mcnulty[206026]: {"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":121,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1759432202,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T19:10:05.401655+0000","services":{}},"progress_events":{}}
Oct 02 19:10:16 compute-0 systemd[1]: libpod-46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60.scope: Deactivated successfully.
Oct 02 19:10:16 compute-0 podman[205987]: 2025-10-02 19:10:16.100464776 +0000 UTC m=+0.918643823 container died 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:10:16 compute-0 sudo[206135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:16 compute-0 sudo[206135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-63997c15ec4acd63913e9f5043fcc0a943296f7af1740d4cdcfb5109945b3443-merged.mount: Deactivated successfully.
Oct 02 19:10:16 compute-0 sudo[206135]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:16 compute-0 podman[205987]: 2025-10-02 19:10:16.170833453 +0000 UTC m=+0.989012490 container remove 46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60 (image=quay.io/ceph/ceph:v18, name=jolly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:16 compute-0 systemd[1]: libpod-conmon-46d0e893700e4aa510319ec46af971ff5b43bcd86ba3d3cd979d27eceb8d7c60.scope: Deactivated successfully.
Oct 02 19:10:16 compute-0 sudo[205942]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:16 compute-0 sudo[206174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:10:16 compute-0 sudo[206174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 02 19:10:16 compute-0 ceph-osd[206053]: load: jerasure load: lrc 
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 19:10:16 compute-0 ceph-mon[191910]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 19:10:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:16 compute-0 ceph-mon[191910]: Deploying daemon osd.1 on compute-0
Oct 02 19:10:16 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/204002393' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.802505509 +0000 UTC m=+0.124065114 container create 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 19:10:16 compute-0 ceph-osd[206053]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.737197065 +0000 UTC m=+0.058756710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluefs mount
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluefs mount shared_bdev_used = 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Git sha 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: DB SUMMARY
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: DB Session ID:  WT85E3U09G2TFLF2CFJ3
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                                     Options.env: 0x55b41623dc70
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                                Options.info_log: 0x55b4154388a0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.write_buffer_manager: 0x55b41634a460
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Compression algorithms supported:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 systemd[1]: Started libpod-conmon-9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121.scope.
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b4154382c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b415438240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b415438240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b415438240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56bff2fc-f34b-459f-a891-0686f163dfda
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432216898368, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432216898839, "job": 1, "event": "recovery_finished"}
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 02 19:10:16 compute-0 ceph-osd[206053]: freelist init
Oct 02 19:10:16 compute-0 ceph-osd[206053]: freelist _read_cfg
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 19:10:16 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bluefs umount
Oct 02 19:10:16 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.947842006 +0000 UTC m=+0.269401601 container init 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.965444083 +0000 UTC m=+0.287003658 container start 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.970822796 +0000 UTC m=+0.292382391 container attach 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:10:16 compute-0 sad_hopper[206425]: 167 167
Oct 02 19:10:16 compute-0 systemd[1]: libpod-9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121.scope: Deactivated successfully.
Oct 02 19:10:16 compute-0 podman[206245]: 2025-10-02 19:10:16.980622526 +0000 UTC m=+0.302182121 container died 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1d19fc370a266411f623e18aad532c19a2b2d36dbaca7e3fd35836e278f5737-merged.mount: Deactivated successfully.
Oct 02 19:10:17 compute-0 podman[206245]: 2025-10-02 19:10:17.032191994 +0000 UTC m=+0.353751569 container remove 9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:10:17 compute-0 systemd[1]: libpod-conmon-9b5ee31a207e75ae575c035cd5e7ff38dc4275f85e15ce7d7335dd2ca3267121.scope: Deactivated successfully.
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bdev(0x55b41626d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluefs mount
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluefs mount shared_bdev_used = 4718592
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Git sha 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: DB SUMMARY
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: DB Session ID:  WT85E3U09G2TFLF2CFJ2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                                     Options.env: 0x55b4163f6b60
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                                Options.info_log: 0x55b415438620
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.write_buffer_manager: 0x55b41634a6e0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Compression algorithms supported:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
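Taken together, the [default] numbers above describe the write path fairly completely: 16 MiB memtables merged six at a time give roughly 96 MiB per flush, up to 64 buffers bound memtable memory at about 1 GiB for the column family, and with level_compaction_dynamic_level_bytes off the level targets grow by 8x from a 1 GiB L1. A back-of-the-envelope sketch using only the values dumped above:

    # Pure arithmetic on the logged [default] options; no OSD access needed.
    MiB, GiB = 2**20, 2**30

    write_buffer_size = 16_777_216     # Options.write_buffer_size
    merge_min         = 6              # Options.min_write_buffer_number_to_merge
    buffers_max       = 64             # Options.max_write_buffer_number
    level_base        = 1_073_741_824  # Options.max_bytes_for_level_base
    multiplier        = 8              # Options.max_bytes_for_level_multiplier
    num_levels        = 7              # Options.num_levels

    print(f"flush batch  ~ {merge_min * write_buffer_size / MiB:.0f} MiB")
    print(f"memtable cap ~ {buffers_max * write_buffer_size / GiB:.0f} GiB per CF")
    for lvl in range(1, num_levels):
        print(f"L{lvl} target ~ {level_base * multiplier**(lvl - 1) / GiB:,.0f} GiB")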
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
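Note that [m-0] prints the same block_cache pointer (0x55b4154251f0) as [default], and the families that follow do too: all column families share a single BinnedLRUCache. What the logged cache parameters amount to, as a quick sketch:

    # The block_cache_options stanza, converted to human units.
    capacity   = 483_183_820   # block_cache_options: capacity (bytes)
    shard_bits = 4             # block_cache_options: num_shard_bits

    shards = 2 ** shard_bits
    print(f"total capacity : {capacity / 2**20:.1f} MiB, shared by all CFs")
    print(f"shards         : {shards} x {capacity / shards / 2**20:.1f} MiB")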
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
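The stall thresholds repeated in each dump translate into concrete backpressure points. Converting them to human units (the per-file size assumes the ~96 MiB flush batch estimated earlier, which the log itself does not state):

    # L0 file-count triggers and pending-compaction byte limits, from the
    # values logged above. flush_mib is an estimate, not a logged value.
    flush_mib = 96
    for name, files in (("compaction trigger", 8),
                        ("slowdown writes   ", 20),
                        ("stop writes       ", 36)):
        print(f"L0 {name}: {files} files (~{files * flush_mib / 1024:.1f} GiB)")

    print(f"soft pending-compaction limit: {68_719_476_736 / 2**30:.0f} GiB")
    print(f"hard pending-compaction limit: {274_877_906_944 / 2**30:.0f} GiB")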
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
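Values like these normally reach RocksDB as a comma-separated option string; in Ceph that is the bluestore_rocksdb_options setting. A hypothetical reconstruction of such a string from the dump above, offered as an assumption about where these values came from rather than something read back from this host's configuration:

    # Rebuild a RocksDB option string of the kind Ceph passes via
    # bluestore_rocksdb_options. Keys and values are taken from the dump
    # above; the selection is illustrative, not this cluster's actual config.
    opts = {
        "write_buffer_size": 16777216,
        "max_write_buffer_number": 64,
        "min_write_buffer_number_to_merge": 6,
        "compression": "kLZ4Compression",
        "max_bytes_for_level_base": 1073741824,
        "max_bytes_for_level_multiplier": 8,
        "compaction_readahead_size": 2097152,
    }
    print(",".join(f"{k}={v}" for k, v in opts.items()))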
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e7e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b4154251f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e780)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e780)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b41542e780)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55b415425090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
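The per-column-family dump that ends here (and repeats, nearly verbatim, for each of BlueStore's column families) maps one-to-one onto fields of rocksdb::ColumnFamilyOptions and rocksdb::BlockBasedTableOptions. A minimal C++ sketch of the [O-2] settings follows; the values are copied from the log, the function name is illustrative, only a subset of the dumped fields is shown, the bloom bits-per-key is not logged so 10 is assumed, and plain NewLRUCache stands in for Ceph's internal BinnedLRUCache.

    // Sketch only: reconstructs a subset of the CF [O-2] options printed
    // above. Values come from the log; this is not Ceph's actual setup code.
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/cache.h>

    rocksdb::ColumnFamilyOptions MakeO2Options() {   // illustrative name
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;       // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;     // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ULL << 20;        // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;      // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                              // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;        // dumped as 1 above
      t.pin_top_level_index_and_filter = true;
      t.block_size = 4096;
      t.format_version = 5;
      t.whole_key_filtering = true;
      // "filter_policy: bloomfilter"; bits/key is not in the log, 10 assumed.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // capacity 536870912 from the dump; BinnedLRUCache is Ceph-internal,
      // so the stock LRU cache is used here as a stand-in.
      t.block_cache = rocksdb::NewLRUCache(512ULL << 20);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }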
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
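The manifest-recovery lines above enumerate the twelve column families BlueStore shards its keyspace across (default, m-*, p-*, O-*, plus L and P). With stock RocksDB, a database laid out like this must be opened by listing every existing column family; a hedged sketch under that assumption (function name illustrative, per-CF options defaulted where BlueStore actually tunes each one, error handling elided):

    // Sketch: open a RocksDB instance that has the column families named in
    // the recovery log. Caller owns the handles and db on success.
    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    rocksdb::DB* OpenWithColumnFamilies(const std::string& path) {
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      for (const char* name : {"default", "m-0", "m-1", "m-2",
                               "p-0", "p-1", "p-2",
                               "O-0", "O-1", "O-2", "L", "P"}) {
        cfs.emplace_back(name, rocksdb::ColumnFamilyOptions());
      }
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::DBOptions db_opts;
      rocksdb::Status s = rocksdb::DB::Open(db_opts, path, cfs, &handles, &db);
      return s.ok() ? db : nullptr;
    }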
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56bff2fc-f34b-459f-a891-0686f163dfda
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432217175986, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432217184055, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432217, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56bff2fc-f34b-459f-a891-0686f163dfda", "db_session_id": "WT85E3U09G2TFLF2CFJ2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432217192149, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432217, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56bff2fc-f34b-459f-a891-0686f163dfda", "db_session_id": "WT85E3U09G2TFLF2CFJ2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432217199823, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432217, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56bff2fc-f34b-459f-a891-0686f163dfda", "db_session_id": "WT85E3U09G2TFLF2CFJ2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432217203362, "job": 1, "event": "recovery_finished"}
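The EVENT_LOG_v1 entries above are single-line JSON payloads behind a fixed prefix, which makes them easy to filter out of the journal programmatically. A sketch, assuming the third-party nlohmann/json library (nothing in the log implies Ceph uses it; PrintEvent is an illustrative name):

    // Sketch: extract the EVENT_LOG_v1 JSON from a journal line and print
    // the timestamp and event type ("recovery_started", "table_file_creation",
    // "recovery_finished", ...). Assumes https://github.com/nlohmann/json.
    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    void PrintEvent(const std::string& line) {
      static const std::string kTag = "EVENT_LOG_v1 ";
      auto pos = line.find(kTag);
      if (pos == std::string::npos) return;          // not an event line
      auto j = nlohmann::json::parse(line.substr(pos + kTag.size()),
                                     /*cb=*/nullptr,
                                     /*allow_exceptions=*/false);
      if (j.is_discarded() || !j.contains("event")) return;
      std::cout << j.value("time_micros", uint64_t{0}) << " "
                << j["event"] << "\n";
    }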
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b415592000
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: DB pointer 0x55b416325a00
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 02 19:10:17 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
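The _open_db line above records the exact option string handed to RocksDB (Ceph's bluestore_rocksdb_options). RocksDB can parse this comma-separated key=value form itself via GetOptionsFromString() from rocksdb/convenience.h; a sketch under that assumption (ParseBlueStoreOptions is an illustrative name, and acceptance of the "2MB" size suffix depends on the RocksDB build):

    // Sketch: feed the option string logged by _open_db to RocksDB's own
    // string-to-Options parser. The string below is copied from the log.
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <string>

    rocksdb::Options ParseBlueStoreOptions() {
      const std::string opts_str =
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,"
          "compaction_style=kCompactionStyleLevel,"
          "write_buffer_size=16777216,max_background_jobs=4,"
          "level0_file_num_compaction_trigger=8,"
          "max_bytes_for_level_base=1073741824,"
          "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
          "max_total_wal_size=1073741824,writable_file_max_buffer_size=0";
      rocksdb::Options base, parsed;
      rocksdb::Status s = rocksdb::GetOptionsFromString(base, opts_str, &parsed);
      return s.ok() ? parsed : base;   // fall back to defaults on parse error
    }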
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
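The block above is BlueStore's periodic RocksDB statistics dump, emitted by ceph-osd as a single multi-line journal message (hence the continuation-line indentation with no per-line timestamp). Each "** Compaction Stats [X] **" pair covers one column family (O-1, O-2, L, P), first keyed by LSM level and then by compaction priority. The block-cache occupancy of 18446744073709551615 is 2^64-1, which looks like -1 printed as an unsigned 64-bit value rather than a real entry count. A minimal parsing sketch, assuming the dump text has been captured from the journal as shown:

    import re

    # Matches the per-column-family summary lines in the dump above.
    CUMULATIVE = re.compile(
        r"Cumulative compaction: (?P<write_gb>[\d.]+) GB write, "
        r"(?P<write_rate>[\d.]+) MB/s write, (?P<read_gb>[\d.]+) GB read, "
        r"(?P<read_rate>[\d.]+) MB/s read, (?P<secs>[\d.]+) seconds"
    )

    def compaction_totals(dump: str) -> tuple[float, float]:
        """Sum cumulative compaction GB written/read over all column families."""
        written = read = 0.0
        for m in CUMULATIVE.finditer(dump):
            written += float(m.group("write_gb"))
            read += float(m.group("read_gb"))
        return written, read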
Oct 02 19:10:17 compute-0 ceph-osd[206053]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 19:10:17 compute-0 ceph-osd[206053]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 19:10:17 compute-0 ceph-osd[206053]: _get_class not permitted to load lua
Oct 02 19:10:17 compute-0 ceph-osd[206053]: _get_class not permitted to load sdk
Oct 02 19:10:17 compute-0 ceph-osd[206053]: _get_class not permitted to load test_remote_reads
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 load_pgs
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 load_pgs opened 0 pgs
Oct 02 19:10:17 compute-0 ceph-osd[206053]: osd.0 0 log_to_monitors true
Oct 02 19:10:17 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0[206049]: 2025-10-02T19:10:17.251+0000 7f168ff20740 -1 osd.0 0 log_to_monitors true
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 19:10:17 compute-0 podman[206701]: 2025-10-02 19:10:17.364280698 +0000 UTC m=+0.058457153 container create a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:17 compute-0 systemd[1]: Started libpod-conmon-a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44.scope.
Oct 02 19:10:17 compute-0 podman[206701]: 2025-10-02 19:10:17.336943102 +0000 UTC m=+0.031119577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:17 compute-0 podman[206701]: 2025-10-02 19:10:17.502316551 +0000 UTC m=+0.196493046 container init a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:17 compute-0 podman[206701]: 2025-10-02 19:10:17.524771287 +0000 UTC m=+0.218947782 container start a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:10:17 compute-0 podman[206701]: 2025-10-02 19:10:17.532169684 +0000 UTC m=+0.226346139 container attach a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:10:17 compute-0 ceph-mon[191910]: from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
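At first boot each OSD registers its device class with the monitor; the audit entries above show the exact mon command, {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}, being dispatched and then finishing. The same call can be issued through the python-rados bindings; a sketch, assuming a reachable cluster, a local ceph.conf, and an admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON payload as in the audit log above.
        cmd = json.dumps({"prefix": "osd crush set-device-class",
                          "class": "hdd", "ids": ["0"]})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outs)
    finally:
        cluster.shutdown()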
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:17 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:17 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:17 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:18 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test[206715]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 19:10:18 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test[206715]:                             [--no-systemd] [--no-tmpfs]
Oct 02 19:10:18 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test[206715]: ceph-volume activate: error: unrecognized arguments: --bad-option
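The "-activate-test" container above appears to be a deliberate probe: ceph-volume activate is invoked with an unknown flag (--bad-option) and the resulting usage text reveals which flags (--osd-id, --osd-uuid, --no-systemd, --no-tmpfs) the image's ceph-volume accepts, before the real activation is attempted. A sketch of that probe, assuming ceph-volume is on PATH:

    import subprocess

    # Expected to fail: ceph-volume exits non-zero on an unrecognized flag,
    # printing the usage text seen in the log above.
    probe = subprocess.run(["ceph-volume", "activate", "--bad-option"],
                           capture_output=True, text=True)
    assert probe.returncode != 0
    print(probe.stderr.strip())  # "...error: unrecognized arguments: --bad-option"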
Oct 02 19:10:18 compute-0 systemd[1]: libpod-a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44.scope: Deactivated successfully.
Oct 02 19:10:18 compute-0 podman[206701]: 2025-10-02 19:10:18.218367624 +0000 UTC m=+0.912544079 container died a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f9e5ad80cdf3d7b44ab638f4d56ee5b7a7957c2c0de1cdb64c7fe19e06931d8-merged.mount: Deactivated successfully.
Oct 02 19:10:18 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 19:10:18 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 19:10:18 compute-0 podman[206701]: 2025-10-02 19:10:18.311696481 +0000 UTC m=+1.005872946 container remove a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:10:18 compute-0 systemd[1]: libpod-conmon-a36c7d303c3fcf162a25ea02cb64c34743bc6e752f7092828e799d44430c6b44.scope: Deactivated successfully.
Oct 02 19:10:18 compute-0 podman[206723]: 2025-10-02 19:10:18.370943924 +0000 UTC m=+0.107728010 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, name=ubi9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Oct 02 19:10:18 compute-0 systemd[1]: Reloading.
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 done with init, starting boot process
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 start_boot
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
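The four maybe_override_options_for_qos lines show the mClock scheduler pinning recovery and backfill knobs at startup (osd_max_backfills=1, osd_recovery_max_active_hdd=3, osd_recovery_max_active_ssd=10). The effective value on a running OSD can be checked with ceph config show; a sketch for one of the pinned options:

    import subprocess

    # Inspect one mClock-pinned option on osd.0 (assumes admin CLI access).
    out = subprocess.run(
        ["ceph", "config", "show", "osd.0", "osd_max_backfills"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())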
Oct 02 19:10:18 compute-0 ceph-osd[206053]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:18 compute-0 ceph-mon[191910]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:18 compute-0 ceph-mon[191910]: from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 19:10:18 compute-0 ceph-mon[191910]: osdmap e7: 3 total, 0 up, 3 in
Oct 02 19:10:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mon[191910]: from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:18 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1219293671; not ready for session (expect reconnect)
Oct 02 19:10:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:18 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:18 compute-0 systemd-rc-local-generator[206797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:18 compute-0 systemd-sysv-generator[206800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:19 compute-0 systemd[1]: Reloading.
Oct 02 19:10:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:19 compute-0 systemd-sysv-generator[206842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:19 compute-0 systemd-rc-local-generator[206835]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:19 compute-0 systemd[1]: Starting Ceph osd.1 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:10:19 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1219293671; not ready for session (expect reconnect)
Oct 02 19:10:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:19 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:19 compute-0 ceph-mon[191910]: from='osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 19:10:19 compute-0 ceph-mon[191910]: osdmap e8: 3 total, 0 up, 3 in
Oct 02 19:10:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:20 compute-0 podman[206893]: 2025-10-02 19:10:20.134688904 +0000 UTC m=+0.086378624 container create 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:20 compute-0 podman[206893]: 2025-10-02 19:10:20.091498118 +0000 UTC m=+0.043187858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:20 compute-0 podman[206893]: 2025-10-02 19:10:20.260036771 +0000 UTC m=+0.211726511 container init 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:20 compute-0 podman[206893]: 2025-10-02 19:10:20.272323787 +0000 UTC m=+0.224013527 container start 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:20 compute-0 podman[206893]: 2025-10-02 19:10:20.299652332 +0000 UTC m=+0.251342102 container attach 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:10:20 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1219293671; not ready for session (expect reconnect)
Oct 02 19:10:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:20 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:20 compute-0 ceph-mon[191910]: purged_snaps scrub starts
Oct 02 19:10:20 compute-0 ceph-mon[191910]: purged_snaps scrub ok
Oct 02 19:10:20 compute-0 ceph-mon[191910]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:21 compute-0 bash[206893]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 02 19:10:21 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate[206907]: --> ceph-volume raw activate successful for osd ID: 1
Oct 02 19:10:21 compute-0 bash[206893]: --> ceph-volume raw activate successful for osd ID: 1
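The paired lines above (each step is logged twice, once by the activate container and once by the wrapping bash unit) spell out what ceph-volume raw activate did for osd.1: chown the OSD directory, run ceph-bluestore-tool prime-osd-dir against the LV, fix ownership of the device-mapper nodes, and symlink the block device into place. A replay sketch using the exact paths from the log; this would need to run as root, and only against an OSD that is not in service:

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    dev = "/dev/mapper/ceph_vg1-ceph_lv1"

    for cmd in (
        ["chown", "-R", "ceph:ceph", osd_dir],
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", osd_dir, "--no-mon-config", "--dev", dev],
        ["chown", "-h", "ceph:ceph", dev],
        ["chown", "-R", "ceph:ceph", "/dev/dm-1"],  # dm node from the log
        # The log uses plain "ln -s"; -f here only so the sketch is re-runnable.
        ["ln", "-sf", dev, f"{osd_dir}/block"],
        ["chown", "-R", "ceph:ceph", osd_dir],
    ):
        subprocess.run(cmd, check=True)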
Oct 02 19:10:21 compute-0 systemd[1]: libpod-78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de.scope: Deactivated successfully.
Oct 02 19:10:21 compute-0 podman[206893]: 2025-10-02 19:10:21.597014623 +0000 UTC m=+1.548704363 container died 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:21 compute-0 systemd[1]: libpod-78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de.scope: Consumed 1.329s CPU time.
Oct 02 19:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-37c97432acd5ff6845d1dd5b5e401b1ae0308c173d739fa570b7d164f63ebcb2-merged.mount: Deactivated successfully.
Oct 02 19:10:21 compute-0 podman[206893]: 2025-10-02 19:10:21.705947464 +0000 UTC m=+1.657637224 container remove 78a63f66e39a901503c5164dccda6b55bf0e79fcf32a2ebfd451e2a1527d51de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:10:21 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1219293671; not ready for session (expect reconnect)
Oct 02 19:10:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:21 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:22 compute-0 podman[207088]: 2025-10-02 19:10:22.116092619 +0000 UTC m=+0.090509632 container create 0b15b60ae73a8466fd87318de6940a0e255f5686582d8fd040b157b8d00931fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:22 compute-0 podman[207088]: 2025-10-02 19:10:22.076632712 +0000 UTC m=+0.051049715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b04069232b146e157c7a13f31b6211006722fb858dd4d3265dc42998b696f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b04069232b146e157c7a13f31b6211006722fb858dd4d3265dc42998b696f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b04069232b146e157c7a13f31b6211006722fb858dd4d3265dc42998b696f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b04069232b146e157c7a13f31b6211006722fb858dd4d3265dc42998b696f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16b04069232b146e157c7a13f31b6211006722fb858dd4d3265dc42998b696f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:22 compute-0 podman[207088]: 2025-10-02 19:10:22.233598978 +0000 UTC m=+0.208015981 container init 0b15b60ae73a8466fd87318de6940a0e255f5686582d8fd040b157b8d00931fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:22 compute-0 podman[207088]: 2025-10-02 19:10:22.2453685 +0000 UTC m=+0.219785483 container start 0b15b60ae73a8466fd87318de6940a0e255f5686582d8fd040b157b8d00931fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:22 compute-0 bash[207088]: 0b15b60ae73a8466fd87318de6940a0e255f5686582d8fd040b157b8d00931fb
Oct 02 19:10:22 compute-0 systemd[1]: Started Ceph osd.1 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:10:22 compute-0 ceph-osd[207106]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:10:22 compute-0 ceph-osd[207106]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 19:10:22 compute-0 ceph-osd[207106]: pidfile_write: ignore empty --pid-file
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e27def800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e27def800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e27def800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28c27800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28c27800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28c27800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28c27800 /var/lib/ceph/osd/ceph-1/block) close
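A quick check of the open size reported above: 21470642176 bytes is 0x4ffc00000, exactly 4 MiB short of 20 GiB (20 * 2^30 = 21474836480), so the "20 GiB" in the bdev line is a rounded figure:

    # Worked check of the bdev open size line.
    size = 21470642176
    assert size == 0x4ffc00000
    assert 20 * 2**30 - size == 4 * 2**20   # exactly 4 MiB below 20 GiB
    print(round(size / 2**30, 3))           # 19.996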
Oct 02 19:10:22 compute-0 sudo[206174]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 02 19:10:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 19:10:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:22 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:22 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 02 19:10:22 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 02 19:10:22 compute-0 sudo[207119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:22 compute-0 sudo[207119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:22 compute-0 sudo[207119]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e27def800 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 19:10:22 compute-0 sudo[207144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:22 compute-0 sudo[207144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:22 compute-0 sudo[207144]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:22 compute-0 sudo[207171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:22 compute-0 sudo[207171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:22 compute-0 sudo[207171]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:22 compute-0 sudo[207196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:10:22 compute-0 sudo[207196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:22 compute-0 ceph-osd[207106]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 02 19:10:22 compute-0 ceph-osd[207106]: load: jerasure load: lrc 
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:22 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) close
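[editor's note] The _set_cache_sizes line is the 1 GiB BlueStore cache being split by ratio: meta 0.45 + kv 0.45 + kv_onode 0.04 + data 0.06 = 1.00, i.e. about 461 MiB each for metadata and RocksDB, ~41 MiB for onodes and ~61 MiB for data; the 483183820-byte RocksDB block_cache capacity printed further down is exactly the 0.45 kv share. The 1073741824 starting point matches the stock rotational cache size, which can be read back (standard command, default value assumed):

    ceph config get osd.1 bluestore_cache_size_hdd    # 1073741824 (1 GiB) by default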
Oct 02 19:10:22 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1219293671; not ready for session (expect reconnect)
Oct 02 19:10:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:22 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:22 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.388 iops: 7523.261 elapsed_sec: 0.399
Oct 02 19:10:22 compute-0 ceph-osd[206053]: log_channel(cluster) log [WRN] : OSD bench result of 7523.261492 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
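[editor's note] This warning is mClock rejecting an implausible self-bench: 7523 IOPS from a 20 GiB virtual rotational disk falls far outside the 50-500 IOPS sanity window, so the default 315 IOPS capacity stands. Following the log's own recommendation, a hedged sketch; the fio target, job parameters, and final value below are illustrative, not measured here:

    # benchmark the backing device with fio (destructive on the target;
    # use a spare device or file)
    fio --name=osd-bench --filename=/dev/vdb --direct=1 --rw=randwrite \
        --bs=4k --iodepth=16 --runtime=60 --time_based --group_reporting
    # then pin the measured capacity for this OSD (placeholder value)
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 120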
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 0 waiting for initial osdmap
Oct 02 19:10:22 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0[206049]: 2025-10-02T19:10:22.850+0000 7f168bea0640 -1 osd.0 0 waiting for initial osdmap
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
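[editor's note] check_osdmap_features records the cluster-wide release gate flipping from unknown to reef as the OSD digests its first osdmap; the gate can be confirmed from the map itself (standard command, output abbreviated):

    ceph osd dump | grep require_osd_release    # require_osd_release reef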
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 02 19:10:22 compute-0 ceph-osd[206053]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 02 19:10:22 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-0[206049]: 2025-10-02T19:10:22.880+0000 7f16874c8640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:22 compute-0 ceph-mon[191910]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 19:10:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.287442337 +0000 UTC m=+0.071664753 container create 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:10:23 compute-0 systemd[1]: Started libpod-conmon-5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc.scope.
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.257801381 +0000 UTC m=+0.042023827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
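[editor's note] Note the digest-pinned image reference in the create/pull pair above: cephadm launches its helper containers from quay.io/ceph/ceph@sha256:... rather than a floating tag, so every host runs byte-identical bits. A roughly equivalent manual invocation, with ceph --version as an illustrative payload:

    podman run --rm \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph --version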
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671] boot
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
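[editor's note] Osdmap epoch 9 marks osd.0's boot: three OSDs exist and are in, but only one is up so far (osd.1 and osd.2 are still starting elsewhere in this log). The same state can be read back with the usual commands; the outputs here are sketched, not captured from this run:

    ceph osd stat    # 3 osds: 1 up, 3 in
    ceph osd tree    # osd.0 up; osd.1, osd.2 down (booting)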
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:23 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:23 compute-0 ceph-osd[206053]: osd.0 9 state: booting -> active
Oct 02 19:10:23 compute-0 ceph-osd[207106]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
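[editor's note] The two mClock lines above are self-consistent arithmetic: the per-shard bandwidth capacity of 157286400 bytes/s (150 MiB/s) divided by the 315 IOPS capacity left unchanged by the bench warning earlier gives 157286400 / 315 = 499321.90 bytes per IO, exactly the logged osd_bandwidth_cost_per_io. The inputs can be inspected with the standard config commands, e.g.:

    ceph config show osd.1 | grep osd_mclock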
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs mount
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs mount shared_bdev_used = 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Git sha 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DB SUMMARY
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DB Session ID:  QE6ZK81O8E55KPYEANKH
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                     Options.env: 0x563e28c79c70
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                Options.info_log: 0x563e27e768a0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.write_buffer_manager: 0x563e28d90460
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Compression algorithms supported:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
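[editor's note] This read-only open is BlueStore replaying its embedded RocksDB at startup, using the files named in the DB SUMMARY above (MANIFEST-000032, one SST, one WAL). The same DB can be inspected offline with the stock tooling; a hedged sketch, to be run only with the OSD stopped, and on a cephadm host typically from inside the daemon's shell:

    cephadm shell --name osd.1
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-1 list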
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
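[annotation] The ceph-osd process dumps its complete RocksDB option set once per column family. BlueStore shards its key space across column families, and the m-*, p-* and O-* names in these headers appear to correspond to its per-pool omap, per-PG omap and object (onode) prefixes as configured through bluestore_rocksdb_cfs; the m-*/p-* blocks in this dump are byte-identical apart from the family name. A minimal sketch (Python) that groups the "Options.*" lines by family and diffs them; the osd.log filename is an assumption, and the indented table_factory sub-options are deliberately skipped since they lack the "Options." prefix:

    import re
    from collections import defaultdict

    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    OPTION = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+): (.+)$")

    def parse_cf_options(path):
        """Collect per-column-family option dictionaries from an OSD journal dump."""
        cfs, current = defaultdict(dict), None
        with open(path) as fh:
            for line in fh:
                if (m := CF_HEADER.search(line)):
                    current = m.group(1)          # e.g. "m-2", "p-0", "O-0"
                elif current and (m := OPTION.search(line)):
                    cfs[current][m.group(1)] = m.group(2).strip()
        return cfs

    cfs = parse_cf_options("osd.log")             # hypothetical saved journalctl output
    base = cfs.get("m-2", {})                     # pick one family as the baseline
    for name, opts in cfs.items():
        delta = {k: v for k, v in opts.items() if base.get(k) != v}
        print(name, "differs in", sorted(delta) or "nothing")
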
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.414984772 +0000 UTC m=+0.199207208 container init 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
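[annotation] Every m-*/p-* family above reports the same block_cache pointer (0x563e27e631f0) and the BinnedLRUCache name, so this is a single cache shared across those families rather than per-family memory; BinnedLRUCache is Ceph's own LRU implementation plugged into RocksDB in place of the stock cache. With num_shard_bits = 4 the cache is split into 16 independently locked shards. A quick check of the sizing (Python; the 45% figure is pure arithmetic, and reading it as a ratio-based carve-out of a 1 GiB KV cache budget is an assumption):

    capacity = 483_183_820            # bytes, from block_cache_options above
    shards = 2 ** 4                   # num_shard_bits: 4 -> 16 shards
    print(f"total:     {capacity / 2**20:.1f} MiB")           # ~460.8 MiB
    print(f"per shard: {capacity / shards / 2**20:.1f} MiB")  # ~28.8 MiB
    print(f"fraction of 1 GiB: {capacity / 2**30:.2%}")       # 45.00%
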
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
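[annotation] With level_compaction_dynamic_level_bytes disabled, the level targets follow directly from max_bytes_for_level_base (1 GiB) and the 8x multiplier, and max_compaction_bytes (1677721600) is RocksDB's default of 25 x target_file_size_base (25 x 64 MiB). L0 is governed by file counts instead: compaction starts at 8 files, writes are throttled at 20 and stopped at 36. A worked sketch of those numbers (Python):

    base = 1 << 30                     # max_bytes_for_level_base: 1 GiB
    mult = 8                           # max_bytes_for_level_multiplier
    for lvl in range(1, 7):            # num_levels: 7 -> data levels L1..L6
        print(f"L{lvl} target: {base * mult ** (lvl - 1) / 2**30:.0f} GiB")

    trigger, slowdown, stop = 8, 20, 36    # level0_*_trigger values above
    flush = 6 * 16 * 2**20             # min_write_buffer_number_to_merge * write_buffer_size
    print(f"L0 compaction kicks in at ~{trigger * flush / 2**30:.2f} GiB "
          f"({trigger} files); slowdown at {slowdown}, stop at {stop}")

    print(25 * 64 * 2**20 == 1_677_721_600)   # True: the default max_compaction_bytes
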
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
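[annotation] The memtable settings above imply the flush and stall budget per column family: each memtable is 16 MiB (write_buffer_size), immutable memtables are merged six at a time before flushing (min_write_buffer_number_to_merge: 6), so each L0 file is written from roughly 96 MiB of buffered writes, and up to 64 memtables (max_write_buffer_number) may accumulate before RocksDB stalls writers. The resulting bounds (Python):

    write_buffer = 16 * 2**20      # write_buffer_size
    merge = 6                      # min_write_buffer_number_to_merge
    max_buffers = 64               # max_write_buffer_number
    print(f"flush unit:       {merge * write_buffer / 2**20:.0f} MiB")            # 96 MiB
    print(f"memtable ceiling: {max_buffers * write_buffer / 2**30:.0f} GiB per CF")  # 1 GiB
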
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e762c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
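ceph-osd prints one dump like the above per RocksDB column family, and the blocks are near-identical. One quick way to confirm that from a saved copy of the journal is to parse the Options.<name>: <value> pairs per family and diff them. A minimal sketch; the filename osd.log is an assumption, and DB-wide option lines that appear outside a column-family block are attributed loosely, which is good enough for eyeballing diffs:

    import re
    from collections import defaultdict

    CF_HDR = re.compile(r"Options for column family \[([^\]]+)\]")
    # Allow an optional space before the colon ("delayed_write_rate :" style).
    OPT = re.compile(r"Options\.([\w.\[\]]+)\s*:\s*(.+?)\s*$")

    def parse_cf_options(path):
        """Collect {column_family: {option: value}} from a ceph-osd journal dump."""
        opts, cf = defaultdict(dict), None
        with open(path) as fh:
            for line in fh:
                hdr = CF_HDR.search(line)
                if hdr:
                    cf = hdr.group(1)
                    continue
                m = OPT.search(line)
                if m and cf is not None:
                    opts[cf][m.group(1)] = m.group(2)
        return opts

    opts = parse_cf_options("osd.log")
    base = opts.get("O-0", {})
    for cf, kv in opts.items():
        diff = {k: v for k, v in kv.items() if base.get(k) != v}
        print(cf, "differs from O-0 in", len(diff), "options")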
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e76240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
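Back-of-envelope arithmetic on the [O-0] values just printed: write_buffer_size 16 MiB with max_write_buffer_number 64 allows up to 1 GiB of memtables per column family in the worst case, while min_write_buffer_number_to_merge 6 means a flush combines 96 MiB at a time; in practice the shared write_buffer_manager (its pointer appears in the DB-wide dump further down) typically keeps the real footprint well below the ceiling.

    # Values taken directly from the [O-0] dump above.
    write_buffer_size = 16 * 2**20      # Options.write_buffer_size: 16777216
    max_write_buffer_number = 64        # Options.max_write_buffer_number: 64
    min_merge = 6                       # Options.min_write_buffer_number_to_merge: 6

    worst_case = write_buffer_size * max_write_buffer_number
    flush_batch = write_buffer_size * min_merge
    print(f"per-CF memtable ceiling: {worst_case / 2**30:.1f} GiB")      # 1.0 GiB
    print(f"data merged per flush:   {flush_batch / 2**20:.0f} MiB")     # 96 MiB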
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e76240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.42883777 +0000 UTC m=+0.213060186 container start 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
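With level_compaction_dynamic_level_bytes at 0, the per-level targets in these dumps grow geometrically from max_bytes_for_level_base, and L0 is governed by file count rather than bytes. A sketch of what the printed values imply (derived only from the numbers above, not from any Ceph default):

    # Level target sizes implied by the column-family dumps above.
    base = 1 << 30      # Options.max_bytes_for_level_base: 1073741824
    mult = 8            # Options.max_bytes_for_level_multiplier: 8.000000
    levels = 7          # Options.num_levels: 7

    for lvl in range(1, levels):
        target = base * mult ** (lvl - 1)
        print(f"L{lvl}: {target / 2**30:,.0f} GiB target")
    # L0 instead triggers compaction at 8 files, slows writes at 20,
    # and stops writes at 36 (the level0_* triggers printed above).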
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e76240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
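Note that every table_factory dump above reports the same block_cache pointer (0x563e27e63090), so the 512 MiB BinnedLRUCache capacity is shared across the column families rather than allocated per family; num_shard_bits 4 splits it into 16 shards:

    capacity = 536870912        # block_cache_options: capacity, from the dumps above
    num_shard_bits = 4          # block_cache_options: num_shard_bits
    shards = 1 << num_shard_bits
    print(f"{capacity / 2**20:.0f} MiB shared cache, {shards} shards of "
          f"{capacity / shards / 2**20:.0f} MiB each")   # 512 MiB, 16 x 32 MiB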
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.434139411 +0000 UTC m=+0.218361867 container attach 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:10:23 compute-0 confident_cerf[207283]: 167 167
Oct 02 19:10:23 compute-0 systemd[1]: libpod-5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc.scope: Deactivated successfully.
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.440032537 +0000 UTC m=+0.224254943 container died 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cfaaca9-a531-4093-a701-3163bf00305f
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223466290, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223467356, "job": 1, "event": "recovery_finished"}
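The EVENT_LOG_v1 lines carry a machine-readable JSON payload after the tag, which is handy when tracing recovery, flush, and compaction jobs across a long journal. A minimal extraction sketch; osd.log is again an assumed filename for a saved copy of this journal:

    import json

    def iter_rocksdb_events(path):
        """Yield the JSON payloads of rocksdb EVENT_LOG_v1 journal lines."""
        tag = "EVENT_LOG_v1 "
        with open(path) as fh:
            for line in fh:
                i = line.find(tag)
                if i >= 0:
                    yield json.loads(line[i + len(tag):])

    for ev in iter_rocksdb_events("osd.log"):
        print(ev["event"], ev.get("wal_files", ""))
    # recovery_started [31]
    # recovery_finished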
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
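The comma-separated string echoed by _open_db is the effective RocksDB option set BlueStore applied (in Ceph this string is driven by the bluestore_rocksdb_options config option), and its values match the per-column-family dumps above. Splitting it is trivial; one caveat is that some values use human-readable suffixes:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    options = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(options["write_buffer_size"])          # 16777216
    print(options["compaction_readahead_size"])  # 2MB (suffix, not raw bytes)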
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: freelist init
Oct 02 19:10:23 compute-0 ceph-osd[207106]: freelist _read_cfg
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
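The hex values in the _init_alloc line decode to an essentially empty 20 GiB device, which is consistent with the tiny fragmentation figure (1.9e-07) on a freshly deployed OSD:

    capacity = 0x4ffc00000   # from the _init_alloc line above
    free     = 0x4ffbfd000
    block    = 0x1000        # 4 KiB allocation unit

    used = capacity - free
    print(f"capacity {capacity / 2**30:.2f} GiB, "
          f"used {used} bytes = {used // block} x 4 KiB blocks")
    # capacity 20.00 GiB, used 12288 bytes = 3 x 4 KiB blocks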
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs umount
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) close
Oct 02 19:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b7cc10c49e77f3ea1399b1b5df2357c7721156ef1f0dfe4c6e89f14d7e87bef-merged.mount: Deactivated successfully.
Oct 02 19:10:23 compute-0 podman[207269]: 2025-10-02 19:10:23.513903968 +0000 UTC m=+0.298126384 container remove 5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:10:23 compute-0 systemd[1]: libpod-conmon-5eb2c669a72a6d2f917f0d26a7efc8a753e225ab8319d2800794fc1c949880bc.scope: Deactivated successfully.
Oct 02 19:10:23 compute-0 ceph-mgr[192222]: [devicehealth INFO root] creating mgr pool
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
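The audit line shows the exact JSON mon command the mgr dispatched to create the .mgr pool. For reference, a client can issue the same structured command through the librados Python binding; this is a sketch, not the mgr's actual code path, and the conffile path plus the availability of the python3-rados package are assumptions:

    import json
    import rados   # python3-rados binding for librados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {"prefix": "osd pool create", "format": "json",
           "pool": ".mgr", "pg_num": 1, "pg_num_min": 1,
           "pg_num_max": 32, "yes_i_really_mean_it": True}
    # mon_command takes the JSON command and an input buffer,
    # returning (retcode, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()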
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bdev(0x563e28cbb400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs mount
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluefs mount shared_bdev_used = 4718592
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
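The db_paths sizes set by _prepare_db_environment appear to be a 95% allotment of the 20 GiB block device reported at bdev open (a budget hint for RocksDB, not a hard partition); a quick consistency check:

    block_dev = 21470642176      # bdev open size above (0x4ffc00000)
    db_path_size = 20397110067   # db / db.slow size from the line above
    print(db_path_size / block_dev)   # ~0.95, i.e. 95% of the device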
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Git sha 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DB SUMMARY
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DB Session ID:  QE6ZK81O8E55KPYEANKG
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                     Options.env: 0x563e28e3c3f0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                Options.info_log: 0x563e2813cf20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.write_buffer_manager: 0x563e28d906e0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Compression algorithms supported:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DMutex implementation: pthread_mutex_t
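The block above is RocksDB echoing its effective DB-wide options as the OSD's BlueStore metadata store is opened. For orientation, a minimal C++ sketch (hypothetical helper name; values copied from the dump above, not Ceph's actual construction code) that would produce the same headline settings:

    #include <rocksdb/options.h>

    // Sketch only: DB-wide knobs matching the dump above.
    rocksdb::Options MakeDbOptionsSketch() {
      rocksdb::Options opt;
      opt.wal_dir = "db.wal";                    // Options.wal_dir
      opt.max_background_jobs = 4;               // Options.max_background_jobs
      opt.max_subcompactions = 1;
      opt.max_total_wal_size = 1ULL << 30;       // 1073741824
      opt.delayed_write_rate = 16 << 20;         // 16777216 B/s once write stalls kick in
      opt.max_open_files = -1;                   // never evict table-file handles
      opt.compaction_readahead_size = 2 << 20;   // 2097152
      opt.wal_recovery_mode =
          rocksdb::WALRecoveryMode::kPointInTimeRecovery;  // wal_recovery_mode: 2
      return opt;
    }

The compression capability lines indicate this ceph-osd build was compiled with LZ4, Zlib, LZ4HC and Snappy but without zstd (kZSTD supported: 0), which is consistent with the LZ4 choice in the per-column-family options that follow.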
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d060)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
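Taken together, the [default] dump above describes the per-column-family tuning: 16 MiB memtables, up to 64 of them, with six immutable memtables merged per flush (so roughly 96 MiB accumulates before data hits L0), LZ4-compressed level-style compaction, and a block-based table format with a bloom filter and a single ~460 MiB shared block cache. A rough C++ translation (a sketch, not Ceph's code: the helper name and the bloom bits-per-key are assumptions, and RocksDB's stock NewLRUCache stands in for the BinnedLRUCache named in the log):

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch only: per-column-family options mirroring the [default] dump.
    rocksdb::ColumnFamilyOptions MakeCfOptionsSketch() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // "bloomfilter"; bits/key assumed
      t.block_cache = rocksdb::NewLRUCache(483183820, 4);        // capacity, 2^4 shards

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      cf.write_buffer_size = 16 << 20;              // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.level0_file_num_compaction_trigger = 8;    // writes slow at 20, stop at 36
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;          // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;     // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                             // 30 days
      return cf;
    }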
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d060)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
        [options identical to column family [m-0] above; verbatim duplicate block elided]
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
        [options identical to column family [m-0] above; verbatim duplicate block elided]
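One detail worth noticing across these dumps: every column family ([default], m-0, m-1, m-2, and p-0 below) reports the same flush-block-policy and block_cache pointers (0x563e27e6d060 / 0x563e27e631f0), i.e. BlueStore's RocksDB shards share one table configuration and a single block cache. Reopening such a sharded store requires naming each column family up front; a sketch under the same assumptions as above (hypothetical helper, CF list taken from the dump):

    #include <rocksdb/db.h>
    #include <vector>

    // Sketch only: reopening a DB whose column families are dumped above.
    rocksdb::Status OpenShardedSketch(const rocksdb::DBOptions& db_opt,
                                      const rocksdb::ColumnFamilyOptions& cf_opt,
                                      std::vector<rocksdb::ColumnFamilyHandle*>* handles,
                                      rocksdb::DB** db) {
      // Each descriptor reuses cf_opt, whose table factory holds the one
      // shared block cache -- matching the identical pointers in the log.
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opt},
          {"m-0", cf_opt},
          {"m-1", cf_opt},
          {"m-2", cf_opt},
          {"p-0", cf_opt},
      };
      return rocksdb::DB::Open(db_opt, "db", cfs, handles, db);
    }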
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d060)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
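[editor's note] The dump ending above, and each near-identical dump that follows, is RocksDB printing one ColumnFamilyOptions set at OSD startup; the p-* and O-* names are BlueStore's sharded column families, and in practice Ceph derives these values from its bluestore_rocksdb_options configuration rather than from hand-written code. As a minimal, hedged sketch only — the members below are the stock RocksDB C++ API, the bloom bits-per-key value is an assumption (the log only says "bloomfilter"), and the stock LRUCache stands in for Ceph's BinnedLRUCache — the printed configuration corresponds to roughly this:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch: rebuild the column-family options printed in the log above
    // with the plain RocksDB C++ API. Not Ceph code.
    rocksdb::ColumnFamilyOptions MakePShardOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;   // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;           // Options.max_write_buffer_number: 64
      cf.min_write_buffer_number_to_merge = 6;   // merge 6 memtables per flush
      cf.compression = rocksdb::kLZ4Compression; // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ull << 20;    // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;  // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.ttl = 2592000;                          // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.format_version = 5;
      t.cache_index_and_filter_blocks = true;    // logged as: 1
      t.pin_top_level_index_and_filter = true;
      // Assumed 10 bits/key; the dump only reports "filter_policy: bloomfilter".
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for the BinnedLRUCache at 0x563e27e631f0 in the dump.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }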
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d060)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d060)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e631f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
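[editor's note] One sanity check worth doing on these dumps: with max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, and level_compaction_dynamic_level_bytes = 0, the per-level size targets grow geometrically from 1 GiB (L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, and so on up to num_levels = 7). A minimal sketch of that arithmetic, plain C++ and nothing Ceph-specific:

    #include <cstdint>
    #include <cstdio>

    // Level size targets implied by the dump above:
    // L1 = max_bytes_for_level_base; L(n+1) = L(n) * max_bytes_for_level_multiplier.
    int main() {
      uint64_t level_bytes = 1073741824ull;  // 1 GiB base
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d target: %llu bytes (%.0f GiB)\n", level,
                    (unsigned long long)level_bytes, level_bytes / 1073741824.0);
        level_bytes *= 8;  // max_bytes_for_level_multiplier: 8
      }
      return 0;
    }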
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d040)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
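[editor's note] The O-* (onode) dumps differ from the p-* ones in exactly one respect: they share a different block_cache instance (pointer 0x563e27e63090, capacity 536870912 = 512 MiB) than the p-* shards (0x563e27e631f0, capacity 483183820, about 461 MiB). In stock RocksDB terms that is simply two caches reused across groups of column families; a hedged sketch, again approximating Ceph's BinnedLRUCache with the standard LRUCache:

    #include <rocksdb/cache.h>
    #include <memory>

    // Two shared block caches, one per shard group, matching the two
    // distinct pointers and capacities in the dumps above.
    std::shared_ptr<rocksdb::Cache> p_shard_cache =
        rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
    std::shared_ptr<rocksdb::Cache> o_shard_cache =
        rocksdb::NewLRUCache(536870912, /*num_shard_bits=*/4);
    // Each shard's BlockBasedTableOptions.block_cache would then point at the
    // cache for its group, which is why every p-* dump repeats one address
    // and every O-* dump repeats the other.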
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d040)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563e27e6d040)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x563e27e63090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cfaaca9-a531-4093-a701-3163bf00305f
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223717214, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223722870, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432223, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cfaaca9-a531-4093-a701-3163bf00305f", "db_session_id": "QE6ZK81O8E55KPYEANKG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223727575, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432223, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cfaaca9-a531-4093-a701-3163bf00305f", "db_session_id": "QE6ZK81O8E55KPYEANKG", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223732000, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432223, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cfaaca9-a531-4093-a701-3163bf00305f", "db_session_id": "QE6ZK81O8E55KPYEANKG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432223734404, "job": 1, "event": "recovery_finished"}
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563e27fd1c00
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: DB pointer 0x563e28d75a00
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 02 19:10:23 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:10:23 compute-0 ceph-osd[207106]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 19:10:23 compute-0 ceph-osd[207106]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 19:10:23 compute-0 ceph-osd[207106]: _get_class not permitted to load lua
Oct 02 19:10:23 compute-0 ceph-osd[207106]: _get_class not permitted to load sdk
Oct 02 19:10:23 compute-0 ceph-osd[207106]: _get_class not permitted to load test_remote_reads
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
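[The crush-map "features" values logged above are 64-bit feature bitmasks. A short check of which bits are set in 288232575208783872; the per-bit feature names vary by Ceph release, so only raw bit positions are shown here:

    # Decompose the logged crush-map feature mask into its set bit positions.
    features = 288232575208783872
    bits = [i for i in range(64) if features >> i & 1]
    print(hex(features), bits)  # 0x400020002040000 [18, 25, 41, 58]
]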
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 load_pgs
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 load_pgs opened 0 pgs
Oct 02 19:10:23 compute-0 ceph-osd[207106]: osd.1 0 log_to_monitors true
Oct 02 19:10:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1[207102]: 2025-10-02T19:10:23.790+0000 7f75023e3740 -1 osd.1 0 log_to_monitors true
Oct 02 19:10:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 02 19:10:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
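[The audit entries above record a mon_command dispatched by osd.1 as JSON. For reference, a minimal sketch of issuing the same command programmatically with python-rados; the conffile path and admin keyring are assumptions about the local node:

    import json
    import rados

    # Connect using the cluster config; the conffile path is an assumption.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = {"prefix": "osd crush set-device-class",
               "class": "hdd", "ids": ["1"]}
        # mon_command takes the command as a JSON string plus an input buffer
        # and returns (return code, output buffer, status string).
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, outs)
    finally:
        cluster.shutdown()
]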
Oct 02 19:10:23 compute-0 podman[207691]: 2025-10-02 19:10:23.843143306 +0000 UTC m=+0.060808065 container create 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:10:23 compute-0 systemd[1]: Started libpod-conmon-3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201.scope.
Oct 02 19:10:23 compute-0 ceph-mon[191910]: Deploying daemon osd.2 on compute-0
Oct 02 19:10:23 compute-0 ceph-mon[191910]: OSD bench result of 7523.261492 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
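[The bench warning above explicitly recommends measuring the device with an external benchmark and overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal sketch of that override using the standard `ceph config set` subcommand, to be run on a node with an admin keyring; the 7523 figure is just the logged bench result and should be replaced by a Fio-derived value:

    import subprocess

    osd_id = 0            # the OSD named in the warning
    measured_iops = 7523  # placeholder: substitute a Fio-measured figure

    # Persist the per-OSD mclock capacity override in the cluster config.
    subprocess.run(
        ["ceph", "config", "set", f"osd.{osd_id}",
         "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
        check=True,
    )
]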
Oct 02 19:10:23 compute-0 ceph-mon[191910]: osd.0 [v2:192.168.122.100:6802/1219293671,v1:192.168.122.100:6803/1219293671] boot
Oct 02 19:10:23 compute-0 ceph-mon[191910]: osdmap e9: 3 total, 1 up, 3 in
Oct 02 19:10:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 19:10:23 compute-0 ceph-mon[191910]: from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 19:10:23 compute-0 podman[207691]: 2025-10-02 19:10:23.822302703 +0000 UTC m=+0.039967492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
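[The "supports timestamps until 2038 (0x7fffffff)" notes above are the mount-time warning the kernel prints when a filesystem's maximum representable timestamp is the 32-bit time_t limit; 0x7fffffff seconds after the Unix epoch is:

    from datetime import datetime, timezone
    # The Y2038 boundary the xfs messages above point at.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
]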
Oct 02 19:10:23 compute-0 podman[207691]: 2025-10-02 19:10:23.962254657 +0000 UTC m=+0.179919476 container init 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:23 compute-0 podman[207691]: 2025-10-02 19:10:23.976632839 +0000 UTC m=+0.194297618 container start 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:10:23 compute-0 podman[207691]: 2025-10-02 19:10:23.983367907 +0000 UTC m=+0.201032736 container attach 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:24 compute-0 ceph-osd[206053]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 19:10:24 compute-0 ceph-osd[206053]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 02 19:10:24 compute-0 ceph-osd[206053]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 19:10:24 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:24 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 19:10:24 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 02 19:10:24 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
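[CRUSH item weights are conventionally the device capacity in TiB, so the initial_weight of 0.0195 above implies roughly a 20 GiB device, consistent with a small test OSD:

    # CRUSH weight ~= capacity in TiB; convert the logged weight to GiB.
    weight_tib = 0.0195
    print(weight_tib * 1024)  # ~19.97 GiB
]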
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.434 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.435 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.435 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.436 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.438 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.438 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
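
The ceilometer_agent_compute entries above repeat one pattern per meter: run the [local_instances] discovery, and if it yields nothing, skip the pollster for this cycle (the _internal_pollster_run messages from manager.py:294 and :321). A minimal runnable sketch of that control flow follows; the function and variable names are illustrative, not the actual ceilometer.polling.manager code:

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("ceilometer.polling.manager")

    def run_pollster(name, discover, discovery_cache, method="local_instances"):
        # One discovery run per method per cycle, mirroring the
        # "discovery cache [{'local_instances': []}]" entries in the log.
        if method not in discovery_cache:
            LOG.debug("Executing discovery process for pollster [%s] "
                      "and discovery method [%s]", name, method)
            discovery_cache[method] = discover()
        resources = discovery_cache[method]
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            return []
        return [(name, r) for r in resources]   # stand-in for real samples

    cache = {}
    for meter in ("disk.ephemeral.size", "network.incoming.packets"):
        run_pollster(meter, lambda: [], cache)  # no local instances this cycle
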
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.451 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:10:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:10:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
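
The "Registering pollster ... executed via executor" lines show each meter being handed to a shared ThreadPoolExecutor along with a per-cycle history dict keyed by meter name, and a "Finished processing pollster [...]" line is emitted as each task completes. A rough sketch of that fan-out, with an invented poll() standing in for the real pollsters:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    meters = ["network.outgoing.packets.error", "disk.device.read.bytes", "cpu"]
    history = {m: [] for m in meters}      # mirrors the 'pollster history' dict

    def poll(meter):
        return []                          # no resources were discovered

    with ThreadPoolExecutor() as executor:
        futures = {executor.submit(poll, m): m for m in meters}
        for fut in as_completed(futures):
            meter = futures[fut]
            history[meter].append(fut.result())
            print(f"Finished processing pollster [{meter}].")
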
Oct 02 19:10:24 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test[207740]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 19:10:24 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test[207740]:                             [--no-systemd] [--no-tmpfs]
Oct 02 19:10:24 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test[207740]: ceph-volume activate: error: unrecognized arguments: --bad-option
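
The osd-2-activate-test container above exits after ceph-volume rejects a deliberately unknown flag; the usage-plus-error shape and the non-zero exit status are standard Python argparse behavior. A small reproduction with a hypothetical parser (only the flags visible in the usage line above, not the real ceph-volume option set):

    import argparse

    parser = argparse.ArgumentParser(prog="ceph-volume activate")
    parser.add_argument("--osd-id")
    parser.add_argument("--osd-uuid")
    parser.add_argument("--no-systemd", action="store_true")
    parser.add_argument("--no-tmpfs", action="store_true")

    try:
        # Prints the usage block, then "error: unrecognized arguments: --bad-option"
        parser.parse_args(["--bad-option"])
    except SystemExit as exc:
        print(f"exit status: {exc.code}")   # argparse exits with status 2
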
Oct 02 19:10:24 compute-0 systemd[1]: libpod-3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201.scope: Deactivated successfully.
Oct 02 19:10:24 compute-0 podman[207691]: 2025-10-02 19:10:24.753868147 +0000 UTC m=+0.971532956 container died 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:10:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 19:10:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 19:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5258bf27a30791f9da0a73c706c93ffb6f58332f62fbac2be2c853681f3ff999-merged.mount: Deactivated successfully.
Oct 02 19:10:24 compute-0 podman[207691]: 2025-10-02 19:10:24.833470069 +0000 UTC m=+1.051134808 container remove 3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:10:24 compute-0 systemd[1]: libpod-conmon-3c3f97e1aac9ef2f7a6c3916e65bf5aeaba6f79514c284351c52c19f4e703201.scope: Deactivated successfully.
Oct 02 19:10:24 compute-0 ceph-mon[191910]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 19:10:24 compute-0 ceph-mon[191910]: osdmap e10: 3 total, 1 up, 3 in
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 19:10:24 compute-0 ceph-mon[191910]: from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:25 compute-0 systemd[1]: Reloading.
Oct 02 19:10:25 compute-0 systemd-sysv-generator[207803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:25 compute-0 systemd-rc-local-generator[207799]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
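
The "weight":0.0195 in the crush create-or-move command is consistent with CRUSH's convention of weighting a device by its capacity in TiB (background knowledge, not stated in the log): the 20 GiB block devices opened later in this log work out to exactly that figure. A quick check:

    size_bytes = 21470642176             # bdev open size logged for these OSDs
    print(round(size_bytes / 2**40, 4))  # -> 0.0195, the CRUSH weight above
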
Oct 02 19:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 done with init, starting boot process
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 start_boot
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 02 19:10:25 compute-0 ceph-osd[207106]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 02 19:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:25 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:25 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:25 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3952184026; not ready for session (expect reconnect)
Oct 02 19:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:25 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:25 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:10:25 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:10:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:25 compute-0 systemd[1]: Reloading.
Oct 02 19:10:25 compute-0 systemd-rc-local-generator[207841]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:25 compute-0 systemd-sysv-generator[207846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:26 compute-0 systemd[1]: Starting Ceph osd.2 for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:10:26 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3952184026; not ready for session (expect reconnect)
Oct 02 19:10:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:26 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 19:10:26 compute-0 ceph-mon[191910]: from='osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 19:10:26 compute-0 ceph-mon[191910]: osdmap e11: 3 total, 1 up, 3 in
Oct 02 19:10:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:26 compute-0 ceph-mon[191910]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:26 compute-0 podman[207894]: 2025-10-02 19:10:26.669926318 +0000 UTC m=+0.070639566 container create df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:10:26 compute-0 podman[207894]: 2025-10-02 19:10:26.634039315 +0000 UTC m=+0.034752663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
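
The xfs remount warnings all point at the same limit: a filesystem carrying 32-bit timestamps can only represent times up to 0x7fffffff seconds after the Unix epoch. Where that lands:

    from datetime import datetime, timezone

    # The 2038 limit quoted by the kernel messages above.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
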
Oct 02 19:10:26 compute-0 podman[207894]: 2025-10-02 19:10:26.826201665 +0000 UTC m=+0.226915003 container init df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:10:26 compute-0 podman[207894]: 2025-10-02 19:10:26.858236396 +0000 UTC m=+0.258949654 container start df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:10:26 compute-0 podman[207894]: 2025-10-02 19:10:26.863631539 +0000 UTC m=+0.264344837 container attach df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:10:27 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3952184026; not ready for session (expect reconnect)
Oct 02 19:10:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:27 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:27 compute-0 ceph-mon[191910]: purged_snaps scrub starts
Oct 02 19:10:27 compute-0 ceph-mon[191910]: purged_snaps scrub ok
Oct 02 19:10:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 02 19:10:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate[207907]: --> ceph-volume raw activate successful for osd ID: 2
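
The activation steps are logged verbatim above: chown the OSD directory, prime it from the BlueStore device with ceph-bluestore-tool, fix device-node ownership, and symlink the block device into place. A sketch that replays the same sequence from Python, using the exact paths from the log (the wrapper itself is illustrative and would need root):

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-2"
    dev = "/dev/mapper/ceph_vg2-ceph_lv2"

    steps = [
        ["chown", "-R", "ceph:ceph", osd_dir],
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", osd_dir, "--no-mon-config", "--dev", dev],
        ["chown", "-h", "ceph:ceph", dev],
        ["chown", "-R", "ceph:ceph", "/dev/dm-2"],
        ["ln", "-s", dev, f"{osd_dir}/block"],
        ["chown", "-R", "ceph:ceph", osd_dir],
    ]
    for cmd in steps:
        print("Running command:", " ".join(cmd))
        subprocess.run(cmd, check=True)
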
Oct 02 19:10:28 compute-0 systemd[1]: libpod-df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f.scope: Deactivated successfully.
Oct 02 19:10:28 compute-0 podman[207894]: 2025-10-02 19:10:28.178508936 +0000 UTC m=+1.579222194 container died df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:10:28 compute-0 systemd[1]: libpod-df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f.scope: Consumed 1.331s CPU time.
Oct 02 19:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b581e4042ed2a04c3de964732723a7a278d0457e56b7616561b7b4e9b427f2fe-merged.mount: Deactivated successfully.
Oct 02 19:10:28 compute-0 podman[207894]: 2025-10-02 19:10:28.395735251 +0000 UTC m=+1.796448519 container remove df2956a43e2232707b0b6a259c2fe7dca4a4436112b135efa0406627aa5c851f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2-activate, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:28 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3952184026; not ready for session (expect reconnect)
Oct 02 19:10:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:28 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:28 compute-0 ceph-mon[191910]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:28 compute-0 podman[208102]: 2025-10-02 19:10:28.87351326 +0000 UTC m=+0.096278736 container create a1dbd8bcb63ed95687c9afb28ed554cb558cdc1e3edc4152a9c58674dece3403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:10:28 compute-0 podman[208102]: 2025-10-02 19:10:28.827669014 +0000 UTC m=+0.050434530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411fa65f9dcbab0bbd0ad979b2668e0e744b9523d28a4f42a78a3b97bb85f783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411fa65f9dcbab0bbd0ad979b2668e0e744b9523d28a4f42a78a3b97bb85f783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411fa65f9dcbab0bbd0ad979b2668e0e744b9523d28a4f42a78a3b97bb85f783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411fa65f9dcbab0bbd0ad979b2668e0e744b9523d28a4f42a78a3b97bb85f783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411fa65f9dcbab0bbd0ad979b2668e0e744b9523d28a4f42a78a3b97bb85f783/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:29 compute-0 podman[208102]: 2025-10-02 19:10:29.045615348 +0000 UTC m=+0.268380784 container init a1dbd8bcb63ed95687c9afb28ed554cb558cdc1e3edc4152a9c58674dece3403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:29 compute-0 podman[208102]: 2025-10-02 19:10:29.05964535 +0000 UTC m=+0.282410806 container start a1dbd8bcb63ed95687c9afb28ed554cb558cdc1e3edc4152a9c58674dece3403 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:10:29 compute-0 ceph-osd[208121]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:10:29 compute-0 ceph-osd[208121]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 19:10:29 compute-0 ceph-osd[208121]: pidfile_write: ignore empty --pid-file
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563963439800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563963439800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563963439800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x56396427b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x56396427b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x56396427b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x56396427b800 /var/lib/ceph/osd/ceph-2/block) close
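
The bdev open lines spell the same size three ways; they agree, and "20 GiB" is the usual rounding of 19.996 GiB:

    size = 21470642176
    assert size == 0x4FFC00000              # the hex form in the same line
    print(size / 2**30, "GiB")              # -> 19.99609375, logged as "20 GiB"
    print(4096 / 2**10, "KiB block size")   # -> 4.0
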
Oct 02 19:10:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:29 compute-0 bash[208102]: a1dbd8bcb63ed95687c9afb28ed554cb558cdc1e3edc4152a9c58674dece3403
Oct 02 19:10:29 compute-0 systemd[1]: Started Ceph osd.2 for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:10:29 compute-0 sudo[207196]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563963439800 /var/lib/ceph/osd/ceph-2/block) close
Oct 02 19:10:29 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3952184026; not ready for session (expect reconnect)
Oct 02 19:10:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:29 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 02 19:10:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:29 compute-0 sudo[208134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:29 compute-0 sudo[208134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:29 compute-0 sudo[208134]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:29 compute-0 sudo[208161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:29 compute-0 sudo[208161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:29 compute-0 sudo[208161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:29 compute-0 ceph-osd[208121]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 02 19:10:29 compute-0 ceph-osd[208121]: load: jerasure load: lrc 
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 02 19:10:29 compute-0 sudo[208186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:29 compute-0 sudo[208186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:29 compute-0 sudo[208186]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:29 compute-0 podman[157186]: time="2025-10-02T19:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:10:29 compute-0 sudo[208216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:10:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29173 "" "Go-http-client/1.1"
Oct 02 19:10:29 compute-0 sudo[208216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5808 "" "Go-http-client/1.1"
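The two podman access-log lines above are HTTP requests against the libpod REST API over a unix socket. A minimal sketch of making the same `containers/json` call from Python (the socket path is an assumption; rootful podman usually listens on /run/podman/podman.sock when the API service is enabled):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket -- enough to talk to the libpod REST API."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same endpoint the log shows: list all containers, external ones excluded.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
containers = json.loads(resp.read())
print(len(containers), "containers")
```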
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.236 iops: 5692.413 elapsed_sec: 0.527
Oct 02 19:10:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [WRN] : OSD bench result of 5692.413326 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
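The warning above fires because the 5692 IOPS bench result falls outside the 50-500 IOPS plausibility window for an HDD-class device (this OSD sits on a loop device), so the default 315 IOPS capacity is kept, and the log itself recommends measuring with fio and overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A hedged sketch of applying such an override, scoped to this one daemon (the IOPS value is a placeholder to be replaced with a measured figure):

```python
import subprocess

# Placeholder: replace with an IOPS figure measured with fio, as the
# warning recommends. 600 here is purely illustrative.
measured_iops = 600

# osd_mclock_max_capacity_iops_hdd is the option named in the warning;
# scoping the config to osd.1 limits the override to this daemon.
subprocess.run(
    ["ceph", "config", "set", "osd.1",
     "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
    check=True,
)
# Confirm what the daemon will see:
subprocess.run(
    ["ceph", "config", "get", "osd.1",
     "osd_mclock_max_capacity_iops_hdd"],
    check=True,
)
```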
Oct 02 19:10:29 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1[207102]: 2025-10-02T19:10:29.908+0000 7f74feb7a640 -1 osd.1 0 waiting for initial osdmap
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 0 waiting for initial osdmap
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:29 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 set_numa_affinity not setting numa affinity
Oct 02 19:10:29 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-1[207102]: 2025-10-02T19:10:29.933+0000 7f74f998b640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:29 compute-0 ceph-osd[207106]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.099850027 +0000 UTC m=+0.053105640 container create a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:30 compute-0 systemd[1]: Started libpod-conmon-a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44.scope.
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.075271185 +0000 UTC m=+0.028526888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.192126566 +0000 UTC m=+0.145382229 container init a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.201586688 +0000 UTC m=+0.154842311 container start a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
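The two mClockScheduler figures above are internally consistent with the 315 IOPS capacity reported for osd.1 a moment earlier: the per-shard bandwidth is exactly 150 MiB/s, and dividing it by the per-IO cost recovers roughly 315 IOPS. A quick cross-check (values copied from the log; the near-miss in the last digits is just print rounding):

```python
# Cross-check the two mClockScheduler figures logged above.
bytes_per_io = 499321.90
bytes_per_sec_per_shard = 157286400.0   # == 150 MiB/s

print(bytes_per_sec_per_shard / 2**20, "MiB/s per shard")  # 150.0
print(bytes_per_sec_per_shard / bytes_per_io)              # ~315.0
# ~315 IOPS matches the unchanged capacity from the bench warning.
```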
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964300c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.207360041 +0000 UTC m=+0.160615694 container attach a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs mount
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs mount shared_bdev_used = 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
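The bluefs lines above report the shared device in hex; converting them back shows they agree with the earlier bdev open line and makes the allocation granularity explicit:

```python
capacity = int("4ffc00000", 16)   # from bluefs _init_alloc
alloc_unit = int("10000", 16)     # bluefs block size

print(capacity)          # 21470642176 -- matches the bdev open size above
print(capacity / 2**30)  # ~19.996, which the log rounds to "20 GiB"
print(alloc_unit)        # 65536: bluefs allocates in 64 KiB units, while
                         # the bdev itself reports 4 KiB blocks
```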
Oct 02 19:10:30 compute-0 peaceful_edison[208299]: 167 167
Oct 02 19:10:30 compute-0 systemd[1]: libpod-a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44.scope: Deactivated successfully.
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.213310439 +0000 UTC m=+0.166566052 container died a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Git sha 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DB SUMMARY
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DB Session ID:  DKZRMAH130XYQH1Y4QLW
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                     Options.env: 0x5639642cdd50
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                Options.info_log: 0x5639634c0840
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.write_buffer_manager: 0x5639643da460
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Compression algorithms supported:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DMutex implementation: pthread_mutex_t
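What follows this DB summary is a long `Options.<name>: <value>` dump, repeated once per column family ([default], [m-0], and [m-1] on this OSD). When comparing tunables across OSDs or boots it is handy to reduce the dump to a dict; a minimal parsing sketch, assuming journalctl-style lines like the ones in this log:

```python
import re

# Matches "Options.<name>: <value>" (also the "name : value" spacing
# some RocksDB lines use) inside a journal line.
OPT_RE = re.compile(r"Options\.([\w.\[\]]+)\s*:\s*(.*)$")

def parse_rocksdb_options(lines):
    """Reduce a rocksdb startup dump to {option: value}."""
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2).strip()
    return opts

sample = [
    "Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:"
    "        Options.write_buffer_size: 16777216",
    "Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:"
    "          Options.compression: LZ4",
]
print(parse_rocksdb_options(sample))
# {'write_buffer_size': '16777216', 'compression': 'LZ4'}
```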
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
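The table_properties_collectors line above is RocksDB's stock CompactOnDeletionCollector: with a sliding window of 32768 entries and a trigger of 16384, an SST file is flagged for compaction as soon as half of any 32768-entry window is tombstones, which keeps delete-heavy OSD workloads from accumulating dead keys. A hedged sketch of how such a collector is attached through the public API (the helper name is invented; this is not Ceph's actual code path):

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Illustrative helper, not a Ceph or RocksDB symbol.
    rocksdb::ColumnFamilyOptions WithDeletionTriggeredCompaction() {
      rocksdb::ColumnFamilyOptions cf;
      // Matches the logged collector: window 32768, trigger 16384, ratio 0.
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return cf;
    }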
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
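The indented block above is BlockBasedTableOptions pretty-printed by RocksDB. Notable values: 4 KiB blocks, index and filter blocks kept in the block cache with the top-level index pinned, XXH3 checksums (checksum: 4), format_version 5, and a roughly 461 MiB cache. BinnedLRUCache is Ceph's own cache implementation (it cooperates with BlueStore's cache autotuner), so the sketch below substitutes stock NewLRUCache; the bloom filter's bits-per-key is not shown in the log, so the 10 below is purely a placeholder, and the helper name is invented:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Illustrative helper, not a Ceph or RocksDB symbol.
    rocksdb::ColumnFamilyOptions WithLoggedTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                        // block_size: 4096
      t.cache_index_and_filter_blocks = true;     // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;    // pin_top_level_index_and_filter: 1
      t.checksum = rocksdb::kXXH3;                // checksum: 4
      t.format_version = 5;                       // format_version: 5
      t.whole_key_filtering = true;               // whole_key_filtering: 1
      t.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10.0));   // bits/key not logged; 10 is a placeholder
      // Stand-in for Ceph's BinnedLRUCache: capacity 483183820, 4 shard bits.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }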
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
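Worth noting on the write path: write_buffer_size 16 MiB with max_write_buffer_number 64 allows up to 1 GiB of memtables per column family before writes stall, and min_write_buffer_number_to_merge 6 means about six 16 MiB memtables (~96 MiB, pre-compression) are merged into each flushed SST. A back-of-the-envelope check as a standalone demo, using only the logged numbers:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t write_buffer_size = 16777216;  // 16 MiB per memtable
      const uint64_t max_buffers = 64;              // max_write_buffer_number
      const uint64_t merge_min = 6;                 // min_write_buffer_number_to_merge
      std::printf("worst-case memtable RAM per CF: %llu MiB\n",
                  (unsigned long long)((write_buffer_size * max_buffers) >> 20));  // 1024
      std::printf("data merged per flush (pre-compression): %llu MiB\n",
                  (unsigned long long)((write_buffer_size * merge_min) >> 20));    // 96
    }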
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0240)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
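Across this dump, the sharded data column families (m-*, p-0/p-1/p-2) all print the same block_cache pointer (0x5639634ad1f0, capacity 483183820), while the [O-0] family that begins below, apparently BlueStore's onode prefix, gets a separate cache (0x5639634ad090, capacity 536870912, i.e. 512 MiB): the families share options but split the cache budget. A minimal standalone sketch of that sharing pattern with the public RocksDB API; the CF names come from the log, everything else (LRUCache in place of Ceph's BinnedLRUCache, the demo path, the lambda) is illustrative:

    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <vector>

    int main() {
      auto shard_cache = rocksdb::NewLRUCache(483183820, 4);  // shared by p-0/p-1/p-2
      auto onode_cache = rocksdb::NewLRUCache(536870912, 4);  // private to O-0
      // Illustrative factory for a CF bound to a given cache.
      auto make_cf = [](std::shared_ptr<rocksdb::Cache> cache) {
        rocksdb::BlockBasedTableOptions t;
        t.block_cache = std::move(cache);
        rocksdb::ColumnFamilyOptions cf;
        cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
        return cf;
      };
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
          {"p-0", make_cf(shard_cache)},
          {"p-1", make_cf(shard_cache)},  // same Cache object: one shared budget
          {"p-2", make_cf(shard_cache)},
          {"O-0", make_cf(onode_cache)},  // separate budget for this family
      };
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/cf-cache-demo", cfs, &handles, &db);
      (void)s;  // error handling elided in this sketch
      for (auto* h : handles) delete h;
      delete db;
    }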
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce23a8dc248bba8b47e9c2b65b1e8e7ba3049e43b976318fbfaf1b0d80c9ab6-merged.mount: Deactivated successfully.
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0260)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5215c463-60d8-4c85-8935-ffbff2ba05af
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230256886, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230259447, "job": 1, "event": "recovery_finished"}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: freelist init
Oct 02 19:10:30 compute-0 ceph-osd[208121]: freelist _read_cfg
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs umount
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) close
Oct 02 19:10:30 compute-0 podman[208283]: 2025-10-02 19:10:30.272543661 +0000 UTC m=+0.225799274 container remove a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 02 19:10:30 compute-0 systemd[1]: libpod-conmon-a03ee0e9ca65c8a5a66ce4e079e365b0620745eb625132dd47de913397562a44.scope: Deactivated successfully.
Oct 02 19:10:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 02 19:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct 02 19:10:30 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026] boot
Oct 02 19:10:30 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct 02 19:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 02 19:10:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:30 compute-0 ceph-osd[207106]: osd.1 12 state: booting -> active
Oct 02 19:10:30 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:30 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bdev(0x563964301400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs mount
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluefs mount shared_bdev_used = 4718592
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: RocksDB version: 7.9.2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Git sha 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DB SUMMARY
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DB Session ID:  DKZRMAH130XYQH1Y4QLX
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: CURRENT file:  CURRENT
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.error_if_exists: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.create_if_missing: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                     Options.env: 0x56396446a230
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                Options.info_log: 0x5639634c05c0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.statistics: (nil)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.use_fsync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.db_log_dir: 
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.write_buffer_manager: 0x5639643da460
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.unordered_write: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.row_cache: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                              Options.wal_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.two_write_queues: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.wal_compression: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.atomic_flush: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_background_jobs: 4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_background_compactions: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_subcompactions: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.max_open_files: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Compression algorithms supported:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZSTD supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kXpressCompression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kBZip2Compression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kLZ4Compression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kZlibCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         kSnappyCompression supported: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DMutex implementation: pthread_mutex_t
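
The startup banner above ends by enumerating the compression codecs this ceph-osd's bundled RocksDB was built with: LZ4, Zlib, LZ4HC and Snappy are available, while ZSTD, Xpress and BZip2 were not compiled in. A minimal stand-alone sketch (assuming a RocksDB development setup; this is not Ceph's code) that queries the same list through RocksDB's public rocksdb::GetSupportedCompressions() helper; the printed integers are RocksDB's CompressionType enum values:

    #include <iostream>
    #include "rocksdb/convenience.h"  // rocksdb::GetSupportedCompressions()

    int main() {
      // One id per codec the linked RocksDB build supports, mirroring the
      // "Compression algorithms supported" lines in the log above.
      for (const rocksdb::CompressionType t :
           rocksdb::GetSupportedCompressions()) {
        std::cout << "supported CompressionType id: "
                  << static_cast<int>(t) << "\n";
      }
      return 0;
    }
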
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
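
This first dump covers the [default] column family; the near-identical blocks that follow ([m-0], [m-1], [m-2], [p-0], ...) are BlueStore's sharded column families, each opened with its own ColumnFamilyOptions. A minimal generic sketch of that RocksDB pattern, with nothing Ceph-specific: the path "db" and the CF list are illustrative, and the option subset copies values from the dump above (16 MiB write buffers, 64 memtables, merge threshold 6, LZ4, level-style compaction):

    #include <cassert>
    #include <vector>
    #include "rocksdb/db.h"

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;

      // Values taken from the per-CF dump above.
      rocksdb::ColumnFamilyOptions cf_opts;
      cf_opts.write_buffer_size = 16 * 1024 * 1024;        // 16777216
      cf_opts.max_write_buffer_number = 64;
      cf_opts.min_write_buffer_number_to_merge = 6;
      cf_opts.compression = rocksdb::kLZ4Compression;
      cf_opts.level0_file_num_compaction_trigger = 8;
      cf_opts.target_file_size_base = 64ull * 1024 * 1024; // 67108864
      cf_opts.max_bytes_for_level_base = 1ull << 30;       // 1073741824
      cf_opts.max_bytes_for_level_multiplier = 8;

      // Every CF gets its own descriptor; RocksDB logs one options block
      // per descriptor at open, which is what produces the dumps above.
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"m-0", cf_opts}, {"m-1", cf_opts},
          {"m-2", cf_opts}, {"p-0", cf_opts}};

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(db_opts, "db", cfs, &handles, &db);
      assert(s.ok());
      for (auto* h : handles) delete h;
      delete db;
      return 0;
    }
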
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
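
In Ceph these per-CF values are not set field by field; most arrive as a single option string via the bluestore_rocksdb_options config key and are handed to RocksDB's option-string parser. A hedged stand-alone sketch of the public parser, rocksdb::GetColumnFamilyOptionsFromString (RocksDB 7.x signature taking a ConfigOptions); the semicolon-separated string here is illustrative, not the verbatim Ceph default:

    #include <cassert>
    #include "rocksdb/convenience.h"
    #include "rocksdb/options.h"

    int main() {
      rocksdb::ConfigOptions cfg;
      rocksdb::ColumnFamilyOptions base, parsed;
      // Illustrative option string; values mirror the dumps above.
      rocksdb::Status s = rocksdb::GetColumnFamilyOptionsFromString(
          cfg, base,
          "write_buffer_size=16777216;max_write_buffer_number=64;"
          "min_write_buffer_number_to_merge=6;compression=kLZ4Compression",
          &parsed);
      assert(s.ok());
      assert(parsed.max_write_buffer_number == 64);
      return 0;
    }
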
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
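
Every dump lists the same table-properties collector: CompactOnDeletionCollector with a 32768-entry sliding window and a 16384-tombstone trigger, which marks an SST file for compaction once deletions dominate a window. A sketch of wiring that collector up through RocksDB's public factory; only the three values are taken from the log, the surrounding function is illustrative:

    #include "rocksdb/options.h"
    #include "rocksdb/utilities/table_properties_collectors.h"

    rocksdb::ColumnFamilyOptions WithDeletionTriggeredCompaction(
        rocksdb::ColumnFamilyOptions cf) {
      // Matches "Sliding window size = 32768 Deletion trigger = 16384
      // Deletion ratio = 0" from the dumps above.
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return cf;
    }
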
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 podman[208515]: 2025-10-02 19:10:30.519892455 +0000 UTC m=+0.078821873 container create 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
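
The table_factory blocks repeated above all point at one shared block cache (the same 0x5639634ad1f0 pointer: capacity 483183820 bytes, 4 shard bits). BinnedLRUCache is Ceph's own cache implementation; a hedged sketch of the equivalent stock-RocksDB wiring, using rocksdb::NewLRUCache as a stand-in and assuming 10 bits per key for the bloom filter (the dump only says "bloomfilter"):

    #include "rocksdb/cache.h"
    #include "rocksdb/filter_policy.h"
    #include "rocksdb/options.h"
    #include "rocksdb/table.h"

    rocksdb::ColumnFamilyOptions MakeCfOptions() {
      rocksdb::BlockBasedTableOptions t;
      // Shared LRU cache standing in for Ceph's BinnedLRUCache; capacity
      // and shard bits copied from the block_cache_options above.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;   // as logged: 1
      t.pin_top_level_index_and_filter = true;  // as logged: 1
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed
      t.format_version = 5;

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

Passing the same shared_ptr cache into every column family's table options is what makes all the dumps report one block_cache address.
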
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c09e0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0360)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0360)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:           Options.merge_operator: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5639634c0360)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5639634ad090
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.compression: LZ4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.num_levels: 7
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.bloom_locality: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                               Options.ttl: 2592000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                       Options.enable_blob_files: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                           Options.min_blob_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5215c463-60d8-4c85-8935-ffbff2ba05af
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230563671, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230569961, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432230, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5215c463-60d8-4c85-8935-ffbff2ba05af", "db_session_id": "DKZRMAH130XYQH1Y4QLX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:30 compute-0 podman[208515]: 2025-10-02 19:10:30.481582489 +0000 UTC m=+0.040511917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230575622, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432230, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5215c463-60d8-4c85-8935-ffbff2ba05af", "db_session_id": "DKZRMAH130XYQH1Y4QLX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230580930, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432230, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5215c463-60d8-4c85-8935-ffbff2ba05af", "db_session_id": "DKZRMAH130XYQH1Y4QLX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432230583553, "job": 1, "event": "recovery_finished"}
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 19:10:30 compute-0 systemd[1]: Started libpod-conmon-363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672.scope.
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56396361a000
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: DB pointer 0x5639643bfa00
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 02 19:10:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
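The bracketed tags on these dumps ([p-2], [O-0] through [O-2], [L], [P]) look like BlueStore's sharded RocksDB column families rather than Ceph pools: the default bluestore_rocksdb_cfs setting shards the p and O families into several shards and gives O its own binned_lru block cache, which would also explain why the O shards report a different BinnedLRUCache instance above (…ad090, 512.00 MB) than the others (…ad1f0, 460.80 MB). A hedged way to confirm the sharding spec on this cluster:

    # Show the column-family sharding spec the OSD is using (hedged sketch):
    ceph config get osd.2 bluestore_rocksdb_cfs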
Oct 02 19:10:30 compute-0 ceph-osd[208121]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 19:10:30 compute-0 ceph-osd[208121]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 19:10:30 compute-0 ceph-osd[208121]: _get_class not permitted to load lua
Oct 02 19:10:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:30 compute-0 ceph-osd[208121]: _get_class not permitted to load sdk
Oct 02 19:10:30 compute-0 ceph-osd[208121]: _get_class not permitted to load test_remote_reads
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 load_pgs
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 load_pgs opened 0 pgs
Oct 02 19:10:30 compute-0 ceph-osd[208121]: osd.2 0 log_to_monitors true
Oct 02 19:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e916723d0fc19471bfac13a3bb816023c2e770243363bb2f1b51de28665355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:30 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2[208117]: 2025-10-02T19:10:30.624+0000 7f23c3778740 -1 osd.2 0 log_to_monitors true
Oct 02 19:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e916723d0fc19471bfac13a3bb816023c2e770243363bb2f1b51de28665355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e916723d0fc19471bfac13a3bb816023c2e770243363bb2f1b51de28665355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e916723d0fc19471bfac13a3bb816023c2e770243363bb2f1b51de28665355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
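These xfs remount notices are informational: 0x7fffffff is the signed 32-bit time_t limit, i.e. 2038-01-19T03:14:07Z, so the overlay mounts in question simply lack the xfs bigtime feature that extends inode timestamps beyond 2038.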
Oct 02 19:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 02 19:10:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 19:10:30 compute-0 podman[208515]: 2025-10-02 19:10:30.649729491 +0000 UTC m=+0.208658929 container init 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:30 compute-0 podman[208515]: 2025-10-02 19:10:30.672973198 +0000 UTC m=+0.231902606 container start 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:30 compute-0 podman[208515]: 2025-10-02 19:10:30.677655402 +0000 UTC m=+0.236584800 container attach 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:10:30 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:10:30 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:10:31 compute-0 ceph-mon[191910]: pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 02 19:10:31 compute-0 ceph-mon[191910]: OSD bench result of 5692.413326 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
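The bench sanity check behaves as the message describes: the measured 5692 IOPS falls outside the [50, 500] window for what the CRUSH map classifies as an hdd device, so the default capacity of 315 IOPS is kept, and the suggested remedy is to benchmark the device yourself and override the option named in the warning. A hedged sketch of that follow-up (the fio target is osd.1's data device from the inventory JSON later in this log; <measured_iops> is a placeholder):

    # Destructive raw benchmark of the OSD data device -- only on a scratch/rebuildable OSD:
    fio --name=osdbench --filename=/dev/mapper/ceph_vg1-ceph_lv1 --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --output-format=json
    # Pin the mclock capacity for this OSD to the measured figure:
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <measured_iops>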
Oct 02 19:10:31 compute-0 ceph-mon[191910]: osd.1 [v2:192.168.122.100:6806/3952184026,v1:192.168.122.100:6807/3952184026] boot
Oct 02 19:10:31 compute-0 ceph-mon[191910]: osdmap e12: 3 total, 2 up, 3 in
Oct 02 19:10:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 19:10:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:31 compute-0 ceph-mon[191910]: from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:31 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:31 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] creating main.db for devicehealth
Oct 02 19:10:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Oct 02 19:10:31 compute-0 openstack_network_exporter[159337]: ERROR   19:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:10:31 compute-0 openstack_network_exporter[159337]: ERROR   19:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:10:31 compute-0 openstack_network_exporter[159337]: ERROR   19:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:10:31 compute-0 openstack_network_exporter[159337]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Oct 02 19:10:31 compute-0 openstack_network_exporter[159337]: ERROR   19:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:10:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 19:10:31 compute-0 ceph-mgr[192222]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 19:10:31 compute-0 sudo[208777]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 02 19:10:31 compute-0 sudo[208777]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:10:31 compute-0 sudo[208777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 02 19:10:31 compute-0 sudo[208777]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
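The devicehealth failure above ("Fail to parse JSON result from daemon osd.2 ()") is an empty SMART reply: the mgr queries the OSD over its admin socket, and the OSD shells out through sudo to smartctl, exactly as the sudo line records. Since osd.2 is still mid-boot at this point (its start_boot appears further down), an empty result is plausible. A hedged sketch for reproducing the same probe by hand, with the smartctl invocation copied from the log:

    # SMART probe as the OSD runs it (device path from the log; VM disks often return little or nothing):
    sudo /usr/sbin/smartctl -x --json=o /dev/vda
    # Devices the mgr devicehealth module currently knows about:
    ceph device ls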
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:10:31 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 19:10:31 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 19:10:31 compute-0 laughing_ellis[208713]: {
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_id": 1,
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "type": "bluestore"
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     },
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_id": 2,
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "type": "bluestore"
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     },
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_id": 0,
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:         "type": "bluestore"
Oct 02 19:10:31 compute-0 laughing_ellis[208713]:     }
Oct 02 19:10:31 compute-0 laughing_ellis[208713]: }
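This JSON block, emitted by the short-lived "laughing_ellis" container, is a per-OSD inventory keyed by osd_uuid: all three OSDs share the cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9 and sit on LVM logical volumes (ceph_vg0-2/ceph_lv0-2) as bluestore. The shape matches ceph-volume's raw/lvm list output as gathered by cephadm; a hedged sketch of an equivalent manual query, modelled on the cephadm ceph-volume invocations elsewhere in this log:

    # List BlueStore OSDs on this host (hedged; mirrors the wrapper call logged below):
    sudo cephadm ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list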
Oct 02 19:10:31 compute-0 systemd[1]: libpod-363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672.scope: Deactivated successfully.
Oct 02 19:10:31 compute-0 podman[208515]: 2025-10-02 19:10:31.760642715 +0000 UTC m=+1.319572173 container died 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:31 compute-0 systemd[1]: libpod-363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672.scope: Consumed 1.087s CPU time.
Oct 02 19:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e916723d0fc19471bfac13a3bb816023c2e770243363bb2f1b51de28665355-merged.mount: Deactivated successfully.
Oct 02 19:10:31 compute-0 podman[208515]: 2025-10-02 19:10:31.851720442 +0000 UTC m=+1.410649840 container remove 363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ellis, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 19:10:31 compute-0 systemd[1]: libpod-conmon-363a346e295235f8820cba5f32ad661653b4276fd00184e2d55020bb1b2e2672.scope: Deactivated successfully.
Oct 02 19:10:31 compute-0 sudo[208216]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:32 compute-0 sudo[208806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:32 compute-0 sudo[208806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:32 compute-0 sudo[208806]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:32 compute-0 sudo[208831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:10:32 compute-0 sudo[208831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:32 compute-0 sudo[208831]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:32 compute-0 sudo[208856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:32 compute-0 sudo[208856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:32 compute-0 sudo[208856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 02 19:10:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 19:10:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 done with init, starting boot process
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 start_boot
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 02 19:10:32 compute-0 ceph-osd[208121]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 02 19:10:32 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 02 19:10:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 19:10:32 compute-0 ceph-mon[191910]: osdmap e13: 3 total, 2 up, 3 in
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:32 compute-0 ceph-mon[191910]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:32 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:32 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/684409079; not ready for session (expect reconnect)
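The repeating "failed to return metadata for osd.2" / "not ready for session (expect reconnect)" pairs are the mgr polling for an OSD that has not finished booting: osd.2 only reaches start_boot a few lines above and is still waiting for its initial osdmap further down (19:10:35), so the mon has no metadata to hand back yet and the mgr declines the session until the daemon reconnects after boot.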
Oct 02 19:10:32 compute-0 sudo[208881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:32 compute-0 sudo[208881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:32 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:32 compute-0 sudo[208881]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:32 compute-0 sudo[208906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:32 compute-0 sudo[208906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:32 compute-0 sudo[208906]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:32 compute-0 sudo[208931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:10:32 compute-0 sudo[208931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:33 compute-0 podman[209024]: 2025-10-02 19:10:33.314620997 +0000 UTC m=+0.100717014 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/684409079; not ready for session (expect reconnect)
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:33 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.uktbkz(active, since 90s)
Oct 02 19:10:33 compute-0 podman[209024]: 2025-10-02 19:10:33.410589454 +0000 UTC m=+0.196685451 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Oct 02 19:10:33 compute-0 ceph-mon[191910]: from='osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 19:10:33 compute-0 ceph-mon[191910]: osdmap e14: 3 total, 2 up, 3 in
Oct 02 19:10:33 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:33 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:10:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
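The mon cache resize line is internally consistent: inc_alloc + full_alloc + kv_alloc = 348127232 + 348127232 + 322961408 = 1019215872 bytes, within about 0.1% of the reported cache_size of 1020054731, which suggests the target is being split between incremental osdmaps, full osdmaps, and the RocksDB kv cache.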
Oct 02 19:10:34 compute-0 sudo[208931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:34 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:34 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:34 compute-0 sudo[209135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:34 compute-0 sudo[209135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:34 compute-0 sudo[209135]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:34 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/684409079; not ready for session (expect reconnect)
Oct 02 19:10:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:34 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:34 compute-0 ceph-mon[191910]: purged_snaps scrub starts
Oct 02 19:10:34 compute-0 ceph-mon[191910]: purged_snaps scrub ok
Oct 02 19:10:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:34 compute-0 ceph-mon[191910]: mgrmap e9: compute-0.uktbkz(active, since 90s)
Oct 02 19:10:34 compute-0 ceph-mon[191910]: pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Oct 02 19:10:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:34 compute-0 sudo[209160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:34 compute-0 sudo[209160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:34 compute-0 sudo[209160]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:34 compute-0 sudo[209185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:34 compute-0 sudo[209185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:34 compute-0 sudo[209185]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:34 compute-0 sudo[209210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:10:34 compute-0 sudo[209210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:35 compute-0 sudo[209210]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:35 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/684409079; not ready for session (expect reconnect)
Oct 02 19:10:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:35 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:35 compute-0 sudo[209265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:35 compute-0 sudo[209265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:35 compute-0 sudo[209265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 02 19:10:35 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:35 compute-0 sudo[209297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:35 compute-0 sudo[209297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:35 compute-0 sudo[209297]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:35 compute-0 podman[209290]: 2025-10-02 19:10:35.512793227 +0000 UTC m=+0.094982192 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:10:35 compute-0 podman[209289]: 2025-10-02 19:10:35.545755462 +0000 UTC m=+0.142079942 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:10:35 compute-0 sudo[209353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:35 compute-0 sudo[209353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:35 compute-0 sudo[209353]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:35 compute-0 sudo[209383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- inventory --format=json-pretty --filter-for-batch
Oct 02 19:10:35 compute-0 sudo[209383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.173 iops: 5164.385 elapsed_sec: 0.581
Oct 02 19:10:35 compute-0 ceph-osd[208121]: log_channel(cluster) log [WRN] : OSD bench result of 5164.384813 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
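These bench numbers are self-consistent with the earlier "bench count 12288000 bsize 4 KiB" line: 12288000 B / 4096 B = 3000 writes, 3000 / 0.581 s ≈ 5164 IOPS, and 5164 × 4 KiB ≈ 20.2 MiB/s, matching the reported bandwidth. As with osd.1, the result falls outside the [50, 500] hdd window, so osd.2 also stays at the 315 IOPS default (see the override sketch after the osd.1 warning above).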
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 0 waiting for initial osdmap
Oct 02 19:10:35 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2[208117]: 2025-10-02T19:10:35.934+0000 7f23bf6f8640 -1 osd.2 0 waiting for initial osdmap
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 set_numa_affinity not setting numa affinity
Oct 02 19:10:35 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-osd-2[208117]: 2025-10-02T19:10:35.975+0000 7f23bad20640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 19:10:35 compute-0 ceph-osd[208121]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.234144671 +0000 UTC m=+0.078785332 container create ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.207473853 +0000 UTC m=+0.052114494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:36 compute-0 systemd[1]: Started libpod-conmon-ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac.scope.
Oct 02 19:10:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:36 compute-0 ceph-mgr[192222]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/684409079; not ready for session (expect reconnect)
Oct 02 19:10:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:36 compute-0 ceph-mgr[192222]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.371087105 +0000 UTC m=+0.215727766 container init ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.382923139 +0000 UTC m=+0.227563780 container start ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.389197106 +0000 UTC m=+0.233837757 container attach ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:36 compute-0 sweet_austin[209465]: 167 167
Oct 02 19:10:36 compute-0 systemd[1]: libpod-ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac.scope: Deactivated successfully.
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.395926724 +0000 UTC m=+0.240567375 container died ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-edb24ce7ba059bedbd9ae5dd82819da8d4be0623f8ffc8ae0ee02f115ed9d4a3-merged.mount: Deactivated successfully.
Oct 02 19:10:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 02 19:10:36 compute-0 ceph-mon[191910]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 02 19:10:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:36 compute-0 podman[209449]: 2025-10-02 19:10:36.471208842 +0000 UTC m=+0.315849503 container remove ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:10:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Oct 02 19:10:36 compute-0 systemd[1]: libpod-conmon-ed69425d26ebda0b2838133bcecf579aeb9a69fe6b0e1c9f813ff0716620c0ac.scope: Deactivated successfully.
Oct 02 19:10:36 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079] boot
Oct 02 19:10:36 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Oct 02 19:10:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 02 19:10:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:36 compute-0 ceph-osd[208121]: osd.2 15 state: booting -> active
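Note: with "state: booting -> active" the bring-up that began at "waiting for initial osdmap" is complete, matching the mon's "osdmap e15: 3 total, 3 up, 3 in" above. The same state can be confirmed from the host, a sketch assuming cephadm shell access to the admin keyring:

    # list only OSDs currently up; all three should appear
    sudo cephadm shell -- ceph osd tree up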
Oct 02 19:10:36 compute-0 podman[209489]: 2025-10-02 19:10:36.69379195 +0000 UTC m=+0.067630576 container create c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:36 compute-0 systemd[1]: Started libpod-conmon-c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e.scope.
Oct 02 19:10:36 compute-0 podman[209489]: 2025-10-02 19:10:36.664663817 +0000 UTC m=+0.038502533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d95e0cab267abbc53c951e57996ee69e77ac40a3c671f947268560934f1fa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d95e0cab267abbc53c951e57996ee69e77ac40a3c671f947268560934f1fa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d95e0cab267abbc53c951e57996ee69e77ac40a3c671f947268560934f1fa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d95e0cab267abbc53c951e57996ee69e77ac40a3c671f947268560934f1fa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:36 compute-0 podman[209489]: 2025-10-02 19:10:36.83925945 +0000 UTC m=+0.213098176 container init c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:36 compute-0 podman[209489]: 2025-10-02 19:10:36.850529759 +0000 UTC m=+0.224368395 container start c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:10:36 compute-0 podman[209489]: 2025-10-02 19:10:36.855697757 +0000 UTC m=+0.229536413 container attach c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:10:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 02 19:10:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Oct 02 19:10:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Oct 02 19:10:37 compute-0 ceph-mon[191910]: OSD bench result of 5164.384813 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 19:10:37 compute-0 ceph-mon[191910]: osd.2 [v2:192.168.122.100:6810/684409079,v1:192.168.122.100:6811/684409079] boot
Oct 02 19:10:37 compute-0 ceph-mon[191910]: osdmap e15: 3 total, 3 up, 3 in
Oct 02 19:10:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 19:10:38 compute-0 ceph-mon[191910]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:38 compute-0 ceph-mon[191910]: osdmap e16: 3 total, 3 up, 3 in
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]: [
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:     {
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "available": false,
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "ceph_device": false,
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "lsm_data": {},
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "lvs": [],
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "path": "/dev/sr0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "rejected_reasons": [
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "Insufficient space (<5GB)",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "Has a FileSystem"
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         ],
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         "sys_api": {
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "actuators": null,
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "device_nodes": "sr0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "devname": "sr0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "human_readable_size": "482.00 KB",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "id_bus": "ata",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "model": "QEMU DVD-ROM",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "nr_requests": "2",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "parent": "/dev/sr0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "partitions": {},
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "path": "/dev/sr0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "removable": "1",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "rev": "2.5+",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "ro": "0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "rotational": "0",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "sas_address": "",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "sas_device_handle": "",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "scheduler_mode": "mq-deadline",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "sectors": 0,
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "sectorsize": "2048",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "size": 493568.0,
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "support_discard": "2048",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "type": "disk",
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:             "vendor": "QEMU"
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:         }
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]:     }
Oct 02 19:10:38 compute-0 peaceful_euclid[209505]: ]
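Note: the JSON above is the ceph-volume inventory report gathered for the batch planner; /dev/sr0 is marked unavailable with rejected_reasons "Insufficient space (<5GB)" and "Has a FileSystem", so it will never be considered for an OSD. A sketch for pulling just the rejected devices out of such a report, assuming jq is installed on the host and that cephadm passes the ceph-volume JSON through on stdout:

    sudo cephadm ceph-volume -- inventory --format json \
        | jq '.[] | select(.available == false) | {path, rejected_reasons}'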
Oct 02 19:10:39 compute-0 systemd[1]: libpod-c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e.scope: Deactivated successfully.
Oct 02 19:10:39 compute-0 systemd[1]: libpod-c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e.scope: Consumed 2.306s CPU time.
Oct 02 19:10:39 compute-0 conmon[209505]: conmon c5231ed8f6418b2c3c9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e.scope/container/memory.events
Oct 02 19:10:39 compute-0 podman[209489]: 2025-10-02 19:10:39.037271056 +0000 UTC m=+2.411109692 container died c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-96d95e0cab267abbc53c951e57996ee69e77ac40a3c671f947268560934f1fa3-merged.mount: Deactivated successfully.
Oct 02 19:10:39 compute-0 podman[209489]: 2025-10-02 19:10:39.111772593 +0000 UTC m=+2.485611219 container remove c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:10:39 compute-0 systemd[1]: libpod-conmon-c5231ed8f6418b2c3c9dc96f773f9e8d25a80cb426ffe4bef65a0337a1d21c7e.scope: Deactivated successfully.
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:39 compute-0 sudo[209383]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43639k
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43639k
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
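Note: the INF/WRN pair above is cephadm's per-host memory autotuner at work: on this small node it computed roughly 45 MB per OSD, far below the hard floor of 939524096 bytes (896 MiB), so the mon refused the value and the previous target stands. On memory-constrained lab hosts one option is to disable the autotuner and accept the defaults, a sketch assuming the standard option name:

    # stop cephadm from recomputing osd_memory_target for this cluster's OSDs
    ceph config set osd osd_memory_target_autotune false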
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8c3f27ec-0dc9-46ce-abdf-0d1577eafa51 does not exist
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d05d6ff3-0778-4139-9813-2364e37af623 does not exist
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b11799dc-f039-4462-80e2-9550508ebf37 does not exist
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:10:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:39 compute-0 sudo[211624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:39 compute-0 sudo[211624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:39 compute-0 sudo[211624]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:39 compute-0 sudo[211649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:39 compute-0 sudo[211649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:39 compute-0 sudo[211649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:39 compute-0 sudo[211674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:39 compute-0 sudo[211674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:39 compute-0 sudo[211674]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:39 compute-0 podman[211692]: 2025-10-02 19:10:39.695447063 +0000 UTC m=+0.127191326 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Oct 02 19:10:39 compute-0 sudo[211721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:10:39 compute-0 sudo[211721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:39 compute-0 podman[211696]: 2025-10-02 19:10:39.740637113 +0000 UTC m=+0.167662731 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
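Note: the two health_status=healthy records above are podman executing the edpm-managed healthchecks (the 'test' command from config_data, mounted at /openstack/healthcheck). When a failing streak needs debugging, the same probe can be fired by hand, a sketch using the container name from the log:

    # run the configured healthcheck once; exit status 0 means healthy
    sudo podman healthcheck run openstack_network_exporter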
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.148200099 +0000 UTC m=+0.046692100 container create ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: Adjusting osd_memory_target on compute-0 to 43639k
Oct 02 19:10:40 compute-0 ceph-mon[191910]: Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:10:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:40 compute-0 systemd[1]: Started libpod-conmon-ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85.scope.
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.131866306 +0000 UTC m=+0.030358307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.253002361 +0000 UTC m=+0.151494452 container init ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.268716728 +0000 UTC m=+0.167208769 container start ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:10:40 compute-0 gifted_pare[211819]: 167 167
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.275338504 +0000 UTC m=+0.173830545 container attach ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:40 compute-0 systemd[1]: libpod-ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85.scope: Deactivated successfully.
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.299035973 +0000 UTC m=+0.197527984 container died ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-25c92fe761550a135b1ab08b55e033f9e1c063d07c53019dafdbe9954867e7c2-merged.mount: Deactivated successfully.
Oct 02 19:10:40 compute-0 podman[211803]: 2025-10-02 19:10:40.365459045 +0000 UTC m=+0.263951056 container remove ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:40 compute-0 systemd[1]: libpod-conmon-ff08f79b0d24cab88284a0450f8f46d28f239a87e005d62304897cd94ef67a85.scope: Deactivated successfully.
Oct 02 19:10:40 compute-0 podman[211843]: 2025-10-02 19:10:40.642852867 +0000 UTC m=+0.088244343 container create 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:40 compute-0 podman[211843]: 2025-10-02 19:10:40.607086568 +0000 UTC m=+0.052478094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:40 compute-0 systemd[1]: Started libpod-conmon-3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838.scope.
Oct 02 19:10:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:40 compute-0 podman[211843]: 2025-10-02 19:10:40.760248583 +0000 UTC m=+0.205640029 container init 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:10:40 compute-0 podman[211843]: 2025-10-02 19:10:40.776847264 +0000 UTC m=+0.222238710 container start 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:40 compute-0 podman[211843]: 2025-10-02 19:10:40.782941045 +0000 UTC m=+0.228332491 container attach 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:41 compute-0 ceph-mon[191910]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:42 compute-0 focused_bose[211858]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:10:42 compute-0 focused_bose[211858]: --> relative data size: 1.0
Oct 02 19:10:42 compute-0 focused_bose[211858]: --> All data devices are unavailable
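Note: "All data devices are unavailable" is the lvm batch planner concluding that the three pre-created LVs (ceph_vg0-2/ceph_lv0-2) are not free for new OSDs, consistent with osd.0-2 already running on them; the orchestrator's "lvm list" a few lines below re-reads what each LV holds. A host-side way to see the same thing, assuming the usual ceph-volume LVM tagging:

    # ceph-volume records OSD ownership as LVM tags on each LV
    sudo lvs -o lv_name,vg_name,lv_tags | grep ceph.osd_id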
Oct 02 19:10:42 compute-0 systemd[1]: libpod-3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838.scope: Deactivated successfully.
Oct 02 19:10:42 compute-0 systemd[1]: libpod-3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838.scope: Consumed 1.276s CPU time.
Oct 02 19:10:42 compute-0 podman[211843]: 2025-10-02 19:10:42.097612447 +0000 UTC m=+1.543003963 container died 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1526c531273a078e1c9ed071ce45207e7a36faaf5d4a0251cb1a5f601d6893-merged.mount: Deactivated successfully.
Oct 02 19:10:42 compute-0 podman[211843]: 2025-10-02 19:10:42.204954936 +0000 UTC m=+1.650346372 container remove 3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bose, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:42 compute-0 systemd[1]: libpod-conmon-3ee11565ecef265be697c4b189d15f71536f4fe28c403853b40fb10d30f8a838.scope: Deactivated successfully.
Oct 02 19:10:42 compute-0 sudo[211721]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:42 compute-0 sudo[211899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:42 compute-0 sudo[211899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:42 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:10:42 compute-0 sudo[211899]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:42 compute-0 sudo[211925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:42 compute-0 sudo[211925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:42 compute-0 sudo[211925]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:42 compute-0 sudo[211950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:42 compute-0 sudo[211950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:42 compute-0 sudo[211950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:42 compute-0 sudo[211975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:10:42 compute-0 sudo[211975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.02916692 +0000 UTC m=+0.059871680 container create e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:43 compute-0 systemd[1]: Started libpod-conmon-e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba.scope.
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.00884166 +0000 UTC m=+0.039546450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.147454899 +0000 UTC m=+0.178159709 container init e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.157348602 +0000 UTC m=+0.188053372 container start e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.163209137 +0000 UTC m=+0.193913927 container attach e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:10:43 compute-0 objective_kalam[212052]: 167 167
Oct 02 19:10:43 compute-0 systemd[1]: libpod-e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba.scope: Deactivated successfully.
Oct 02 19:10:43 compute-0 conmon[212052]: conmon e6e8a21495cfd710edc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba.scope/container/memory.events
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.166584927 +0000 UTC m=+0.197289707 container died e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aefc7a62cc6749d43f837a0afd24f864e1a355bc78c34343e0bd46a8d786822-merged.mount: Deactivated successfully.
Oct 02 19:10:43 compute-0 podman[212037]: 2025-10-02 19:10:43.241501895 +0000 UTC m=+0.272206665 container remove e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:10:43 compute-0 ceph-mon[191910]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:43 compute-0 systemd[1]: libpod-conmon-e6e8a21495cfd710edc7ead7db88275a525660f68eb97d66b243e5bf99bca4ba.scope: Deactivated successfully.
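Note: the create/attach/died/remove sequence for objective_kalam above is one short-lived cephadm probe container, and the bare "167 167" it printed is the ceph uid/gid inside the image. A plausible by-hand reproduction (assuming this was cephadm's usual stat-based uid/gid probe; the image digest is taken from the log):

    # Print the owner uid/gid of /var/lib/ceph inside the image -- 167 is the
    # ceph user in upstream Ceph containers.
    $ podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph
    167 167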
Oct 02 19:10:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:43 compute-0 podman[212076]: 2025-10-02 19:10:43.471322695 +0000 UTC m=+0.063758364 container create 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:43 compute-0 systemd[1]: Started libpod-conmon-4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082.scope.
Oct 02 19:10:43 compute-0 podman[212076]: 2025-10-02 19:10:43.44667053 +0000 UTC m=+0.039106189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fbb5749f588b55dbe453a73ccb6f2a08b37fbae532d6ee003ee5d6c407411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fbb5749f588b55dbe453a73ccb6f2a08b37fbae532d6ee003ee5d6c407411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fbb5749f588b55dbe453a73ccb6f2a08b37fbae532d6ee003ee5d6c407411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37fbb5749f588b55dbe453a73ccb6f2a08b37fbae532d6ee003ee5d6c407411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
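Note: the kernel prints one "supports timestamps until 2038" notice per bind-mounted path whenever the backing XFS filesystem was created without big (64-bit) timestamps; it is informational, not an error. One check per filesystem suffices:

    # bigtime=0 means 32-bit inode timestamps, which is what triggers the
    # y2038 remount notices above; bigtime=1 filesystems are unaffected.
    $ xfs_info "$(findmnt -no TARGET -T /var/lib/containers)" | grep -o 'bigtime=[01]'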
Oct 02 19:10:43 compute-0 podman[212076]: 2025-10-02 19:10:43.594156985 +0000 UTC m=+0.186592654 container init 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:10:43 compute-0 podman[212076]: 2025-10-02 19:10:43.611520265 +0000 UTC m=+0.203955914 container start 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:43 compute-0 podman[212076]: 2025-10-02 19:10:43.616682282 +0000 UTC m=+0.209117941 container attach 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:10:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
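Note: _set_new_cache_sizes is the monitor auto-tuning its cache split (inc/full osdmap and kv allocations); the figures above appear to be derived from the mon's configured memory target, which can be read back with:

    # Configured mon memory target the cache split is (apparently) tuned against.
    $ ceph config get mon mon_memory_target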
Oct 02 19:10:44 compute-0 gallant_mayer[212093]: {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     "0": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "devices": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "/dev/loop3"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             ],
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_name": "ceph_lv0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_size": "21470642176",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "name": "ceph_lv0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "tags": {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.crush_device_class": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.encrypted": "0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_id": "0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.vdo": "0"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             },
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "vg_name": "ceph_vg0"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         }
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     ],
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     "1": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "devices": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "/dev/loop4"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             ],
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_name": "ceph_lv1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_size": "21470642176",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "name": "ceph_lv1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "tags": {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.crush_device_class": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.encrypted": "0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_id": "1",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.vdo": "0"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             },
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "vg_name": "ceph_vg1"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         }
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     ],
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     "2": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "devices": [
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "/dev/loop5"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             ],
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_name": "ceph_lv2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_size": "21470642176",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "name": "ceph_lv2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "tags": {
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.crush_device_class": "",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.encrypted": "0",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osd_id": "2",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:                 "ceph.vdo": "0"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             },
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "type": "block",
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:             "vg_name": "ceph_vg2"
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:         }
Oct 02 19:10:44 compute-0 gallant_mayer[212093]:     ]
Oct 02 19:10:44 compute-0 gallant_mayer[212093]: }
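Note: the JSON block gallant_mayer printed is the output of "ceph-volume lvm list --format json", keyed by OSD id. Assuming a cephadm binary on PATH (the log itself invokes a copied cephadm script), it reduces to a per-OSD device table like so:

    # Flatten the listing above into "osd_id  lv_path  backing_device" rows;
    # the fsid and the expected output are taken from this log.
    $ sudo cephadm ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -- lvm list --format json \
        | jq -r 'to_entries[] | "\(.key)\t\(.value[0].lv_path)\t\(.value[0].devices[0])"'
    0    /dev/ceph_vg0/ceph_lv0    /dev/loop3
    1    /dev/ceph_vg1/ceph_lv1    /dev/loop4
    2    /dev/ceph_vg2/ceph_lv2    /dev/loop5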
Oct 02 19:10:44 compute-0 systemd[1]: libpod-4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082.scope: Deactivated successfully.
Oct 02 19:10:44 compute-0 podman[212076]: 2025-10-02 19:10:44.471556351 +0000 UTC m=+1.063992000 container died 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b37fbb5749f588b55dbe453a73ccb6f2a08b37fbae532d6ee003ee5d6c407411-merged.mount: Deactivated successfully.
Oct 02 19:10:44 compute-0 podman[212076]: 2025-10-02 19:10:44.558362025 +0000 UTC m=+1.150797654 container remove 4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:10:44 compute-0 systemd[1]: libpod-conmon-4573cd25a5dfab193b8fa63ffc5af1de9c65ab472121f07fc97fa05b5ee72082.scope: Deactivated successfully.
Oct 02 19:10:44 compute-0 sudo[211975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:44 compute-0 sudo[212113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:44 compute-0 sudo[212113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:44 compute-0 sudo[212113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:44 compute-0 sudo[212138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:44 compute-0 sudo[212138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:44 compute-0 sudo[212138]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:44 compute-0 sudo[212164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:44 compute-0 sudo[212164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:44 compute-0 sudo[212164]: pam_unix(sudo:session): session closed for user root
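Note: the repeated sudo true / which python3 pairs are consistent with cephadm's host connection check -- confirm root escalation works and locate a python3 interpreter -- before it runs anything substantial. By hand:

    # The same probe the ceph-admin sessions above performed.
    $ sudo true && sudo which python3
    /usr/bin/python3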
Oct 02 19:10:44 compute-0 podman[212162]: 2025-10-02 19:10:44.923333971 +0000 UTC m=+0.105809929 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
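Note: health_status entries like this one come from podman's timer-driven healthchecks executing the configured test ('/openstack/healthcheck node_exporter' per the config_data above). A check can also be forced outside the timer:

    # Run the container's configured healthcheck once; exit 0 corresponds to
    # the health_status=healthy seen in the log.
    $ sudo podman healthcheck run node_exporter && echo healthy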
Oct 02 19:10:44 compute-0 sudo[212209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:10:44 compute-0 sudo[212209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:45 compute-0 ceph-mon[191910]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.446810214 +0000 UTC m=+0.072621908 container create 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:10:45 compute-0 systemd[1]: Started libpod-conmon-60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5.scope.
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.413589553 +0000 UTC m=+0.039401297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.5449869 +0000 UTC m=+0.170798634 container init 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.553913947 +0000 UTC m=+0.179725601 container start 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:10:45 compute-0 crazy_williams[212290]: 167 167
Oct 02 19:10:45 compute-0 systemd[1]: libpod-60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5.scope: Deactivated successfully.
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.560471731 +0000 UTC m=+0.186283485 container attach 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.560872631 +0000 UTC m=+0.186684285 container died 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:10:45 compute-0 podman[212287]: 2025-10-02 19:10:45.591682159 +0000 UTC m=+0.093607765 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-733bffb8bb6dbfe4ab1aaa44ae057923f8f2b0f9e53ee33c1f2229a1d67bc59c-merged.mount: Deactivated successfully.
Oct 02 19:10:45 compute-0 podman[212274]: 2025-10-02 19:10:45.607938981 +0000 UTC m=+0.233750635 container remove 60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_williams, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:45 compute-0 systemd[1]: libpod-conmon-60efae8f598bbd6a2adaa1701e4529f00fe02028850db685f9a4c35e0aa11cc5.scope: Deactivated successfully.
Oct 02 19:10:45 compute-0 podman[212331]: 2025-10-02 19:10:45.791759629 +0000 UTC m=+0.057749183 container create a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:10:45 compute-0 systemd[1]: Started libpod-conmon-a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c.scope.
Oct 02 19:10:45 compute-0 podman[212331]: 2025-10-02 19:10:45.762650097 +0000 UTC m=+0.028639691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3accff87962f7012e9ef67debbd61844a358be5526700472aa2069fd177156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3accff87962f7012e9ef67debbd61844a358be5526700472aa2069fd177156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3accff87962f7012e9ef67debbd61844a358be5526700472aa2069fd177156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3accff87962f7012e9ef67debbd61844a358be5526700472aa2069fd177156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:45 compute-0 podman[212331]: 2025-10-02 19:10:45.910654885 +0000 UTC m=+0.176644489 container init a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:45 compute-0 podman[212331]: 2025-10-02 19:10:45.92329095 +0000 UTC m=+0.189280514 container start a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:10:45 compute-0 podman[212331]: 2025-10-02 19:10:45.930327507 +0000 UTC m=+0.196317151 container attach a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:46 compute-0 sudo[212376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbdkpvjtxfavhiohqjwadsjixwyngsh ; /usr/bin/python3'
Oct 02 19:10:46 compute-0 sudo[212376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:46 compute-0 python3[212378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
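Note: the Ansible _raw_params above, re-wrapped for readability (flags, volumes and pipeline identical to the logged command):

    $ podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds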
Oct 02 19:10:46 compute-0 podman[212380]: 2025-10-02 19:10:46.569497259 +0000 UTC m=+0.067989555 container create c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:10:46 compute-0 systemd[1]: Started libpod-conmon-c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d.scope.
Oct 02 19:10:46 compute-0 podman[212380]: 2025-10-02 19:10:46.547322461 +0000 UTC m=+0.045814777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e612a51c288aa81d8ab8e6c50a70fac9aabad5a7f50a9ee3050dafc3de8935/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e612a51c288aa81d8ab8e6c50a70fac9aabad5a7f50a9ee3050dafc3de8935/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e612a51c288aa81d8ab8e6c50a70fac9aabad5a7f50a9ee3050dafc3de8935/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:46 compute-0 podman[212380]: 2025-10-02 19:10:46.669946795 +0000 UTC m=+0.168439131 container init c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:10:46 compute-0 podman[212380]: 2025-10-02 19:10:46.68518065 +0000 UTC m=+0.183672946 container start c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:46 compute-0 podman[212380]: 2025-10-02 19:10:46.690697896 +0000 UTC m=+0.189190292 container attach c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]: {
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_id": 1,
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "type": "bluestore"
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     },
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_id": 2,
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "type": "bluestore"
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     },
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_id": 0,
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:         "type": "bluestore"
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]:     }
Oct 02 19:10:46 compute-0 exciting_hypatia[212348]: }
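Note: exciting_hypatia's block above is "ceph-volume raw list --format json", keyed by osd_uuid rather than OSD id. Re-running the copied cephadm script from the sudo entry at 19:10:44, the output flattens to one row per OSD:

    # One line per OSD, ordered by id (expected output from the JSON above).
    $ sudo /usr/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json \
        | jq -r 'to_entries[] | [.value.osd_id, .value.device, .key] | @tsv' | sort -n
    0    /dev/mapper/ceph_vg0-ceph_lv0    dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48
    1    /dev/mapper/ceph_vg1-ceph_lv1    82844b2c-c78f-4ec2-a159-b058e47d1cbd
    2    /dev/mapper/ceph_vg2-ceph_lv2    afe0acfe-daf6-4901-80df-bc50bc9ae508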
Oct 02 19:10:47 compute-0 systemd[1]: libpod-a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c.scope: Deactivated successfully.
Oct 02 19:10:47 compute-0 systemd[1]: libpod-a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c.scope: Consumed 1.064s CPU time.
Oct 02 19:10:47 compute-0 podman[212447]: 2025-10-02 19:10:47.072353045 +0000 UTC m=+0.036006366 container died a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f3accff87962f7012e9ef67debbd61844a358be5526700472aa2069fd177156-merged.mount: Deactivated successfully.
Oct 02 19:10:47 compute-0 podman[212447]: 2025-10-02 19:10:47.150651223 +0000 UTC m=+0.114304534 container remove a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:10:47 compute-0 systemd[1]: libpod-conmon-a862f3326f98b932b9fbc62ac92c96e911ab1f13c3b35260daf34901bc0a0c8c.scope: Deactivated successfully.
Oct 02 19:10:47 compute-0 sudo[212209]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
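Note: the two config-key set audits above show cephadm persisting this host's device inventory and host record in the mon's key/value store. The cached blobs can be read back (key names taken verbatim from the audit entries; the values are assumed to be JSON):

    # Dump what cephadm just cached for this host.
    $ ceph config-key get mgr/cephadm/host.compute-0.devices.0 | jq . | head
    $ ceph config-key get mgr/cephadm/host.compute-0 | jq . | head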
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768428022' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:10:47 compute-0 elegant_euler[212396]: 
Oct 02 19:10:47 compute-0 elegant_euler[212396]: {"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":153,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1759432236,"num_in_osds":3,"osd_in_since":1759432202,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502738944,"bytes_avail":63909187584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T19:10:05.401655+0000","services":{}},"progress_events":{}}
Oct 02 19:10:47 compute-0 sudo[212461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:47 compute-0 sudo[212461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:47 compute-0 sudo[212461]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 systemd[1]: libpod-c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d.scope: Deactivated successfully.
Oct 02 19:10:47 compute-0 podman[212380]: 2025-10-02 19:10:47.347005295 +0000 UTC m=+0.845497611 container died c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e612a51c288aa81d8ab8e6c50a70fac9aabad5a7f50a9ee3050dafc3de8935-merged.mount: Deactivated successfully.
Oct 02 19:10:47 compute-0 podman[212380]: 2025-10-02 19:10:47.405699502 +0000 UTC m=+0.904191798 container remove c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d (image=quay.io/ceph/ceph:v18, name=elegant_euler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:47 compute-0 systemd[1]: libpod-conmon-c717e8a3b1c776f4c0410b30d5b3edd2d8dcc48c0879e3a967d50a4082ce627d.scope: Deactivated successfully.
Oct 02 19:10:47 compute-0 sudo[212376]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 sudo[212488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:10:47 compute-0 sudo[212488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:47 compute-0 sudo[212488]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:47 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 19:10:47 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 19:10:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:47 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 19:10:47 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
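Note: reconfiguring the mon amounts to three reads, all visible in the audit entries above: the mon keyring, the public network setting, and a freshly generated minimal conf. The same commands work from any admin shell:

    # The queries cephadm issued while reconfiguring mon.compute-0 (verbatim
    # from the audit log above).
    $ ceph auth get mon.
    $ ceph config get mon public_network
    $ ceph config generate-minimal-conf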
Oct 02 19:10:47 compute-0 sudo[212523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:47 compute-0 sudo[212523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:47 compute-0 sudo[212523]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 sudo[212548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:47 compute-0 sudo[212548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:47 compute-0 sudo[212548]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 sudo[212573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:47 compute-0 sudo[212619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwiewewxtaowqgacleelgxawkbljrhvt ; /usr/bin/python3'
Oct 02 19:10:47 compute-0 sudo[212573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:47 compute-0 sudo[212619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:47 compute-0 sudo[212573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:47 compute-0 sudo[212624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:10:47 compute-0 sudo[212624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
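Note: _orch deploy is cephadm's internal per-daemon deployment entry point; the daemon spec is apparently fed to it on stdin rather than the command line, which is why none appears in the sudo record. Once the pass completes, placement can be confirmed from the orchestrator:

    # List the daemons the orchestrator manages on this host.
    $ ceph orch ps compute-0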
Oct 02 19:10:47 compute-0 python3[212623]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
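Note: same wrapper pattern as the status call earlier, now creating the first pool; re-wrapped from the _raw_params above:

    $ podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on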
Oct 02 19:10:48 compute-0 podman[212649]: 2025-10-02 19:10:48.039692289 +0000 UTC m=+0.080714953 container create 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:10:48 compute-0 systemd[1]: Started libpod-conmon-155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352.scope.
Oct 02 19:10:48 compute-0 podman[212649]: 2025-10-02 19:10:48.008286525 +0000 UTC m=+0.049309199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45189639d0ea3078f34891aa8ce0d69282a0471b37e39385ce4cb55fa9a25cae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45189639d0ea3078f34891aa8ce0d69282a0471b37e39385ce4cb55fa9a25cae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:48 compute-0 podman[212649]: 2025-10-02 19:10:48.142153218 +0000 UTC m=+0.183175892 container init 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:48 compute-0 podman[212649]: 2025-10-02 19:10:48.15918747 +0000 UTC m=+0.200210154 container start 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:10:48 compute-0 podman[212649]: 2025-10-02 19:10:48.16596462 +0000 UTC m=+0.206987294 container attach 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.222955663 +0000 UTC m=+0.062982663 container create 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:48 compute-0 systemd[1]: Started libpod-conmon-083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160.scope.
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/768428022' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 19:10:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.197069805 +0000 UTC m=+0.037096825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.319108134 +0000 UTC m=+0.159135134 container init 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.331970836 +0000 UTC m=+0.171997836 container start 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.337354369 +0000 UTC m=+0.177381419 container attach 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:48 compute-0 jolly_ptolemy[212699]: 167 167
Oct 02 19:10:48 compute-0 systemd[1]: libpod-083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160.scope: Deactivated successfully.
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.34455584 +0000 UTC m=+0.184582870 container died 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d58e15f942f502cc6280e74c20150c41e7deef5c23448277ae43a8b8b805ff80-merged.mount: Deactivated successfully.
Oct 02 19:10:48 compute-0 podman[212682]: 2025-10-02 19:10:48.414688481 +0000 UTC m=+0.254715491 container remove 083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:10:48 compute-0 systemd[1]: libpod-conmon-083a165c69ee1ab259341cdca9d6d66bdd97a50d03af65f6c0c69c1b21a29160.scope: Deactivated successfully.
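Note: jolly_ptolemy is a throwaway container cephadm starts just long enough to learn which uid/gid the image runs Ceph as; the "167 167" it printed is the ceph user and group inside the image, later used to chown daemon files on the host. A rough manual equivalent (the stat target is an assumption based on cephadm's usual probe):

  podman run --rm --entrypoint stat \
    quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
    -c '%u %g' /var/lib/ceph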
Oct 02 19:10:48 compute-0 sudo[212624]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:48 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.uktbkz (unknown last config time)...
Oct 02 19:10:48 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.uktbkz (unknown last config time)...
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.uktbkz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uktbkz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
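Note: the caps array alternates entity types with capability strings; the mgr gets "profile mgr" on the mons plus blanket access to OSDs and MDSes. The standard CLI spelling of the same request is:

  ceph auth get-or-create mgr.compute-0.uktbkz \
    mon 'profile mgr' osd 'allow *' mds 'allow *'

get-or-create is idempotent: if the entity already exists with matching caps, it simply returns the existing key.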
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:48 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.uktbkz on compute-0
Oct 02 19:10:48 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.uktbkz on compute-0
Oct 02 19:10:48 compute-0 podman[212714]: 2025-10-02 19:10:48.545420521 +0000 UTC m=+0.112219250 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git, io.buildah.version=1.29.0, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
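Note: this single long event is podman's periodic health probe for the unrelated kepler monitoring container (health_status=healthy, failing streak 0); the config_data blob is the edpm_ansible-rendered container definition, including the healthcheck test command. The same probe can be triggered on demand:

  podman healthcheck run kepler && echo healthy   # exit status 0 means the test command passed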
Oct 02 19:10:48 compute-0 sudo[212737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:48 compute-0 sudo[212737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:48 compute-0 sudo[212737]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:48 compute-0 sudo[212781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:48 compute-0 sudo[212781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:48 compute-0 sudo[212781]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1409136668' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:48 compute-0 sudo[212806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:48 compute-0 sudo[212806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:48 compute-0 sudo[212806]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:48 compute-0 sudo[212834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:10:48 compute-0 sudo[212834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
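Note: _set_new_cache_sizes is the mon's cache autotuner splitting its memory budget between incremental/full osdmap caches and the KV (RocksDB) cache; the byte counts derive from mon_memory_target. To inspect or adjust (the 2 GiB value is only an example):

  ceph config get mon mon_memory_target
  ceph config set mon mon_memory_target 2147483648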
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.246545669 +0000 UTC m=+0.072745462 container create 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:49 compute-0 systemd[1]: Started libpod-conmon-7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e.scope.
Oct 02 19:10:49 compute-0 ceph-mon[191910]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:49 compute-0 ceph-mon[191910]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 02 19:10:49 compute-0 ceph-mon[191910]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uktbkz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1409136668' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.212568397 +0000 UTC m=+0.038768240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.361936331 +0000 UTC m=+0.188136114 container init 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.376554949 +0000 UTC m=+0.202754712 container start 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.381209053 +0000 UTC m=+0.207408816 container attach 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:49 compute-0 nostalgic_goldberg[212890]: 167 167
Oct 02 19:10:49 compute-0 systemd[1]: libpod-7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e.scope: Deactivated successfully.
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.383197945 +0000 UTC m=+0.209397698 container died 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f2a9d7debe6fe8dc118a5386a2190fee2107e9b54f4178f2ca50b157068d58-merged.mount: Deactivated successfully.
Oct 02 19:10:49 compute-0 podman[212874]: 2025-10-02 19:10:49.441872043 +0000 UTC m=+0.268071816 container remove 7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:10:49 compute-0 systemd[1]: libpod-conmon-7100a50099d8a80b54b9a5485dccae45a5810d56959ebba53426ca8427d63a2e.scope: Deactivated successfully.
Oct 02 19:10:49 compute-0 sudo[212834]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 02 19:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1409136668' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct 02 19:10:49 compute-0 flamboyant_sutherland[212677]: pool 'vms' created
Oct 02 19:10:49 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
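Note: the 'vms' pool is now committed: the osdmap steps from e16 to e17, and the pool's initial PG will show as unknown in the pgmap until peering completes a few lines later. To verify from any admin node:

  ceph osd pool ls detail        # pool id, rule, autoscale mode
  ceph pg ls-by-pool vms         # PG states for the new pool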
Oct 02 19:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:49 compute-0 systemd[1]: libpod-155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352.scope: Deactivated successfully.
Oct 02 19:10:49 compute-0 podman[212649]: 2025-10-02 19:10:49.555086657 +0000 UTC m=+1.596109321 container died 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-45189639d0ea3078f34891aa8ce0d69282a0471b37e39385ce4cb55fa9a25cae-merged.mount: Deactivated successfully.
Oct 02 19:10:49 compute-0 podman[212649]: 2025-10-02 19:10:49.615966393 +0000 UTC m=+1.656989047 container remove 155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352 (image=quay.io/ceph/ceph:v18, name=flamboyant_sutherland, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:49 compute-0 systemd[1]: libpod-conmon-155b24459e8ba444c67f209219b5974a361a2ec8b02d491f80184ffd22d01352.scope: Deactivated successfully.
Oct 02 19:10:49 compute-0 sudo[212910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:49 compute-0 sudo[212619]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:49 compute-0 sudo[212910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:49 compute-0 sudo[212910]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:49 compute-0 sudo[212947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:49 compute-0 sudo[212947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:49 compute-0 sudo[212947]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:49 compute-0 sudo[212972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:49 compute-0 sudo[212972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:49 compute-0 sudo[212972]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:49 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
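Note: osd.2 is peering pg 2.0, the first PG of the newly created pool (pool id 2, created at e17): with an acting set of [2] it elects itself Primary, and the later AllReplicasActivated line marks the PG going active. Individual PGs can be traced through these transitions with:

  ceph pg 2.0 query              # full peering state for one PG
  ceph pg dump pgs_brief         # one-line state per PG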
Oct 02 19:10:49 compute-0 sudo[213026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daottlgoknqgrjdwtncuxgzschasmrzi ; /usr/bin/python3'
Oct 02 19:10:49 compute-0 sudo[213026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:49 compute-0 sudo[213014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:10:49 compute-0 sudo[213014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:50 compute-0 python3[213040]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:50 compute-0 podman[213048]: 2025-10-02 19:10:50.093950098 +0000 UTC m=+0.067391070 container create 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:50 compute-0 systemd[1]: Started libpod-conmon-86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629.scope.
Oct 02 19:10:50 compute-0 podman[213048]: 2025-10-02 19:10:50.064496346 +0000 UTC m=+0.037937408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2d182a9382a5fd5eba1abacdd16737a25da71623d960935224d04d397fcbff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2d182a9382a5fd5eba1abacdd16737a25da71623d960935224d04d397fcbff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:50 compute-0 podman[213048]: 2025-10-02 19:10:50.206139576 +0000 UTC m=+0.179580588 container init 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:10:50 compute-0 podman[213048]: 2025-10-02 19:10:50.215928425 +0000 UTC m=+0.189369407 container start 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:10:50 compute-0 podman[213048]: 2025-10-02 19:10:50.221151584 +0000 UTC m=+0.194592566 container attach 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:10:50 compute-0 ceph-mon[191910]: Reconfiguring mgr.compute-0.uktbkz (unknown last config time)...
Oct 02 19:10:50 compute-0 ceph-mon[191910]: Reconfiguring daemon mgr.compute-0.uktbkz on compute-0
Oct 02 19:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:50 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1409136668' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:50 compute-0 ceph-mon[191910]: osdmap e17: 3 total, 3 up, 3 in
Oct 02 19:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 02 19:10:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 02 19:10:50 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 02 19:10:50 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:50 compute-0 podman[213138]: 2025-10-02 19:10:50.572981362 +0000 UTC m=+0.069182328 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:50 compute-0 podman[213138]: 2025-10-02 19:10:50.656272972 +0000 UTC m=+0.152473948 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:10:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/108210096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:51 compute-0 sudo[213014]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:51 compute-0 ceph-mon[191910]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:51 compute-0 ceph-mon[191910]: osdmap e18: 3 total, 3 up, 3 in
Oct 02 19:10:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/108210096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 10ce742d-1b52-46a1-93ce-c27f8178c99b does not exist
Oct 02 19:10:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 27432b5a-df45-463d-a4bb-55a7e882eda9 does not exist
Oct 02 19:10:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e9cd4797-8000-4871-b390-272d5452335d does not exist
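Note: these three warnings are benign mgr progress-module noise: cephadm marked events complete that the module had never registered (or had already expired), so it logs "does not exist" instead of failing. On recent releases, lingering progress events can be listed and flushed with:

  ceph progress                  # show active progress events
  ceph progress clear            # drop them all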
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:51 compute-0 sudo[213279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:51 compute-0 sudo[213279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:51 compute-0 sudo[213279]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/108210096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 02 19:10:51 compute-0 elated_satoshi[213081]: pool 'volumes' created
Oct 02 19:10:51 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 02 19:10:51 compute-0 systemd[1]: libpod-86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629.scope: Deactivated successfully.
Oct 02 19:10:51 compute-0 podman[213048]: 2025-10-02 19:10:51.590000093 +0000 UTC m=+1.563441065 container died 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2d182a9382a5fd5eba1abacdd16737a25da71623d960935224d04d397fcbff-merged.mount: Deactivated successfully.
Oct 02 19:10:51 compute-0 sudo[213304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:51 compute-0 sudo[213304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:51 compute-0 sudo[213304]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:51 compute-0 podman[213048]: 2025-10-02 19:10:51.648120226 +0000 UTC m=+1.621561198 container remove 86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629 (image=quay.io/ceph/ceph:v18, name=elated_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:51 compute-0 systemd[1]: libpod-conmon-86d37ae4e2044984363d30a1f821c96c7737817be7b960cbb46b68d4de708629.scope: Deactivated successfully.
Oct 02 19:10:51 compute-0 sudo[213026]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:51 compute-0 sudo[213343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:51 compute-0 sudo[213343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:51 compute-0 sudo[213343]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:51 compute-0 sudo[213368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:10:51 compute-0 sudo[213368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
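Note: this is the actual OSD creation step. cephadm pipes a config JSON into its bundled script (--config-json -) and runs ceph-volume in the OSD container against three pre-created logical volumes; --no-auto stops batch from regrouping the devices, --no-systemd is used because cephadm generates its own units afterwards, and CEPH_VOLUME_OSDSPEC_AFFINITY ties the resulting OSDs back to the 'default_drive_group' spec. The LV layout those paths imply would have been prepared earlier along these lines (the /dev/vdb backing device is a placeholder, not from this log):

  vgcreate ceph_vg0 /dev/vdb                 # one VG per backing disk
  lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0  # one LV consuming the whole VG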
Oct 02 19:10:51 compute-0 sudo[213415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjtyzespgnbnvtqfjgrfrtmhflbhiyp ; /usr/bin/python3'
Oct 02 19:10:51 compute-0 sudo[213415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:51 compute-0 python3[213418]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:51 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:52 compute-0 podman[213426]: 2025-10-02 19:10:52.053577097 +0000 UTC m=+0.074315524 container create 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:52 compute-0 systemd[1]: Started libpod-conmon-0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074.scope.
Oct 02 19:10:52 compute-0 podman[213426]: 2025-10-02 19:10:52.020835358 +0000 UTC m=+0.041573835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081760d2c7d19302e2286208660c086ea374b61e942ce81631bc31dec04d1a57/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081760d2c7d19302e2286208660c086ea374b61e942ce81631bc31dec04d1a57/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 podman[213426]: 2025-10-02 19:10:52.154516336 +0000 UTC m=+0.175254773 container init 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:10:52 compute-0 podman[213426]: 2025-10-02 19:10:52.16333195 +0000 UTC m=+0.184070377 container start 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:10:52 compute-0 podman[213426]: 2025-10-02 19:10:52.175319658 +0000 UTC m=+0.196058095 container attach 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.275647941 +0000 UTC m=+0.057950359 container create 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:10:52 compute-0 systemd[1]: Started libpod-conmon-083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94.scope.
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.24774212 +0000 UTC m=+0.030044618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:10:52 compute-0 ceph-mon[191910]: pgmap v63: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:52 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/108210096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:52 compute-0 ceph-mon[191910]: osdmap e19: 3 total, 3 up, 3 in
Oct 02 19:10:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:52 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
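[editor's note] The POOL_APP_NOT_ENABLED warning above fires because the freshly created 'volumes' pool carries no application tag yet. A minimal sketch of how such a tag could be set from an admin client, assuming (as these Cinder/Glance-style pool names suggest) the intended application is rbd; the pool name is taken from the preceding audit line, and the choice of "rbd" is an assumption, not confirmed by this log:

#!/usr/bin/env python3
# Minimal sketch: tag a pool with an application so POOL_APP_NOT_ENABLED
# clears. Assumes the `ceph` CLI and an admin keyring are reachable from
# this host; "rbd" is an assumption based on the pool names in this log.
import subprocess

def enable_app(pool: str, app: str = "rbd") -> None:
    # `ceph osd pool application enable <pool> <app>` is the stock command
    # for clearing this health warning.
    subprocess.run(
        ["ceph", "osd", "pool", "application", "enable", pool, app],
        check=True,
    )

enable_app("volumes")  # the 'backups' and 'images' pools created later in
                       # this log would need the same treatment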
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.38791929 +0000 UTC m=+0.170221738 container init 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.395098921 +0000 UTC m=+0.177401339 container start 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.399668802 +0000 UTC m=+0.181971240 container attach 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:52 compute-0 priceless_bose[213492]: 167 167
Oct 02 19:10:52 compute-0 systemd[1]: libpod-083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94.scope: Deactivated successfully.
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.403809582 +0000 UTC m=+0.186112010 container died 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc97ffde99f91c16b41df7863dfead9e566e0c487a5e949a9ebe8d3b70c082e3-merged.mount: Deactivated successfully.
Oct 02 19:10:52 compute-0 podman[213476]: 2025-10-02 19:10:52.453369037 +0000 UTC m=+0.235671465 container remove 083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:52 compute-0 systemd[1]: libpod-conmon-083bfe6d5e40e926e5c257530900b4953824672168990c6886428611e6c8ac94.scope: Deactivated successfully.
Oct 02 19:10:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 02 19:10:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 02 19:10:52 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 02 19:10:52 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:52 compute-0 podman[213533]: 2025-10-02 19:10:52.695561085 +0000 UTC m=+0.075072923 container create 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:10:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1141127019' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:52 compute-0 systemd[1]: Started libpod-conmon-1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25.scope.
Oct 02 19:10:52 compute-0 podman[213533]: 2025-10-02 19:10:52.666760081 +0000 UTC m=+0.046271979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:52 compute-0 podman[213533]: 2025-10-02 19:10:52.832487939 +0000 UTC m=+0.211999867 container init 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:10:52 compute-0 podman[213533]: 2025-10-02 19:10:52.857892153 +0000 UTC m=+0.237403971 container start 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:52 compute-0 podman[213533]: 2025-10-02 19:10:52.863083131 +0000 UTC m=+0.242594929 container attach 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:10:53 compute-0 ceph-mon[191910]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:10:53 compute-0 ceph-mon[191910]: osdmap e20: 3 total, 3 up, 3 in
Oct 02 19:10:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1141127019' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v66: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 02 19:10:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1141127019' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 02 19:10:53 compute-0 funny_villani[213461]: pool 'backups' created
Oct 02 19:10:53 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 02 19:10:53 compute-0 systemd[1]: libpod-0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074.scope: Deactivated successfully.
Oct 02 19:10:53 compute-0 podman[213426]: 2025-10-02 19:10:53.616720542 +0000 UTC m=+1.637459039 container died 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-081760d2c7d19302e2286208660c086ea374b61e942ce81631bc31dec04d1a57-merged.mount: Deactivated successfully.
Oct 02 19:10:53 compute-0 podman[213426]: 2025-10-02 19:10:53.685916408 +0000 UTC m=+1.706654845 container remove 0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074 (image=quay.io/ceph/ceph:v18, name=funny_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:10:53 compute-0 systemd[1]: libpod-conmon-0535a721c4ca17594f3a0e37ba4e620e252a2538ddd312fc31056ab78b78f074.scope: Deactivated successfully.
Oct 02 19:10:53 compute-0 sudo[213415]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:53 compute-0 sudo[213610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phfoutxsvuqoldjxqtkmgvaxmfgcrhyv ; /usr/bin/python3'
Oct 02 19:10:53 compute-0 sudo[213610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:53 compute-0 inspiring_heyrovsky[213552]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:10:53 compute-0 inspiring_heyrovsky[213552]: --> relative data size: 1.0
Oct 02 19:10:53 compute-0 inspiring_heyrovsky[213552]: --> All data devices are unavailable
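[editor's note] The three inspiring_heyrovsky lines above are ceph-volume's device-selection report: the drive group resolved to 0 physical and 3 LVM data devices, all of which are already consumed by running OSDs, so no new OSDs are prepared. A minimal sketch of how one might confirm that from `ceph-volume inventory --format json` output, assuming its standard fields ("path", "available", "rejected_reasons"); the capture file name is hypothetical:

#!/usr/bin/env python3
# Minimal sketch: list devices ceph-volume considers unavailable, using
# `ceph-volume inventory --format json` output captured to a file. The
# file name is an assumption; the field names follow ceph-volume's
# inventory schema.
import json

with open("inventory.json") as fh:  # hypothetical capture of the output
    devices = json.load(fh)

for dev in devices:
    if not dev.get("available", False):
        reasons = ", ".join(dev.get("rejected_reasons", [])) or "unspecified"
        print(f"{dev['path']}: unavailable ({reasons})")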
Oct 02 19:10:54 compute-0 systemd[1]: libpod-1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25.scope: Deactivated successfully.
Oct 02 19:10:54 compute-0 systemd[1]: libpod-1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25.scope: Consumed 1.112s CPU time.
Oct 02 19:10:54 compute-0 podman[213533]: 2025-10-02 19:10:54.014110749 +0000 UTC m=+1.393622537 container died 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:10:54 compute-0 python3[213614]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f047daa9b497afa8e5f4b65dfc3c183c41ae3fa96ac0b56bde428b5225fbc5b5-merged.mount: Deactivated successfully.
Oct 02 19:10:54 compute-0 podman[213533]: 2025-10-02 19:10:54.109737096 +0000 UTC m=+1.489248904 container remove 1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:54 compute-0 systemd[1]: libpod-conmon-1617325e660475d510b7bc4cf855ef146d562ff038792f0cfad7da5b25f91d25.scope: Deactivated successfully.
Oct 02 19:10:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:54 compute-0 sudo[213368]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:54 compute-0 podman[213620]: 2025-10-02 19:10:54.095891579 +0000 UTC m=+0.056609583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:54 compute-0 podman[213620]: 2025-10-02 19:10:54.199887959 +0000 UTC m=+0.160605973 container create 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:54 compute-0 sudo[213642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:54 compute-0 sudo[213642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:54 compute-0 sudo[213642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:54 compute-0 systemd[1]: Started libpod-conmon-28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d.scope.
Oct 02 19:10:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a964d4f3e18005926d23cd20660420683e93ee4e3603fdb5a450189e1b9b4cae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a964d4f3e18005926d23cd20660420683e93ee4e3603fdb5a450189e1b9b4cae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:54 compute-0 sudo[213667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:54 compute-0 sudo[213667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:54 compute-0 sudo[213667]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:54 compute-0 podman[213620]: 2025-10-02 19:10:54.401077059 +0000 UTC m=+0.361795073 container init 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:10:54 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:54 compute-0 podman[213620]: 2025-10-02 19:10:54.419029915 +0000 UTC m=+0.379747919 container start 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:54 compute-0 podman[213620]: 2025-10-02 19:10:54.424099509 +0000 UTC m=+0.384817513 container attach 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:54 compute-0 sudo[213697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:54 compute-0 sudo[213697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:54 compute-0 sudo[213697]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:54 compute-0 sudo[213723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:10:54 compute-0 sudo[213723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:54 compute-0 ceph-mon[191910]: pgmap v66: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1141127019' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:54 compute-0 ceph-mon[191910]: osdmap e21: 3 total, 3 up, 3 in
Oct 02 19:10:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/74757624' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.051729847 +0000 UTC m=+0.065132190 container create fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:10:55 compute-0 systemd[1]: Started libpod-conmon-fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632.scope.
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.027853983 +0000 UTC m=+0.041256356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.147279683 +0000 UTC m=+0.160682036 container init fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.156992851 +0000 UTC m=+0.170395194 container start fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.160829062 +0000 UTC m=+0.174231415 container attach fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:55 compute-0 xenodochial_torvalds[213825]: 167 167
Oct 02 19:10:55 compute-0 systemd[1]: libpod-fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632.scope: Deactivated successfully.
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.164790358 +0000 UTC m=+0.178192681 container died fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e3d704794a9d9e9cb1cf1e8ef476aaeeabbd9e4bd3d5ce3cd55c6268d35c6a-merged.mount: Deactivated successfully.
Oct 02 19:10:55 compute-0 podman[213806]: 2025-10-02 19:10:55.216988993 +0000 UTC m=+0.230391316 container remove fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:10:55 compute-0 systemd[1]: libpod-conmon-fbe8a98491017dc73a1fe7ed3ac76dc1fad2a4d7a3bd5668d9fc9a2f040db632.scope: Deactivated successfully.
Oct 02 19:10:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 02 19:10:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/74757624' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 02 19:10:55 compute-0 magical_wu[213690]: pool 'images' created
Oct 02 19:10:55 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 02 19:10:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:55 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:55 compute-0 podman[213849]: 2025-10-02 19:10:55.443641428 +0000 UTC m=+0.080853907 container create c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:55 compute-0 systemd[1]: libpod-28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d.scope: Deactivated successfully.
Oct 02 19:10:55 compute-0 podman[213620]: 2025-10-02 19:10:55.448697833 +0000 UTC m=+1.409415827 container died 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:10:55 compute-0 systemd[1]: Started libpod-conmon-c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed.scope.
Oct 02 19:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a964d4f3e18005926d23cd20660420683e93ee4e3603fdb5a450189e1b9b4cae-merged.mount: Deactivated successfully.
Oct 02 19:10:55 compute-0 podman[213849]: 2025-10-02 19:10:55.419132218 +0000 UTC m=+0.056344757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:55 compute-0 podman[213620]: 2025-10-02 19:10:55.534815488 +0000 UTC m=+1.495533492 container remove 28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d (image=quay.io/ceph/ceph:v18, name=magical_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97079bf73424e42324b6ff2a2d647c6db0e1a40c9250ac17ebc10199b088310f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97079bf73424e42324b6ff2a2d647c6db0e1a40c9250ac17ebc10199b088310f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97079bf73424e42324b6ff2a2d647c6db0e1a40c9250ac17ebc10199b088310f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97079bf73424e42324b6ff2a2d647c6db0e1a40c9250ac17ebc10199b088310f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:55 compute-0 systemd[1]: libpod-conmon-28b7b9b38853ebeb1bc004b63ab3083c16c2f54e17a499ff0c1d22097087423d.scope: Deactivated successfully.
Oct 02 19:10:55 compute-0 sudo[213610]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:55 compute-0 podman[213849]: 2025-10-02 19:10:55.581136738 +0000 UTC m=+0.218349247 container init c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:10:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/74757624' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/74757624' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:55 compute-0 ceph-mon[191910]: osdmap e22: 3 total, 3 up, 3 in
Oct 02 19:10:55 compute-0 podman[213849]: 2025-10-02 19:10:55.591965845 +0000 UTC m=+0.229178334 container start c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:10:55 compute-0 podman[213849]: 2025-10-02 19:10:55.624904159 +0000 UTC m=+0.262116668 container attach c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:10:55 compute-0 sudo[213904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fynaadcbsadtbmvxvpjnaakwkrnbrscj ; /usr/bin/python3'
Oct 02 19:10:55 compute-0 sudo[213904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:55 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:55 compute-0 python3[213906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:56 compute-0 podman[213907]: 2025-10-02 19:10:56.045660936 +0000 UTC m=+0.072628559 container create bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:10:56 compute-0 podman[213907]: 2025-10-02 19:10:56.015980838 +0000 UTC m=+0.042948461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:56 compute-0 systemd[1]: Started libpod-conmon-bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7.scope.
Oct 02 19:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825f3fe381577f834ea7cc7d62b2218abd54014917066a5f506ac6ceb4b9a59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825f3fe381577f834ea7cc7d62b2218abd54014917066a5f506ac6ceb4b9a59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:56 compute-0 podman[213907]: 2025-10-02 19:10:56.346727466 +0000 UTC m=+0.373695069 container init bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:10:56 compute-0 podman[213907]: 2025-10-02 19:10:56.358248512 +0000 UTC m=+0.385216095 container start bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:10:56 compute-0 podman[213907]: 2025-10-02 19:10:56.376035584 +0000 UTC m=+0.403003167 container attach bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]: {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     "0": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "devices": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "/dev/loop3"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             ],
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_name": "ceph_lv0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_size": "21470642176",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "name": "ceph_lv0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "tags": {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.crush_device_class": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.encrypted": "0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_id": "0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.vdo": "0"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             },
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "vg_name": "ceph_vg0"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         }
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     ],
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     "1": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "devices": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "/dev/loop4"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             ],
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_name": "ceph_lv1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_size": "21470642176",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "name": "ceph_lv1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "tags": {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.crush_device_class": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.encrypted": "0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_id": "1",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.vdo": "0"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             },
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "vg_name": "ceph_vg1"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         }
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     ],
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     "2": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "devices": [
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "/dev/loop5"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             ],
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_name": "ceph_lv2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_size": "21470642176",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "name": "ceph_lv2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "tags": {
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.cluster_name": "ceph",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.crush_device_class": "",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.encrypted": "0",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osd_id": "2",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:                 "ceph.vdo": "0"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             },
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "type": "block",
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:             "vg_name": "ceph_vg2"
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:         }
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]:     ]
Oct 02 19:10:56 compute-0 musing_zhukovsky[213873]: }
Oct 02 19:10:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 02 19:10:56 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 02 19:10:56 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:56 compute-0 systemd[1]: libpod-c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed.scope: Deactivated successfully.
Oct 02 19:10:56 compute-0 podman[213849]: 2025-10-02 19:10:56.460302781 +0000 UTC m=+1.097515280 container died c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-97079bf73424e42324b6ff2a2d647c6db0e1a40c9250ac17ebc10199b088310f-merged.mount: Deactivated successfully.
Oct 02 19:10:56 compute-0 podman[213849]: 2025-10-02 19:10:56.550195496 +0000 UTC m=+1.187407985 container remove c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:10:56 compute-0 systemd[1]: libpod-conmon-c366bafb2a6df0d746f26c2bd6085e73515ab74c79422dedee3277c8da1921ed.scope: Deactivated successfully.
Oct 02 19:10:56 compute-0 ceph-mon[191910]: pgmap v69: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:56 compute-0 ceph-mon[191910]: osdmap e23: 3 total, 3 up, 3 in
Oct 02 19:10:56 compute-0 sudo[213723]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:56 compute-0 sudo[213943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:56 compute-0 sudo[213943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:56 compute-0 sudo[213943]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:56 compute-0 sudo[213978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:10:56 compute-0 sudo[213978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:56 compute-0 sudo[213978]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:56 compute-0 sudo[214012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:56 compute-0 sudo[214012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:56 compute-0 sudo[214012]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:56 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452820057' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:57 compute-0 sudo[214037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:10:57 compute-0 sudo[214037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 02 19:10:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452820057' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 02 19:10:57 compute-0 gifted_jepsen[213922]: pool 'cephfs.cephfs.meta' created
Oct 02 19:10:57 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 02 19:10:57 compute-0 systemd[1]: libpod-bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7.scope: Deactivated successfully.
Oct 02 19:10:57 compute-0 podman[213907]: 2025-10-02 19:10:57.479801347 +0000 UTC m=+1.506768970 container died bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6825f3fe381577f834ea7cc7d62b2218abd54014917066a5f506ac6ceb4b9a59-merged.mount: Deactivated successfully.
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.561092884 +0000 UTC m=+0.079281155 container create 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:10:57 compute-0 podman[213907]: 2025-10-02 19:10:57.597235814 +0000 UTC m=+1.624203397 container remove bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7 (image=quay.io/ceph/ceph:v18, name=gifted_jepsen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:10:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2452820057' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2452820057' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:57 compute-0 ceph-mon[191910]: osdmap e24: 3 total, 3 up, 3 in
Oct 02 19:10:57 compute-0 systemd[1]: libpod-conmon-bd7fbf9b85b399ffeec8ff3123ee9acc973e86ffcdbdeced889523ae8b0d63a7.scope: Deactivated successfully.
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.536780029 +0000 UTC m=+0.054968320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:57 compute-0 sudo[213904]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:57 compute-0 systemd[1]: Started libpod-conmon-9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc.scope.
Oct 02 19:10:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.689315247 +0000 UTC m=+0.207503568 container init 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.706151554 +0000 UTC m=+0.224339825 container start 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:57 compute-0 cranky_dhawan[214129]: 167 167
Oct 02 19:10:57 compute-0 systemd[1]: libpod-9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc.scope: Deactivated successfully.
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.761040131 +0000 UTC m=+0.279228462 container attach 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.762665934 +0000 UTC m=+0.280854245 container died 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:10:57 compute-0 sudo[214168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkolhzuidcdukkreaujdixadxxrwxjlh ; /usr/bin/python3'
Oct 02 19:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ed856de46467539f3c7e63847f2054f3e3fee2481eaf579033b55ed77f07e23-merged.mount: Deactivated successfully.
Oct 02 19:10:57 compute-0 sudo[214168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:57 compute-0 podman[214102]: 2025-10-02 19:10:57.861188429 +0000 UTC m=+0.379376710 container remove 9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:57 compute-0 systemd[1]: libpod-conmon-9aafe497782d8eed9df92413995479553b35ed87d12a5eab2d2ea7428ade4bdc.scope: Deactivated successfully.
Oct 02 19:10:57 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:10:57 compute-0 python3[214171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:58 compute-0 podman[214174]: 2025-10-02 19:10:58.085259746 +0000 UTC m=+0.082999464 container create 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:10:58 compute-0 podman[214186]: 2025-10-02 19:10:58.128510843 +0000 UTC m=+0.089446074 container create e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:10:58 compute-0 podman[214174]: 2025-10-02 19:10:58.042251074 +0000 UTC m=+0.039990882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:10:58 compute-0 systemd[1]: Started libpod-conmon-8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b.scope.
Oct 02 19:10:58 compute-0 podman[214186]: 2025-10-02 19:10:58.091709487 +0000 UTC m=+0.052644818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:10:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:58 compute-0 systemd[1]: Started libpod-conmon-e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e.scope.
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39924b3eb95e84d7c4bce3fae19d8ba47a377395e9c6dad4ba244e020e015f0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39924b3eb95e84d7c4bce3fae19d8ba47a377395e9c6dad4ba244e020e015f0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 podman[214174]: 2025-10-02 19:10:58.229480833 +0000 UTC m=+0.227220611 container init 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:10:58 compute-0 podman[214174]: 2025-10-02 19:10:58.249146335 +0000 UTC m=+0.246886093 container start 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:10:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddc817c83e5835b616fda225c42661282df8bef7a3ccd76c11cb705e2993b17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddc817c83e5835b616fda225c42661282df8bef7a3ccd76c11cb705e2993b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddc817c83e5835b616fda225c42661282df8bef7a3ccd76c11cb705e2993b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ddc817c83e5835b616fda225c42661282df8bef7a3ccd76c11cb705e2993b17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:58 compute-0 podman[214174]: 2025-10-02 19:10:58.258713549 +0000 UTC m=+0.256453277 container attach 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:58 compute-0 podman[214186]: 2025-10-02 19:10:58.278566636 +0000 UTC m=+0.239501877 container init e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:10:58 compute-0 podman[214186]: 2025-10-02 19:10:58.29530031 +0000 UTC m=+0.256235531 container start e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 19:10:58 compute-0 podman[214186]: 2025-10-02 19:10:58.301508615 +0000 UTC m=+0.262443836 container attach e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:10:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 02 19:10:58 compute-0 ceph-mon[191910]: pgmap v71: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 02 19:10:58 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 02 19:10:58 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:10:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 02 19:10:58 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2882484075' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:59 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:10:59 compute-0 lucid_benz[214212]: {
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_id": 1,
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "type": "bluestore"
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     },
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_id": 2,
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "type": "bluestore"
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     },
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_id": 0,
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:10:59 compute-0 lucid_benz[214212]:         "type": "bluestore"
Oct 02 19:10:59 compute-0 lucid_benz[214212]:     }
Oct 02 19:10:59 compute-0 lucid_benz[214212]: }
Oct 02 19:10:59 compute-0 systemd[1]: libpod-e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e.scope: Deactivated successfully.
Oct 02 19:10:59 compute-0 systemd[1]: libpod-e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e.scope: Consumed 1.106s CPU time.
Oct 02 19:10:59 compute-0 conmon[214212]: conmon e69c8fcf16ab566e2521 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e.scope/container/memory.events
Oct 02 19:10:59 compute-0 podman[214186]: 2025-10-02 19:10:59.403868881 +0000 UTC m=+1.364804102 container died e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ddc817c83e5835b616fda225c42661282df8bef7a3ccd76c11cb705e2993b17-merged.mount: Deactivated successfully.
Oct 02 19:10:59 compute-0 podman[214186]: 2025-10-02 19:10:59.474417383 +0000 UTC m=+1.435352604 container remove e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:10:59 compute-0 systemd[1]: libpod-conmon-e69c8fcf16ab566e252185b421b8441776a1e99857ebbac7e1b458af7e94819e.scope: Deactivated successfully.
Oct 02 19:10:59 compute-0 sudo[214037]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:10:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:10:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 02 19:10:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2882484075' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 02 19:10:59 compute-0 keen_dubinsky[214207]: pool 'cephfs.cephfs.data' created
Oct 02 19:10:59 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 02 19:10:59 compute-0 ceph-mon[191910]: osdmap e25: 3 total, 3 up, 3 in
Oct 02 19:10:59 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2882484075' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 19:10:59 compute-0 ceph-mon[191910]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:10:59 compute-0 sudo[214279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:10:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:10:59 compute-0 sudo[214279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:59 compute-0 sudo[214279]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 systemd[1]: libpod-8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b.scope: Deactivated successfully.
Oct 02 19:10:59 compute-0 podman[214174]: 2025-10-02 19:10:59.680634946 +0000 UTC m=+1.678374664 container died 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-39924b3eb95e84d7c4bce3fae19d8ba47a377395e9c6dad4ba244e020e015f0f-merged.mount: Deactivated successfully.
Oct 02 19:10:59 compute-0 podman[157186]: time="2025-10-02T19:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:10:59 compute-0 podman[214174]: 2025-10-02 19:10:59.756315585 +0000 UTC m=+1.754055303 container remove 8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:10:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30686 "" "Go-http-client/1.1"
Oct 02 19:10:59 compute-0 systemd[1]: libpod-conmon-8e1ee27af4b4d03ee79e7a3308a026ccbb7ab27ab56978ac5689479c33ca192b.scope: Deactivated successfully.
Oct 02 19:10:59 compute-0 sudo[214306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:10:59 compute-0 sudo[214306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:10:59 compute-0 sudo[214306]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5845 "" "Go-http-client/1.1"
Oct 02 19:10:59 compute-0 sudo[214168]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:00 compute-0 sudo[214364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxzntyzdywogyygyhhfirbcsiwkccuuc ; /usr/bin/python3'
Oct 02 19:11:00 compute-0 sudo[214364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:00 compute-0 python3[214366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:00 compute-0 podman[214367]: 2025-10-02 19:11:00.336791711 +0000 UTC m=+0.086967459 container create 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:00 compute-0 podman[214367]: 2025-10-02 19:11:00.305447499 +0000 UTC m=+0.055623297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:00 compute-0 systemd[1]: Started libpod-conmon-43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3.scope.
Oct 02 19:11:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84b77c20d1ae7baa129a5be189ae355cd0e232449b59e98644f761a1d41a46e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84b77c20d1ae7baa129a5be189ae355cd0e232449b59e98644f761a1d41a46e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:00 compute-0 podman[214367]: 2025-10-02 19:11:00.50330738 +0000 UTC m=+0.253483148 container init 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:00 compute-0 podman[214367]: 2025-10-02 19:11:00.515049512 +0000 UTC m=+0.265225230 container start 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:00 compute-0 podman[214367]: 2025-10-02 19:11:00.520454465 +0000 UTC m=+0.270630293 container attach 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct 02 19:11:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 02 19:11:00 compute-0 ceph-mon[191910]: pgmap v74: 6 pgs: 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2882484075' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 19:11:00 compute-0 ceph-mon[191910]: osdmap e26: 3 total, 3 up, 3 in
Oct 02 19:11:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 02 19:11:00 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 02 19:11:00 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 02 19:11:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2546780657' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: ERROR   19:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: ERROR   19:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: ERROR   19:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:11:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: ERROR   19:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:11:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:11:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 02 19:11:01 compute-0 ceph-mon[191910]: osdmap e27: 3 total, 3 up, 3 in
Oct 02 19:11:01 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2546780657' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 19:11:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2546780657' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 19:11:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 02 19:11:01 compute-0 vibrant_allen[214381]: enabled application 'rbd' on pool 'vms'
Oct 02 19:11:01 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 02 19:11:01 compute-0 systemd[1]: libpod-43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3.scope: Deactivated successfully.
Oct 02 19:11:01 compute-0 podman[214367]: 2025-10-02 19:11:01.83751961 +0000 UTC m=+1.587695378 container died 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c84b77c20d1ae7baa129a5be189ae355cd0e232449b59e98644f761a1d41a46e-merged.mount: Deactivated successfully.
Oct 02 19:11:02 compute-0 podman[214367]: 2025-10-02 19:11:02.098849826 +0000 UTC m=+1.849025564 container remove 43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3 (image=quay.io/ceph/ceph:v18, name=vibrant_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:11:02 compute-0 systemd[1]: libpod-conmon-43ca1920b19b578e4bd106b155ed89d010c04a771f06df41c1b624c18336b0e3.scope: Deactivated successfully.
Oct 02 19:11:02 compute-0 sudo[214364]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:02 compute-0 sudo[214440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmpseqmleqomckcxobrzlandzrafyws ; /usr/bin/python3'
Oct 02 19:11:02 compute-0 sudo[214440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:02 compute-0 python3[214442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:02 compute-0 podman[214443]: 2025-10-02 19:11:02.638292652 +0000 UTC m=+0.080204959 container create f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:02 compute-0 systemd[1]: Started libpod-conmon-f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407.scope.
Oct 02 19:11:02 compute-0 podman[214443]: 2025-10-02 19:11:02.608343817 +0000 UTC m=+0.050256204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194fd668aa1c942b9c399522546d0887ad4efb3356ce04e2e6ac044fe4b03ef5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194fd668aa1c942b9c399522546d0887ad4efb3356ce04e2e6ac044fe4b03ef5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:02 compute-0 ceph-mon[191910]: pgmap v77: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2546780657' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 19:11:02 compute-0 ceph-mon[191910]: osdmap e28: 3 total, 3 up, 3 in
Oct 02 19:11:02 compute-0 podman[214443]: 2025-10-02 19:11:02.8058845 +0000 UTC m=+0.247796887 container init f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:11:02 compute-0 podman[214443]: 2025-10-02 19:11:02.817313263 +0000 UTC m=+0.259225600 container start f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:02 compute-0 podman[214443]: 2025-10-02 19:11:02.861343752 +0000 UTC m=+0.303256149 container attach f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1422860418' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:11:03
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Some PGs (0.142857) are inactive; try again later
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
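The balancer's "Some PGs (0.142857) are inactive" above is 1 of the 7 PGs still creating+peering (1/7 = 0.142857, consistent with pgmap v79). The pg_autoscaler lines that follow show each pool's fractional PG target being quantized: '.mgr' stays at 1, while the empty RBD and CephFS pools are raised to 32. A minimal Python sketch of that quantization, assuming round-to-nearest-power-of-two with a per-pool floor — an illustration of the arithmetic visible in the log, not the actual pg_autoscaler code:

    # Sketch only: reproduces the "pg target X quantized to Y" values above
    # under the assumption of power-of-two rounding with a per-pool floor.
    def quantize_pg_target(raw_target: float, pg_min: int) -> int:
        target = max(round(raw_target), pg_min)
        p = 1
        while p * 2 <= target:
            p *= 2
        # pick whichever power of two is closer to the floored target
        return p * 2 if (target - p) > (p * 2 - target) else p

    print(quantize_pg_target(0.0021557249951162337, pg_min=1))   # -> 1,  like '.mgr'
    print(quantize_pg_target(0.0, pg_min=32))                    # -> 32, like 'vms'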
Oct 02 19:11:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 02 19:11:03 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1422860418' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 19:11:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1422860418' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 02 19:11:03 compute-0 exciting_fermat[214458]: enabled application 'rbd' on pool 'volumes'
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 02 19:11:03 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 7c1dfbcd-e7c4-4271-ab18-c874db8678d6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 19:11:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:03 compute-0 systemd[1]: libpod-f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407.scope: Deactivated successfully.
Oct 02 19:11:03 compute-0 podman[214443]: 2025-10-02 19:11:03.880274454 +0000 UTC m=+1.322186781 container died f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:11:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-194fd668aa1c942b9c399522546d0887ad4efb3356ce04e2e6ac044fe4b03ef5-merged.mount: Deactivated successfully.
Oct 02 19:11:03 compute-0 podman[214443]: 2025-10-02 19:11:03.967232982 +0000 UTC m=+1.409145319 container remove f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407 (image=quay.io/ceph/ceph:v18, name=exciting_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:03 compute-0 systemd[1]: libpod-conmon-f32488a14e4aa7bdd6384c429b5e742b888aca031245eb20e7bd66db24f1d407.scope: Deactivated successfully.
Oct 02 19:11:03 compute-0 sudo[214440]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:04 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:04 compute-0 sudo[214519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fedsyqdyjexcluvdybcwmnicmfisqmav ; /usr/bin/python3'
Oct 02 19:11:04 compute-0 sudo[214519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:04 compute-0 python3[214521]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
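The ansible task above shows the pattern that repeats through this section: each "osd pool application enable" call runs the ceph CLI in a short-lived quay.io/ceph/ceph:v18 container with /etc/ceph bind-mounted so the admin keyring is available, and the container is removed (--rm) as soon as the CLI exits. A sketch of the same loop in Python, with the fsid, image, and pool names taken from this log and everything else (function name, pool list, use of subprocess) illustrative rather than the playbook's actual code:

    # Sketch: run the containerized ceph CLI once per pool, as the ansible
    # tasks in this log do. The arguments mirror the logged invocation.
    import subprocess

    POOLS = ["vms", "volumes", "backups", "images"]  # RBD pools seen above

    def enable_rbd_app(pool: str) -> None:
        subprocess.run(
            ["podman", "run", "--rm", "--net=host", "--ipc=host",
             "--volume", "/etc/ceph:/etc/ceph:z",
             "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
             "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "osd", "pool", "application", "enable", pool, "rbd"],
            check=True,
        )

    for pool in POOLS:
        enable_rbd_app(pool)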
Oct 02 19:11:04 compute-0 podman[214522]: 2025-10-02 19:11:04.517751832 +0000 UTC m=+0.101598648 container create 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:11:04 compute-0 podman[214522]: 2025-10-02 19:11:04.483599615 +0000 UTC m=+0.067446481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:04 compute-0 systemd[1]: Started libpod-conmon-1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc.scope.
Oct 02 19:11:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851fae96ef8930aedb44b201397abf232ce98a4a7dd4b3ab7265f5c793da77fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851fae96ef8930aedb44b201397abf232ce98a4a7dd4b3ab7265f5c793da77fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:04 compute-0 podman[214522]: 2025-10-02 19:11:04.654751048 +0000 UTC m=+0.238597874 container init 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:11:04 compute-0 podman[214522]: 2025-10-02 19:11:04.66423995 +0000 UTC m=+0.248086746 container start 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:04 compute-0 podman[214522]: 2025-10-02 19:11:04.668289487 +0000 UTC m=+0.252136293 container attach 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:11:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 02 19:11:04 compute-0 ceph-mon[191910]: pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:04 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1422860418' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 19:11:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:04 compute-0 ceph-mon[191910]: osdmap e29: 3 total, 3 up, 3 in
Oct 02 19:11:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:04 compute-0 ceph-mon[191910]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 02 19:11:04 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 02 19:11:04 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 496cb6d3-4776-4786-9d63-df08b9f71e1c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 19:11:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/37851085' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 podman[214560]: 2025-10-02 19:11:05.696606539 +0000 UTC m=+0.110036532 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/37851085' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 02 19:11:05 compute-0 inspiring_gould[214536]: enabled application 'rbd' on pool 'backups'
Oct 02 19:11:05 compute-0 podman[214585]: 2025-10-02 19:11:05.865032389 +0000 UTC m=+0.117742546 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 02 19:11:05 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 92604024-1635-4272-8bb0-f1ba1d87acb8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 19:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:05 compute-0 ceph-mon[191910]: osdmap e30: 3 total, 3 up, 3 in
Oct 02 19:11:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/37851085' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:05 compute-0 systemd[1]: libpod-1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc.scope: Deactivated successfully.
Oct 02 19:11:05 compute-0 podman[214522]: 2025-10-02 19:11:05.911145873 +0000 UTC m=+1.494992659 container died 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-851fae96ef8930aedb44b201397abf232ce98a4a7dd4b3ab7265f5c793da77fe-merged.mount: Deactivated successfully.
Oct 02 19:11:05 compute-0 podman[214522]: 2025-10-02 19:11:05.981096469 +0000 UTC m=+1.564943255 container remove 1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc (image=quay.io/ceph/ceph:v18, name=inspiring_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:06 compute-0 systemd[1]: libpod-conmon-1cfc205713762c5483c2fdfeaece485725e060737288b70852368f63fd794afc.scope: Deactivated successfully.
Oct 02 19:11:06 compute-0 sudo[214519]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:06 compute-0 sudo[214642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomsoqayfcgopvacogmbtdmljnftoxlw ; /usr/bin/python3'
Oct 02 19:11:06 compute-0 sudo[214642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:06 compute-0 python3[214644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:06 compute-0 podman[214645]: 2025-10-02 19:11:06.445078883 +0000 UTC m=+0.101610097 container create e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:06 compute-0 podman[214645]: 2025-10-02 19:11:06.409600432 +0000 UTC m=+0.066131706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:06 compute-0 systemd[1]: Started libpod-conmon-e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a.scope.
Oct 02 19:11:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c4c87f904790daff9892b5673ec3432e75dd29fd8ef9f2848c47fd5fb45155/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c4c87f904790daff9892b5673ec3432e75dd29fd8ef9f2848c47fd5fb45155/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:06 compute-0 podman[214645]: 2025-10-02 19:11:06.589317802 +0000 UTC m=+0.245849096 container init e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:11:06 compute-0 podman[214645]: 2025-10-02 19:11:06.607969127 +0000 UTC m=+0.264500331 container start e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:11:06 compute-0 podman[214645]: 2025-10-02 19:11:06.61411308 +0000 UTC m=+0.270644314 container attach e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 02 19:11:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 02 19:11:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 02 19:11:06 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 02 19:11:06 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 540ae037-8e68-4157-bcc7-241a8c3a4760 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 19:11:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:06 compute-0 ceph-mon[191910]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/37851085' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: osdmap e31: 3 total, 3 up, 3 in
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:06 compute-0 ceph-mon[191910]: osdmap e32: 3 total, 3 up, 3 in
Oct 02 19:11:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3441873328' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=9.113685608s) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active pruub 52.804630280s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=9.113685608s) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown pruub 52.804630280s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.7( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.12( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1f( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1e( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.19( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.17( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.18( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.1( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.4( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.6( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.b( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.2( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=19/20 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 31 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=15.016342163s) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active pruub 51.947132111s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=15.016342163s) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown pruub 51.947132111s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.2( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.4( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.5( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.18( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.19( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.1d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.8( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.9( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.6( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.7( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.10( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.11( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.12( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.13( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.14( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.15( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.16( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.17( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 32 pg[2.3( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
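The burst of ceph-osd lines above is the direct effect of the pg_num increases: when a pool grows from 1 to 32 PGs, the new PGs appear at the next osdmap epoch and each walks Start -> Primary -> Peering before going active, so every split logs one "transitioning to Primary" line. A small sketch for summarizing such a storm from a saved copy of this journal (the file name is hypothetical; the regex targets the ceph-osd line format seen here):

    # Sketch: count "transitioning to Primary" lines per OSD and pool id.
    import re
    from collections import Counter

    PG_RE = re.compile(
        r"ceph-osd\[\d+\]: (osd\.\d+) pg_epoch: \d+ "
        r"pg\[(\d+)\.[0-9a-f]+\(.*transitioning to Primary")

    counts = Counter()
    with open("compute-0-journal.log") as log:  # hypothetical journal extract
        for line in log:
            m = PG_RE.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1  # (osd, pool) -> PGs

    for (osd, pool), n in sorted(counts.items()):
        print(f"{osd}: pool {pool}: {n} PGs became primary")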
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3441873328' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 boring_feistel[214660]: enabled application 'rbd' on pool 'images'
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 02 19:11:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:11:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 8cad4829-a654-4d76-b886-8e78c8e22084 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3441873328' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3441873328' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:07 compute-0 ceph-mon[191910]: osdmap e33: 3 total, 3 up, 3 in
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=12.540251732s) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 49.823619843s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=12.540251732s) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown pruub 49.823619843s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 systemd[1]: libpod-e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a.scope: Deactivated successfully.
Oct 02 19:11:07 compute-0 podman[214645]: 2025-10-02 19:11:07.933109255 +0000 UTC m=+1.589640479 container died e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.14( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.12( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.10( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=31/33 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.e( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 33 pg[2.1( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [2] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.2( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.4( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=31/33 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.b( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.19( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=19/19 les/c/f=20/20/0 sis=31) [1] r=0 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c4c87f904790daff9892b5673ec3432e75dd29fd8ef9f2848c47fd5fb45155-merged.mount: Deactivated successfully.
Oct 02 19:11:08 compute-0 podman[214645]: 2025-10-02 19:11:08.035866662 +0000 UTC m=+1.692397866 container remove e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a (image=quay.io/ceph/ceph:v18, name=boring_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:11:08 compute-0 systemd[1]: libpod-conmon-e60d795731e60c758cec31623ce2a1b2d384d8f2c4995604d6866fc1e1245f4a.scope: Deactivated successfully.
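Taken together, the boring_feistel lines trace the full teardown of a one-shot "podman run --rm" container under systemd: libpod scope deactivated, container died, overlay mount unmounted, container removed, conmon scope deactivated. The same lifecycle can be observed through podman's event stream; a minimal sketch (container name copied from the log above):

# Tail recent lifecycle events for the one-shot ceph client container.
podman events --since 5m --filter container=boring_feistel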
Oct 02 19:11:08 compute-0 sudo[214642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:08 compute-0 sudo[214720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkedficudmebjhuqnidsmczkpfyxrvfw ; /usr/bin/python3'
Oct 02 19:11:08 compute-0 sudo[214720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:08 compute-0 python3[214722]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
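This Ansible command task wraps a disposable ceph client container instead of a host-installed CLI. Unrolled into the shell command it runs (all values copied verbatim from the log line; the fsid and image tag are specific to this deployment):

podman run --rm --net=host --ipc=host \
  --volume /etc/ceph:/etc/ceph:z \
  --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
  --entrypoint ceph quay.io/ceph/ceph:v18 \
  --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  osd pool application enable cephfs.cephfs.meta cephfs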
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=10.951191902s) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 62.187263489s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=10.951191902s) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown pruub 62.187263489s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 podman[214723]: 2025-10-02 19:11:08.514163816 +0000 UTC m=+0.080479027 container create eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:08 compute-0 podman[214723]: 2025-10-02 19:11:08.485176687 +0000 UTC m=+0.051491878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:08 compute-0 systemd[1]: Started libpod-conmon-eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70.scope.
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress WARNING root] Starting Global Recovery Event, 124 pgs not in active + clean state
Oct 02 19:11:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c94275fe68ca890ab114bab0a766d2c80f74618fc2de66d0a755949d93a8071/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c94275fe68ca890ab114bab0a766d2c80f74618fc2de66d0a755949d93a8071/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:08 compute-0 podman[214723]: 2025-10-02 19:11:08.651488171 +0000 UTC m=+0.217803392 container init eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:08 compute-0 podman[214723]: 2025-10-02 19:11:08.662270537 +0000 UTC m=+0.228585728 container start eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:08 compute-0 podman[214723]: 2025-10-02 19:11:08.669112418 +0000 UTC m=+0.235427669 container attach eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 02 19:11:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 02 19:11:08 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev b6d4a4fb-a585-4358-b2d4-b65fe579ad3a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 7c1dfbcd-e7c4-4271-ab18-c874db8678d6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 7c1dfbcd-e7c4-4271-ab18-c874db8678d6 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 496cb6d3-4776-4786-9d63-df08b9f71e1c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 496cb6d3-4776-4786-9d63-df08b9f71e1c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 92604024-1635-4272-8bb0-f1ba1d87acb8 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 92604024-1635-4272-8bb0-f1ba1d87acb8 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 540ae037-8e68-4157-bcc7-241a8c3a4760 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 540ae037-8e68-4157-bcc7-241a8c3a4760 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 8cad4829-a654-4d76-b886-8e78c8e22084 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 8cad4829-a654-4d76-b886-8e78c8e22084 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 second
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev b6d4a4fb-a585-4358-b2d4-b65fe579ad3a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 02 19:11:08 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event b6d4a4fb-a585-4358-b2d4-b65fe579ad3a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
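All six autoscaler events (pools 2 through 7) finish within seconds of each other because the pools are still empty, so raising pg_num moves no data. Their status can be checked from the mgr with standard CLI commands; a minimal sketch, assuming an admin session:

ceph progress                    # in-flight and recently completed mgr progress events
ceph osd pool autoscale-status   # per-pool target vs. actual PG counts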
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-mon[191910]: pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
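At this point the pgmap still reports 62 of 69 PGs as unknown: the PGs just created by the pg_num increases exist in the osdmap, but their primaries have not yet peered and reported stats. Convergence to active+clean can be confirmed with standard CLI commands; a minimal sketch, assuming an admin session:

ceph pg stat   # one-line count of PGs per state
ceph -s        # cluster health summary, including the global recovery event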
Oct 02 19:11:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:11:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:11:08 compute-0 ceph-mon[191910]: osdmap e34: 3 total, 3 up, 3 in
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=22/23 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=33/34 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=22/22 les/c/f=23/23/0 sis=33) [2] r=0 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=33/34 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:08 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3333617489' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3333617489' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 02 19:11:09 compute-0 optimistic_gates[214738]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 02 19:11:09 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 02 19:11:09 compute-0 ceph-mon[191910]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:09 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3333617489' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 02 19:11:09 compute-0 systemd[1]: libpod-eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70.scope: Deactivated successfully.
Oct 02 19:11:09 compute-0 podman[214723]: 2025-10-02 19:11:09.958825727 +0000 UTC m=+1.525140908 container died eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:11:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 02 19:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c94275fe68ca890ab114bab0a766d2c80f74618fc2de66d0a755949d93a8071-merged.mount: Deactivated successfully.
Oct 02 19:11:10 compute-0 podman[214723]: 2025-10-02 19:11:10.024652534 +0000 UTC m=+1.590967685 container remove eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70 (image=quay.io/ceph/ceph:v18, name=optimistic_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:11:10 compute-0 sudo[214720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:10 compute-0 systemd[1]: libpod-conmon-eac2a4d3700abd10d6da598c50998be6399a923682a6891fc71c9903fb197c70.scope: Deactivated successfully.
Oct 02 19:11:10 compute-0 podman[214763]: 2025-10-02 19:11:10.092730861 +0000 UTC m=+0.103674072 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:11:10 compute-0 podman[214765]: 2025-10-02 19:11:10.159859983 +0000 UTC m=+0.170083455 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.484011650s) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active pruub 65.417579651s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.484011650s) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown pruub 65.417579651s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 sudo[214839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uenvdcgmcafmbqewedmmapvdidfgvbye ; /usr/bin/python3'
Oct 02 19:11:10 compute-0 sudo[214839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:10 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 02 19:11:10 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 02 19:11:10 compute-0 python3[214841]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:10 compute-0 podman[214842]: 2025-10-02 19:11:10.559589642 +0000 UTC m=+0.091096529 container create ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:10 compute-0 podman[214842]: 2025-10-02 19:11:10.527630663 +0000 UTC m=+0.059137630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:10 compute-0 systemd[1]: Started libpod-conmon-ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b.scope.
Oct 02 19:11:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d433727695878f9b23015d27326fefe3d4e0ed03093d88bfb7dc436d857130/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d433727695878f9b23015d27326fefe3d4e0ed03093d88bfb7dc436d857130/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 podman[214842]: 2025-10-02 19:11:10.714067862 +0000 UTC m=+0.245574739 container init ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:11:10 compute-0 podman[214842]: 2025-10-02 19:11:10.722590598 +0000 UTC m=+0.254097475 container start ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:11:10 compute-0 podman[214842]: 2025-10-02 19:11:10.727488088 +0000 UTC m=+0.258994985 container attach ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:11:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 02 19:11:10 compute-0 ceph-mon[191910]: pgmap v88: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3333617489' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 19:11:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:11:10 compute-0 ceph-mon[191910]: osdmap e35: 3 total, 3 up, 3 in
Oct 02 19:11:10 compute-0 ceph-mon[191910]: 3.1 scrub starts
Oct 02 19:11:10 compute-0 ceph-mon[191910]: 3.1 scrub ok
Oct 02 19:11:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 02 19:11:10 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 35 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=13.735353470s) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 60.922950745s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.16( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.10( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=13.735353470s) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown pruub 60.922950745s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.18( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.9( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.7( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.c( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.d( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.15( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.12( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.17( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1c( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.19( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1a( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=26/27 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.10( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=35/36 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:10 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [0] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 02 19:11:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1014568192' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 19:11:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 1 peering, 155 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 02 19:11:11 compute-0 ceph-mon[191910]: 4.1 scrub starts
Oct 02 19:11:11 compute-0 ceph-mon[191910]: 4.1 scrub ok
Oct 02 19:11:11 compute-0 ceph-mon[191910]: osdmap e36: 3 total, 3 up, 3 in
Oct 02 19:11:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1014568192' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 19:11:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1014568192' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 19:11:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 02 19:11:11 compute-0 youthful_germain[214857]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 02 19:11:11 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1e( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.10( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.13( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.12( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.16( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.15( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.11( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1d( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.14( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.17( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.b( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.9( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.6( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.8( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.4( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.7( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.3( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.5( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.d( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=35/37 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.e( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.18( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.2( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.1b( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 37 pg[7.19( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=26/26 les/c/f=27/27/0 sis=35) [1] r=0 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:12 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 02 19:11:12 compute-0 systemd[1]: libpod-ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b.scope: Deactivated successfully.
Oct 02 19:11:12 compute-0 podman[214842]: 2025-10-02 19:11:12.018545671 +0000 UTC m=+1.550052588 container died ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d433727695878f9b23015d27326fefe3d4e0ed03093d88bfb7dc436d857130-merged.mount: Deactivated successfully.
Oct 02 19:11:12 compute-0 systemd[193692]: Starting Mark boot as successful...
Oct 02 19:11:12 compute-0 systemd[193692]: Finished Mark boot as successful.
Oct 02 19:11:12 compute-0 podman[214842]: 2025-10-02 19:11:12.102476719 +0000 UTC m=+1.633983626 container remove ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b (image=quay.io/ceph/ceph:v18, name=youthful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:11:12 compute-0 systemd[1]: libpod-conmon-ef23654d5d0abf86558889555cea768cda9d2f168664a1ef90eaef196417b95b.scope: Deactivated successfully.
Oct 02 19:11:12 compute-0 sudo[214839]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:12 compute-0 ceph-mon[191910]: pgmap v91: 193 pgs: 1 peering, 155 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1014568192' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 19:11:12 compute-0 ceph-mon[191910]: osdmap e37: 3 total, 3 up, 3 in
Oct 02 19:11:12 compute-0 ceph-mon[191910]: 3.2 scrub starts
Oct 02 19:11:12 compute-0 ceph-mon[191910]: 3.2 scrub ok
Oct 02 19:11:12 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 02 19:11:12 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 02 19:11:13 compute-0 python3[214969]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:11:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:13 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 9 completed events
Oct 02 19:11:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:11:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:13 compute-0 python3[215040]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432272.8237097-33695-158117949578753/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:13 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 19:11:13 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 19:11:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:14 compute-0 sudo[215140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kazpxagpgndurjvhsibfiwcaiwormiyj ; /usr/bin/python3'
Oct 02 19:11:14 compute-0 sudo[215140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:14 compute-0 ceph-mon[191910]: 3.3 scrub starts
Oct 02 19:11:14 compute-0 ceph-mon[191910]: 3.3 scrub ok
Oct 02 19:11:14 compute-0 ceph-mon[191910]: pgmap v93: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:14 compute-0 ceph-mon[191910]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 19:11:14 compute-0 ceph-mon[191910]: Cluster is now healthy
Oct 02 19:11:14 compute-0 python3[215142]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:11:14 compute-0 sudo[215140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:14 compute-0 sudo[215215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbftkjswsmytuyaeuvbpltjivsanewyc ; /usr/bin/python3'
Oct 02 19:11:14 compute-0 sudo[215215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:15 compute-0 podman[215217]: 2025-10-02 19:11:15.11384281 +0000 UTC m=+0.113156584 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:11:15 compute-0 python3[215218]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432274.1866732-33709-80874183940367/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a1ec99b3098cc82c555375a7982c26d4a9d2b54c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:15 compute-0 sudo[215215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 sudo[215288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncneppkmmxnoltmlkuhjvwehdtvaoxu ; /usr/bin/python3'
Oct 02 19:11:15 compute-0 sudo[215288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 02 19:11:15 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278950691s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.174049377s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278483391s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173599243s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278388023s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173599243s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.324273109s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219520569s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278080940s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173336029s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.324236870s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219520569s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278033257s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173336029s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.278652191s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.174049377s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.280284882s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.175945282s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.323628426s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219345093s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.280248642s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.175945282s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.277276039s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173110962s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.323534012s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219345093s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.277917862s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173919678s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.323596001s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219657898s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.277873039s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173919678s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.323564529s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219657898s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276862144s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173152924s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276834488s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173152924s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.323329926s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219848633s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276274681s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173069000s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276206970s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173110962s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322380066s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219665527s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322341919s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219665527s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.275568962s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173069000s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276218414s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.173816681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322302818s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219917297s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.276183128s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.173816681s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322233200s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219917297s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322060585s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219940186s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322025299s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219940186s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.321938515s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219985962s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.321899414s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219985962s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322283745s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220420837s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322251320s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220420837s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.321769714s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.219985962s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.321731567s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219985962s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.274001122s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172340393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322241783s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220634460s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.273955345s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172340393s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322206497s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220634460s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.274132729s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172603607s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.274098396s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172603607s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.273894310s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172489166s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.322212219s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220863342s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.273860931s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172489166s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.273046494s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172557831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.320789337s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.221046448s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.272306442s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172557831s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.271798134s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172809601s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.271767616s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172809601s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.319402695s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220603943s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.319374084s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220603943s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270757675s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172107697s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270732880s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172107697s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.319070816s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220603943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.319046021s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220603943s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270355225s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172054291s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270332336s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172054291s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270040512s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.171993256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.270014763s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.171993256s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318853378s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220993042s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318823814s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220993042s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318392754s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220687866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318366051s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220687866s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.269388199s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.171825409s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.269360542s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.171825409s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318407059s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.221023560s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318378448s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.221023560s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.318260193s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.221046448s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.229423523s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.132472992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.229382515s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.132472992s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.317625999s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.220893860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.317597389s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220893860s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.268712997s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172210693s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.268675804s) [2] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172210693s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.317340851s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 64.221061707s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.317171097s) [0] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.221061707s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.268245697s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 60.172332764s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=31/33 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.268218040s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.172332764s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.315481186s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.219848633s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=35/37 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38 pruub=12.312547684s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.220863342s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.5( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.8( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.e( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.11( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.15( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.11( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.16( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.18( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.1c( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.221196175s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.294567108s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.221154213s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.294567108s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.221023560s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.294532776s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.220974922s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.294532776s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.221019745s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.294704437s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.220970154s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.294704437s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249058723s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.323043823s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249011993s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.323043823s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.250180244s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324451447s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231619835s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.305946350s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.250144005s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324451447s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231586456s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.305946350s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249942780s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324462891s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249904633s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324462891s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231824875s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.306415558s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.1d( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231797218s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.306415558s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.232155800s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.306926727s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.250007629s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324836731s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.232115746s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.306926727s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249963760s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324836731s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231937408s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307041168s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231798172s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.306934357s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231896400s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307041168s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231767654s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.306934357s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231650352s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.306983948s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249222755s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324558258s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249194145s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324558258s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231625557s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.306983948s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249091148s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324687958s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.249055862s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324687958s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231434822s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307094574s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231344223s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307094574s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248740196s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324768066s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248635292s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324768066s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231371880s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307556152s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.231327057s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307556152s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248526573s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324832916s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248492241s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324832916s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248508453s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325061798s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248473167s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325061798s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247879982s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324501038s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.230414391s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307106018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247836113s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324501038s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248202324s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324958801s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.230376244s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307106018s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.248176575s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324958801s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247969627s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325019836s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247928619s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325019836s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.230293274s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307617188s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.230245590s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307617188s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247445107s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325080872s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.247414589s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325080872s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.229826927s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307861328s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.229779243s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307861328s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246809006s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325099945s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246770859s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325099945s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245652199s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.324066162s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245616913s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.324066162s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.229084969s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307659149s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246505737s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325119019s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.229046822s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307659149s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246479988s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325119019s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246329308s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325134277s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228813171s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307636261s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.246304512s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325134277s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228773117s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307636261s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245982170s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325157166s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245951653s) [1] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325157166s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245880127s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325229645s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245855331s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325229645s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228187561s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307701111s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.15( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.d( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.7( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.1b( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.3( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.4( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245578766s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325168610s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228114128s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307701111s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245549202s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325168610s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228005409s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307792664s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.6( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245389938s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325187683s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.227978706s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307792664s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228266716s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.308101654s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.245337486s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325187683s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.227055550s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307785034s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.227007866s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307785034s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.225414276s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 54.307540894s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.225367546s) [0] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.307540894s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.7( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.8( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.17( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.c( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.1( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.e( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.5( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.240482330s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 53.325202942s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=31/33 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=8.240440369s) [0] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.325202942s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.1d( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.228138924s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 54.308101654s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.9( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.1a( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[3.1e( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.a( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.2( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[7.1( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.1e( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[2.a( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.f( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.c( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.18( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[5.19( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.19( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.18( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.1f( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.f( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.4( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.c( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.7( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.1d( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.18( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.9( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.6( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.4( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.1c( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.f( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.3( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.2( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.5( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.1f( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.2( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.3( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.3( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.f( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.b( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.a( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.8( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.13( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.16( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.15( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.13( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[5.14( empty local-lis/les=0/0 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[2.11( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[7.1b( empty local-lis/les=0/0 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.204381943s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686500549s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.204336166s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686500549s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263361931s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.745674133s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263273239s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.745674133s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263074875s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.745697021s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263049126s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.745697021s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263008118s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.745780945s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262988091s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.745780945s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 python3[215290]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203462601s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686447144s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203427315s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686447144s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203398705s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686500549s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263102531s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746284485s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203342438s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686500549s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203250885s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686462402s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203212738s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686462402s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.263064384s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746284485s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.203001022s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686439514s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262883186s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746353149s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202960014s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686439514s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262842178s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746353149s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202734947s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686340332s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202703476s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686340332s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202637672s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686325073s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202608109s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686325073s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202356339s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686180115s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202318192s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686180115s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262421608s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746398926s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262389183s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746398926s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202305794s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686370850s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.269240379s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.753082275s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.202282906s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686370850s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262153625s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746429443s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.262121201s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746429443s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261985779s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746475220s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261944771s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746452332s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261961937s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746475220s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.14( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261874199s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746452332s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.12( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.268486023s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.753082275s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.201395035s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686172485s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.10( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.201348305s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686172485s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261564255s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746559143s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.201350212s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686401367s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261525154s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746559143s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.201319695s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686401367s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261462212s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746604919s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200697899s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685905457s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261361122s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746604919s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200669289s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685905457s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261332512s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746604919s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200598717s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685997009s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200572014s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685997009s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200423241s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685882568s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200391769s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685882568s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200252533s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685844421s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200222969s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685844421s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.261013031s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746658325s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260979652s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746658325s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200019836s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685867310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200143814s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.686050415s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199986458s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685867310s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.200052261s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.686050415s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260554314s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746604919s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260543823s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746688843s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260514259s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746688843s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199517250s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685745239s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199493408s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685745239s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199347496s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685691833s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199316978s) [1] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685691833s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260359764s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746795654s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199247360s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active pruub 67.685691833s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260257721s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746810913s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260230064s) [2] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746810913s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.260211945s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746795654s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38 pruub=9.199078560s) [2] r=-1 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.685691833s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.259994507s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746833801s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.259945869s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746833801s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.259843826s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 69.746856689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:11:15 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=11.259807587s) [1] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.746856689s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.18( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.15( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.14( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.13( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.11( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.c( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.11( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.13( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.e( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.f( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.2( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.1a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.8( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.1b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.1( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[4.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 38 pg[6.1f( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.e( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.d( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.1( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.4( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.b( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.6( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.4( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.7( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 38 pg[6.1d( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:15 compute-0 podman[215291]: 2025-10-02 19:11:15.805685072 +0000 UTC m=+0.057001884 container create afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:11:15 compute-0 systemd[1]: Started libpod-conmon-afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9.scope.
Oct 02 19:11:15 compute-0 podman[215291]: 2025-10-02 19:11:15.784352896 +0000 UTC m=+0.035669748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ad4d1837eff79b3c42ad327f3ac214d5a3ace7aa3a318b51bab6a3b42369b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ad4d1837eff79b3c42ad327f3ac214d5a3ace7aa3a318b51bab6a3b42369b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ad4d1837eff79b3c42ad327f3ac214d5a3ace7aa3a318b51bab6a3b42369b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:15 compute-0 podman[215291]: 2025-10-02 19:11:15.952878098 +0000 UTC m=+0.204194980 container init afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:15 compute-0 podman[215291]: 2025-10-02 19:11:15.969992182 +0000 UTC m=+0.221309024 container start afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Oct 02 19:11:15 compute-0 podman[215291]: 2025-10-02 19:11:15.977007569 +0000 UTC m=+0.228324361 container attach afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:11:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Oct 02 19:11:15 compute-0 podman[215304]: 2025-10-02 19:11:15.990863036 +0000 UTC m=+0.121416683 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:11:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 02 19:11:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3197328049' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:11:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3197328049' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 19:11:16 compute-0 dreamy_feistel[215312]: 
Oct 02 19:11:16 compute-0 dreamy_feistel[215312]: [global]
Oct 02 19:11:16 compute-0 dreamy_feistel[215312]:         fsid = 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:11:16 compute-0 dreamy_feistel[215312]:         mon_host = 192.168.122.100
Oct 02 19:11:16 compute-0 systemd[1]: libpod-afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9.scope: Deactivated successfully.
Oct 02 19:11:16 compute-0 conmon[215312]: conmon afb2b7fd7bb1ed6d3cc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9.scope/container/memory.events
Oct 02 19:11:16 compute-0 podman[215354]: 2025-10-02 19:11:16.626019183 +0000 UTC m=+0.042733065 container died afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:11:16 compute-0 sudo[215351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:16 compute-0 sudo[215351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:16 compute-0 sudo[215351]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 02 19:11:16 compute-0 ceph-mon[191910]: pgmap v94: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:11:16 compute-0 ceph-mon[191910]: osdmap e38: 3 total, 3 up, 3 in
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3197328049' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 19:11:16 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3197328049' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 19:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-13ad4d1837eff79b3c42ad327f3ac214d5a3ace7aa3a318b51bab6a3b42369b1-merged.mount: Deactivated successfully.
Oct 02 19:11:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 02 19:11:16 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.1f( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.1e( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.19( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.1b( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.11( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.14( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.13( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.15( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.16( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.14( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.12( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.15( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.13( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.1b( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 podman[215354]: 2025-10-02 19:11:16.717508232 +0000 UTC m=+0.134222084 container remove afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9 (image=quay.io/ceph/ceph:v18, name=dreamy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.13( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.1c( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.11( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.11( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.13( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.11( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.14( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.15( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.1c( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.a( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.8( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.8( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.a( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.1( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.5( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.1( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.2( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.e( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.c( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.f( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.1a( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.e( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.1b( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [2] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[7.1a( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[6.15( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [2] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 39 pg[4.18( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [2] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.16( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.13( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.b( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.f( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.3( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.3( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.2( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.1f( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.5( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.2( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.f( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.1c( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.4( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.9( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.18( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.1d( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.8( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[5.7( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [0] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.11( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.10( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.12( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.1d( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.b( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.e( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.5( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.3( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.9( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.1( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.4( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.6( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.4( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.2( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.7( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.2( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.7( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.d( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.c( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.4( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=31/19 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[7.1f( empty local-lis/les=38/39 n=0 ec=35/26 lis/c=35/35 les/c/f=37/37/0 sis=38) [0] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 39 pg[2.18( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [0] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.1( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=38/39 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=38) [1] r=0 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.f( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.d( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.c( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=38/39 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=38) [1] r=0 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.18( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=38/39 n=0 ec=33/22 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 39 pg[4.f( empty local-lis/les=38/39 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=38) [1] r=0 lpr=38 pi=[33,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:16 compute-0 systemd[1]: libpod-conmon-afb2b7fd7bb1ed6d3cc9c6078dbe06f3aa6d10cd8eb98695ce973f5a9b5295c9.scope: Deactivated successfully.
Oct 02 19:11:16 compute-0 sudo[215392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:16 compute-0 sudo[215392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:16 compute-0 sudo[215392]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:16 compute-0 sudo[215288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:16 compute-0 sudo[215417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:16 compute-0 sudo[215417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:16 compute-0 sudo[215417]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:16 compute-0 sudo[215483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvtcopmdqadubqinfdxzbsnksamtxsi ; /usr/bin/python3'
Oct 02 19:11:16 compute-0 sudo[215483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:16 compute-0 sudo[215449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:11:16 compute-0 sudo[215449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:17 compute-0 python3[215490]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:17 compute-0 podman[215495]: 2025-10-02 19:11:17.145642714 +0000 UTC m=+0.065789617 container create 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:17 compute-0 systemd[1]: Started libpod-conmon-895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772.scope.
Oct 02 19:11:17 compute-0 podman[215495]: 2025-10-02 19:11:17.114932169 +0000 UTC m=+0.035079092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b046af6968258b90c7472fac20901c7c937fb13673d8956cec077393f35ea123/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b046af6968258b90c7472fac20901c7c937fb13673d8956cec077393f35ea123/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b046af6968258b90c7472fac20901c7c937fb13673d8956cec077393f35ea123/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:17 compute-0 podman[215495]: 2025-10-02 19:11:17.323794043 +0000 UTC m=+0.243940976 container init 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:11:17 compute-0 podman[215495]: 2025-10-02 19:11:17.331702182 +0000 UTC m=+0.251849095 container start 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:11:17 compute-0 podman[215495]: 2025-10-02 19:11:17.376213674 +0000 UTC m=+0.296360637 container attach 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:11:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:17 compute-0 ceph-mon[191910]: 3.4 deep-scrub starts
Oct 02 19:11:17 compute-0 ceph-mon[191910]: 3.4 deep-scrub ok
Oct 02 19:11:17 compute-0 ceph-mon[191910]: osdmap e39: 3 total, 3 up, 3 in
Oct 02 19:11:17 compute-0 podman[215579]: 2025-10-02 19:11:17.859937112 +0000 UTC m=+0.166339546 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:11:18 compute-0 podman[215579]: 2025-10-02 19:11:18.003247195 +0000 UTC m=+0.309649559 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:11:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 02 19:11:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1536619138' entity='client.admin' 
Oct 02 19:11:18 compute-0 flamboyant_bell[215531]: set ssl_option
Oct 02 19:11:18 compute-0 systemd[1]: libpod-895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772.scope: Deactivated successfully.
Oct 02 19:11:18 compute-0 podman[215495]: 2025-10-02 19:11:18.169092447 +0000 UTC m=+1.089239360 container died 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b046af6968258b90c7472fac20901c7c937fb13673d8956cec077393f35ea123-merged.mount: Deactivated successfully.
Oct 02 19:11:18 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 02 19:11:18 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 02 19:11:18 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 35679cb7-7a40-4c7a-b1bd-3e3eab468cb2 (Global Recovery Event) in 10 seconds
Oct 02 19:11:18 compute-0 podman[215495]: 2025-10-02 19:11:18.708126032 +0000 UTC m=+1.628272975 container remove 895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772 (image=quay.io/ceph/ceph:v18, name=flamboyant_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:11:18 compute-0 systemd[1]: libpod-conmon-895fa4be5e995298459dedd67bc36651bd2cb2fc9a136ee25eee35df90fa0772.scope: Deactivated successfully.
Oct 02 19:11:18 compute-0 sudo[215483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 02 19:11:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 02 19:11:18 compute-0 podman[215668]: 2025-10-02 19:11:18.960077629 +0000 UTC m=+0.196768493 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, vcs-type=git, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543)
Oct 02 19:11:18 compute-0 sudo[215728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psahpzuzqfuqewsfpcsftriplvmzcwkj ; /usr/bin/python3'
Oct 02 19:11:18 compute-0 sudo[215728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:19 compute-0 python3[215735]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:19 compute-0 ceph-mon[191910]: pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1536619138' entity='client.admin' 
Oct 02 19:11:19 compute-0 podman[215753]: 2025-10-02 19:11:19.236967597 +0000 UTC m=+0.080781804 container create 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:19 compute-0 sudo[215449]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:19 compute-0 podman[215753]: 2025-10-02 19:11:19.210718421 +0000 UTC m=+0.054532668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:19 compute-0 systemd[1]: Started libpod-conmon-689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70.scope.
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 28f3b0e2-1335-4e9f-9662-37407f68ea60 does not exist
Oct 02 19:11:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ef7fea4f-da32-482c-95f2-c8ba80fd61c9 does not exist
Oct 02 19:11:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 386b00c2-2245-456d-8a05-0599bcccde87 does not exist
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49eb9496fc505d0a6a225d95c824171202d3a25a20dda98a982c39f44730480d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49eb9496fc505d0a6a225d95c824171202d3a25a20dda98a982c39f44730480d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49eb9496fc505d0a6a225d95c824171202d3a25a20dda98a982c39f44730480d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:19 compute-0 podman[215753]: 2025-10-02 19:11:19.575856512 +0000 UTC m=+0.419670779 container init 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:11:19 compute-0 podman[215753]: 2025-10-02 19:11:19.590166531 +0000 UTC m=+0.433980748 container start 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:19 compute-0 sudo[215781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:19 compute-0 podman[215753]: 2025-10-02 19:11:19.651832908 +0000 UTC m=+0.495647125 container attach 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:11:19 compute-0 sudo[215781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:19 compute-0 sudo[215781]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:19 compute-0 sudo[215807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:19 compute-0 sudo[215807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:19 compute-0 sudo[215807]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:19 compute-0 sudo[215832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:19 compute-0 sudo[215832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:19 compute-0 sudo[215832]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:19 compute-0 sudo[215857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:11:19 compute-0 sudo[215857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:20 compute-0 ceph-mon[191910]: 4.3 scrub starts
Oct 02 19:11:20 compute-0 ceph-mon[191910]: 4.3 scrub ok
Oct 02 19:11:20 compute-0 ceph-mon[191910]: 2.1 scrub starts
Oct 02 19:11:20 compute-0 ceph-mon[191910]: 2.1 scrub ok
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:20 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:20 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 19:11:20 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:20 compute-0 musing_lederberg[215777]: Scheduled rgw.rgw update...
Oct 02 19:11:20 compute-0 systemd[1]: libpod-689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70.scope: Deactivated successfully.
Oct 02 19:11:20 compute-0 podman[215753]: 2025-10-02 19:11:20.251458122 +0000 UTC m=+1.095272389 container died 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-49eb9496fc505d0a6a225d95c824171202d3a25a20dda98a982c39f44730480d-merged.mount: Deactivated successfully.
Oct 02 19:11:20 compute-0 podman[215753]: 2025-10-02 19:11:20.345547719 +0000 UTC m=+1.189361916 container remove 689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70 (image=quay.io/ceph/ceph:v18, name=musing_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:11:20 compute-0 systemd[1]: libpod-conmon-689887b56027a3a52aa0f4462bf83abe77135d9080eb71013e43d8c1f3fe5f70.scope: Deactivated successfully.
Oct 02 19:11:20 compute-0 sudo[215728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:20 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 02 19:11:20 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.472234672 +0000 UTC m=+0.069106245 container create 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:11:20 compute-0 systemd[1]: Started libpod-conmon-261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac.scope.
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.437044558 +0000 UTC m=+0.033916211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.587666565 +0000 UTC m=+0.184538148 container init 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.603065574 +0000 UTC m=+0.199937147 container start 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.607884222 +0000 UTC m=+0.204755815 container attach 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:11:20 compute-0 confident_ishizaka[215966]: 167 167
Oct 02 19:11:20 compute-0 systemd[1]: libpod-261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac.scope: Deactivated successfully.
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.612807723 +0000 UTC m=+0.209679306 container died 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0232f50df36d9f58c6fcfade9fd71a22b8fa46c0d873e0ffc6468a41f2aa51d8-merged.mount: Deactivated successfully.
Oct 02 19:11:20 compute-0 podman[215951]: 2025-10-02 19:11:20.677363576 +0000 UTC m=+0.274235149 container remove 261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:11:20 compute-0 systemd[1]: libpod-conmon-261e76c74bf9f606a2637cd455428bea8fb7fb38d099cf7155889df6362b7cac.scope: Deactivated successfully.
Oct 02 19:11:20 compute-0 podman[215988]: 2025-10-02 19:11:20.934254654 +0000 UTC m=+0.073198104 container create b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:11:20 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 02 19:11:21 compute-0 podman[215988]: 2025-10-02 19:11:20.909134347 +0000 UTC m=+0.048077877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:21 compute-0 systemd[1]: Started libpod-conmon-b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310.scope.
Oct 02 19:11:21 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 02 19:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 podman[215988]: 2025-10-02 19:11:21.105709044 +0000 UTC m=+0.244652524 container init b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:21 compute-0 podman[215988]: 2025-10-02 19:11:21.128322934 +0000 UTC m=+0.267266414 container start b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:11:21 compute-0 podman[215988]: 2025-10-02 19:11:21.134695073 +0000 UTC m=+0.273638563 container attach b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:21 compute-0 ceph-mon[191910]: pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:21 compute-0 ceph-mon[191910]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:21 compute-0 ceph-mon[191910]: Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 02 19:11:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 02 19:11:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:21 compute-0 python3[216083]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:11:22 compute-0 python3[216159]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432281.1140187-33750-71001844548754/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:22 compute-0 ceph-mon[191910]: 4.6 scrub starts
Oct 02 19:11:22 compute-0 ceph-mon[191910]: 4.6 scrub ok
Oct 02 19:11:22 compute-0 ceph-mon[191910]: 3.b scrub starts
Oct 02 19:11:22 compute-0 ceph-mon[191910]: 3.b scrub ok
Oct 02 19:11:22 compute-0 hungry_hertz[216003]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:11:22 compute-0 hungry_hertz[216003]: --> relative data size: 1.0
Oct 02 19:11:22 compute-0 hungry_hertz[216003]: --> All data devices are unavailable
Oct 02 19:11:22 compute-0 systemd[1]: libpod-b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310.scope: Deactivated successfully.
Oct 02 19:11:22 compute-0 systemd[1]: libpod-b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310.scope: Consumed 1.151s CPU time.
Oct 02 19:11:22 compute-0 podman[215988]: 2025-10-02 19:11:22.354695941 +0000 UTC m=+1.493639391 container died b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f1c988230e8629c477314e6cab9ea1e6509d1442730520af2fb65a347770b4-merged.mount: Deactivated successfully.
Oct 02 19:11:22 compute-0 sudo[216236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkaxivhvejpzozfcbggcnowtajesebdl ; /usr/bin/python3'
Oct 02 19:11:22 compute-0 sudo[216236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:22 compute-0 podman[215988]: 2025-10-02 19:11:22.524037986 +0000 UTC m=+1.662981466 container remove b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:22 compute-0 systemd[1]: libpod-conmon-b15723a16349f1e9289de861e03ec4e693431adea4b1b431e98763527b3cf310.scope: Deactivated successfully.
Oct 02 19:11:22 compute-0 sudo[215857]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:22 compute-0 sudo[216241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:22 compute-0 sudo[216241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:22 compute-0 sudo[216241]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:22 compute-0 python3[216240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                            _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:22 compute-0 sudo[216266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:22 compute-0 sudo[216266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:22 compute-0 sudo[216266]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:22 compute-0 podman[216269]: 2025-10-02 19:11:22.82752199 +0000 UTC m=+0.106937639 container create edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:11:22 compute-0 podman[216269]: 2025-10-02 19:11:22.761122398 +0000 UTC m=+0.040538147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:22 compute-0 sudo[216304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:22 compute-0 sudo[216304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:22 compute-0 sudo[216304]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 02 19:11:23 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 02 19:11:23 compute-0 sudo[216329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:11:23 compute-0 sudo[216329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:23 compute-0 systemd[1]: Started libpod-conmon-edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9.scope.
Oct 02 19:11:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd10487e5ca3ca4c175a376e3322f377b119c07e9988d4c9f7439cc33cc8ca7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd10487e5ca3ca4c175a376e3322f377b119c07e9988d4c9f7439cc33cc8ca7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd10487e5ca3ca4c175a376e3322f377b119c07e9988d4c9f7439cc33cc8ca7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:23 compute-0 podman[216269]: 2025-10-02 19:11:23.179861241 +0000 UTC m=+0.459276920 container init edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:11:23 compute-0 podman[216269]: 2025-10-02 19:11:23.198839315 +0000 UTC m=+0.478254974 container start edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:11:23 compute-0 podman[216269]: 2025-10-02 19:11:23.291938286 +0000 UTC m=+0.571353985 container attach edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:23 compute-0 ceph-mon[191910]: 4.b scrub starts
Oct 02 19:11:23 compute-0 ceph-mon[191910]: 4.b scrub ok
Oct 02 19:11:23 compute-0 ceph-mon[191910]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 10 completed events
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.62933273 +0000 UTC m=+0.039853808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.755030216 +0000 UTC m=+0.165551284 container create b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 19:11:23 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0[191892]: 2025-10-02T19:11:23.853+0000 7ff8b696d640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 19:11:23 compute-0 systemd[1]: Started libpod-conmon-b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0.scope.
Oct 02 19:11:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e2 new map
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e2 print_map
                                            e2
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        2
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-10-02T19:11:23.853937+0000
                                            modified        2025-10-02T19:11:23.854004+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        
                                            up        {}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                             
                                             
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.940087588 +0000 UTC m=+0.350608656 container init b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.955704552 +0000 UTC m=+0.366225580 container start b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:11:23 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 02 19:11:23 compute-0 quirky_chatelet[216435]: 167 167
Oct 02 19:11:23 compute-0 systemd[1]: libpod-b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0.scope: Deactivated successfully.
Oct 02 19:11:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.973566576 +0000 UTC m=+0.384087654 container attach b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:23 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 02 19:11:23 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 02 19:11:23 compute-0 podman[216418]: 2025-10-02 19:11:23.975168839 +0000 UTC m=+0.385689877 container died b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:11:24 compute-0 systemd[1]: libpod-edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9.scope: Deactivated successfully.
Oct 02 19:11:24 compute-0 podman[216269]: 2025-10-02 19:11:24.050254072 +0000 UTC m=+1.329669731 container died edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-729d1d5120a8739835b809b7bb9af46476b3ec229cd2016e16510448d2bc4396-merged.mount: Deactivated successfully.
Oct 02 19:11:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:24 compute-0 podman[216418]: 2025-10-02 19:11:24.179571564 +0000 UTC m=+0.590092612 container remove b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatelet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:24 compute-0 systemd[1]: libpod-conmon-b5f72fb1d96373d6604cda0100e181bc5f45e5265a1e743d863bf18efd31b3a0.scope: Deactivated successfully.
Oct 02 19:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd10487e5ca3ca4c175a376e3322f377b119c07e9988d4c9f7439cc33cc8ca7-merged.mount: Deactivated successfully.
Oct 02 19:11:24 compute-0 podman[216269]: 2025-10-02 19:11:24.308479965 +0000 UTC m=+1.587895614 container remove edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9 (image=quay.io/ceph/ceph:v18, name=beautiful_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:24 compute-0 systemd[1]: libpod-conmon-edf9888501386c5baf02824d14a792d8dfdbda918b5569c3a754532a93ba20d9.scope: Deactivated successfully.
Oct 02 19:11:24 compute-0 sudo[216236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:24 compute-0 podman[216471]: 2025-10-02 19:11:24.442506602 +0000 UTC m=+0.079521101 container create 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:11:24 compute-0 podman[216471]: 2025-10-02 19:11:24.403635351 +0000 UTC m=+0.040649880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:24 compute-0 systemd[1]: Started libpod-conmon-2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c.scope.
Oct 02 19:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:24 compute-0 sudo[216511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllqnfirjhojdcfmuepybmtqoghmhqwn ; /usr/bin/python3'
Oct 02 19:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca46ce66beaf5565b1d0736c42d897b0e0271fa5c1c1ec299251e7ee93c6f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca46ce66beaf5565b1d0736c42d897b0e0271fa5c1c1ec299251e7ee93c6f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:24 compute-0 sudo[216511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca46ce66beaf5565b1d0736c42d897b0e0271fa5c1c1ec299251e7ee93c6f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca46ce66beaf5565b1d0736c42d897b0e0271fa5c1c1ec299251e7ee93c6f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:24 compute-0 podman[216471]: 2025-10-02 19:11:24.709822877 +0000 UTC m=+0.346837396 container init 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:11:24 compute-0 podman[216471]: 2025-10-02 19:11:24.730092515 +0000 UTC m=+0.367107054 container start 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:24 compute-0 python3[216516]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:24 compute-0 podman[216471]: 2025-10-02 19:11:24.824020958 +0000 UTC m=+0.461035567 container attach 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:11:24 compute-0 ceph-mon[191910]: 3.d scrub starts
Oct 02 19:11:24 compute-0 ceph-mon[191910]: 3.d scrub ok
Oct 02 19:11:24 compute-0 ceph-mon[191910]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 19:11:24 compute-0 ceph-mon[191910]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 19:11:24 compute-0 ceph-mon[191910]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 19:11:24 compute-0 ceph-mon[191910]: osdmap e40: 3 total, 3 up, 3 in
Oct 02 19:11:24 compute-0 ceph-mon[191910]: fsmap cephfs:0
Oct 02 19:11:24 compute-0 ceph-mon[191910]: Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Oct 02 19:11:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Oct 02 19:11:24 compute-0 podman[216519]: 2025-10-02 19:11:24.891755415 +0000 UTC m=+0.055399571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:25 compute-0 podman[216519]: 2025-10-02 19:11:25.592879443 +0000 UTC m=+0.756523559 container create e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:25 compute-0 determined_allen[216512]: {
Oct 02 19:11:25 compute-0 determined_allen[216512]:     "0": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:         {
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "devices": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "/dev/loop3"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             ],
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_name": "ceph_lv0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_size": "21470642176",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "name": "ceph_lv0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "tags": {
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.crush_device_class": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.encrypted": "0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_id": "0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.vdo": "0"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             },
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "vg_name": "ceph_vg0"
Oct 02 19:11:25 compute-0 determined_allen[216512]:         }
Oct 02 19:11:25 compute-0 determined_allen[216512]:     ],
Oct 02 19:11:25 compute-0 determined_allen[216512]:     "1": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:         {
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "devices": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "/dev/loop4"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             ],
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_name": "ceph_lv1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_size": "21470642176",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "name": "ceph_lv1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "tags": {
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.crush_device_class": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.encrypted": "0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_id": "1",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.vdo": "0"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             },
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "vg_name": "ceph_vg1"
Oct 02 19:11:25 compute-0 determined_allen[216512]:         }
Oct 02 19:11:25 compute-0 determined_allen[216512]:     ],
Oct 02 19:11:25 compute-0 determined_allen[216512]:     "2": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:         {
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "devices": [
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "/dev/loop5"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             ],
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_name": "ceph_lv2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_size": "21470642176",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "name": "ceph_lv2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "tags": {
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.crush_device_class": "",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.encrypted": "0",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osd_id": "2",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:                 "ceph.vdo": "0"
Oct 02 19:11:25 compute-0 determined_allen[216512]:             },
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "type": "block",
Oct 02 19:11:25 compute-0 determined_allen[216512]:             "vg_name": "ceph_vg2"
Oct 02 19:11:25 compute-0 determined_allen[216512]:         }
Oct 02 19:11:25 compute-0 determined_allen[216512]:     ]
Oct 02 19:11:25 compute-0 determined_allen[216512]: }
Oct 02 19:11:25 compute-0 systemd[1]: libpod-2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c.scope: Deactivated successfully.
Oct 02 19:11:25 compute-0 podman[216471]: 2025-10-02 19:11:25.729061387 +0000 UTC m=+1.366075926 container died 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:11:25 compute-0 systemd[1]: Started libpod-conmon-e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef.scope.
Oct 02 19:11:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77c9845c614227794931f5d1726442bdf508f66a5068a76db9eb2ec0c8c714d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77c9845c614227794931f5d1726442bdf508f66a5068a76db9eb2ec0c8c714d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77c9845c614227794931f5d1726442bdf508f66a5068a76db9eb2ec0c8c714d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca46ce66beaf5565b1d0736c42d897b0e0271fa5c1c1ec299251e7ee93c6f73-merged.mount: Deactivated successfully.
Oct 02 19:11:25 compute-0 podman[216519]: 2025-10-02 19:11:25.852369349 +0000 UTC m=+1.016013445 container init e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:25 compute-0 podman[216519]: 2025-10-02 19:11:25.862032186 +0000 UTC m=+1.025676262 container start e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:25 compute-0 podman[216519]: 2025-10-02 19:11:25.986363335 +0000 UTC m=+1.150007451 container attach e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:11:26 compute-0 ceph-mon[191910]: 3.10 scrub starts
Oct 02 19:11:26 compute-0 ceph-mon[191910]: 3.10 scrub ok
Oct 02 19:11:26 compute-0 ceph-mon[191910]: 3.13 deep-scrub starts
Oct 02 19:11:26 compute-0 ceph-mon[191910]: 3.13 deep-scrub ok
Oct 02 19:11:26 compute-0 podman[216471]: 2025-10-02 19:11:26.047973991 +0000 UTC m=+1.684988520 container remove 2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:26 compute-0 systemd[1]: libpod-conmon-2d88fe085fe299b010ab51e11d80a3725bbd0152f39d730e11453967f651471c.scope: Deactivated successfully.
Oct 02 19:11:26 compute-0 sudo[216329]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 sudo[216553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:26 compute-0 sudo[216553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:26 compute-0 sudo[216553]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 sudo[216597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:26 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:26 compute-0 sudo[216597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:26 compute-0 ceph-mgr[192222]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:26 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 19:11:26 compute-0 sudo[216597]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:26 compute-0 lucid_clarke[216544]: Scheduled mds.cephfs update...
Oct 02 19:11:26 compute-0 systemd[1]: libpod-e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef.scope: Deactivated successfully.
Oct 02 19:11:26 compute-0 podman[216519]: 2025-10-02 19:11:26.463655903 +0000 UTC m=+1.627299979 container died e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:26 compute-0 sudo[216623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:26 compute-0 sudo[216623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:26 compute-0 sudo[216623]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d77c9845c614227794931f5d1726442bdf508f66a5068a76db9eb2ec0c8c714d-merged.mount: Deactivated successfully.
Oct 02 19:11:26 compute-0 sudo[216659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:11:26 compute-0 sudo[216659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:26 compute-0 podman[216519]: 2025-10-02 19:11:26.677723904 +0000 UTC m=+1.841367990 container remove e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef (image=quay.io/ceph/ceph:v18, name=lucid_clarke, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:26 compute-0 systemd[1]: libpod-conmon-e03fe5fe330a71ec20a8b207bcf8fcbc4b28579b802d79501ae964d45f89f7ef.scope: Deactivated successfully.
Oct 02 19:11:26 compute-0 sudo[216511]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 02 19:11:26 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 02 19:11:27 compute-0 ceph-mon[191910]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.234697946 +0000 UTC m=+0.067224505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.388737125 +0000 UTC m=+0.221263614 container create 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:11:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:27 compute-0 sudo[216808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adgnsirqgrupfjcpzkeuyazqbpkmgozh ; /usr/bin/python3'
Oct 02 19:11:27 compute-0 sudo[216808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:27 compute-0 systemd[1]: Started libpod-conmon-14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65.scope.
Oct 02 19:11:27 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.626891785 +0000 UTC m=+0.459418244 container init 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.639624303 +0000 UTC m=+0.472150762 container start 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:27 compute-0 compassionate_ishizaka[216814]: 167 167
Oct 02 19:11:27 compute-0 systemd[1]: libpod-14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65.scope: Deactivated successfully.
Oct 02 19:11:27 compute-0 conmon[216814]: conmon 14d4c4a178e777ff6c98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65.scope/container/memory.events
Oct 02 19:11:27 compute-0 python3[216810]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 19:11:27 compute-0 sudo[216808]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.763522721 +0000 UTC m=+0.596049210 container attach 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:27 compute-0 podman[216741]: 2025-10-02 19:11:27.763963673 +0000 UTC m=+0.596490132 container died 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:11:27 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 02 19:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4436c3341d9e9c7e154d9e53426aa83876c424c6435370045185bf30646adf29-merged.mount: Deactivated successfully.
Oct 02 19:11:27 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 02 19:11:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 02 19:11:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 02 19:11:28 compute-0 sudo[216900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfxkkpnygjcnreqbndscssnclmxsnnur ; /usr/bin/python3'
Oct 02 19:11:28 compute-0 sudo[216900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:28 compute-0 podman[216741]: 2025-10-02 19:11:28.030303562 +0000 UTC m=+0.862830021 container remove 14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:11:28 compute-0 systemd[1]: libpod-conmon-14d4c4a178e777ff6c9805ee2fc636593fcd7299989004642db55ae4dadabc65.scope: Deactivated successfully.
Oct 02 19:11:28 compute-0 python3[216902]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432287.173441-33780-56030623203890/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=66cc2c217983223fce7f84f2a8cd1b6a8771b9cc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
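
The ansible-ansible.legacy.copy call above records checksum=66cc2c217983223fce7f84f2a8cd1b6a8771b9cc for the deployed keyring; the preceding stat call requested checksum_algorithm=sha1, so that value is a plain SHA-1 of the file contents. A minimal sketch of the same check, assuming only that the keyring path from the log is readable:

    # Hedged sketch: reproduce the SHA-1 file checksum that ansible's
    # stat/copy modules log (checksum=66cc2c...). The path is the one
    # deployed above; run as a user that can read it.
    import hashlib

    def sha1_of(path: str, chunk: int = 65536) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    print(sha1_of("/etc/ceph/ceph.client.openstack.keyring"))
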
Oct 02 19:11:28 compute-0 ceph-mon[191910]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 19:11:28 compute-0 ceph-mon[191910]: Saving service mds.cephfs spec with placement compute-0
Oct 02 19:11:28 compute-0 ceph-mon[191910]: 3.14 scrub starts
Oct 02 19:11:28 compute-0 ceph-mon[191910]: 3.14 scrub ok
Oct 02 19:11:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Oct 02 19:11:28 compute-0 sudo[216900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Oct 02 19:11:28 compute-0 podman[216910]: 2025-10-02 19:11:28.244277211 +0000 UTC m=+0.043172237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:28 compute-0 podman[216910]: 2025-10-02 19:11:28.352089552 +0000 UTC m=+0.150984508 container create 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:28 compute-0 systemd[1]: Started libpod-conmon-600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4.scope.
Oct 02 19:11:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd897dfd32bb22163689e04e5337af62853ff2f7534f4642205daf2ec89f6dd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd897dfd32bb22163689e04e5337af62853ff2f7534f4642205daf2ec89f6dd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd897dfd32bb22163689e04e5337af62853ff2f7534f4642205daf2ec89f6dd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd897dfd32bb22163689e04e5337af62853ff2f7534f4642205daf2ec89f6dd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:28 compute-0 podman[216910]: 2025-10-02 19:11:28.600446183 +0000 UTC m=+0.399341209 container init 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:28 compute-0 podman[216910]: 2025-10-02 19:11:28.61993068 +0000 UTC m=+0.418825646 container start 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:11:28 compute-0 podman[216910]: 2025-10-02 19:11:28.631146878 +0000 UTC m=+0.430041904 container attach 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:28 compute-0 sudo[216978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnyowknwfrvkjyshbdhrliqkthozoxdf ; /usr/bin/python3'
Oct 02 19:11:28 compute-0 sudo[216978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:28 compute-0 python3[216980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
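
The task above runs a one-shot ceph container to import the openstack keyring into the cluster's auth database. A minimal subprocess sketch of the same invocation, with the image, fsid, and paths copied from the logged command (adjust them for any other cluster):

    # Hedged sketch: the same one-shot "ceph auth import" the play runs,
    # wrapped in subprocess. Image, fsid and paths come from the log line
    # above; this is an illustration, not the play's actual implementation.
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "auth", "import", "-i", "/etc/ceph/ceph.client.openstack.keyring",
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
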
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.03098683 +0000 UTC m=+0.096848401 container create 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:28.994699057 +0000 UTC m=+0.060560628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:29 compute-0 systemd[1]: Started libpod-conmon-1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452.scope.
Oct 02 19:11:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edfa34790a2da679a48842edd55b277a33612ff5916ba354ce15c44a675ebe0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edfa34790a2da679a48842edd55b277a33612ff5916ba354ce15c44a675ebe0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.165675125 +0000 UTC m=+0.231536706 container init 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.175789303 +0000 UTC m=+0.241650834 container start 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.180844157 +0000 UTC m=+0.246705708 container attach 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:11:29 compute-0 ceph-mon[191910]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 2.c scrub starts
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 2.c scrub ok
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 3.19 scrub starts
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 3.19 scrub ok
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 4.c deep-scrub starts
Oct 02 19:11:29 compute-0 ceph-mon[191910]: 4.c deep-scrub ok
Oct 02 19:11:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:29 compute-0 agitated_boyd[216950]: {
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_id": 1,
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "type": "bluestore"
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     },
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_id": 2,
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "type": "bluestore"
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     },
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_id": 0,
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:         "type": "bluestore"
Oct 02 19:11:29 compute-0 agitated_boyd[216950]:     }
Oct 02 19:11:29 compute-0 agitated_boyd[216950]: }
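
The JSON block above is the payload of the `ceph-volume ... raw list --format json` call dispatched at 19:11:26: a map of osd_uuid to device, osd_id, and store type for the node's three bluestore OSDs. A minimal sketch that summarizes such a payload, assuming it was captured to a hypothetical raw_list.json:

    # Hedged sketch: parse the "ceph-volume raw list --format json" output
    # shown above (assumed saved to raw_list.json) and print one line per OSD.
    import json

    with open("raw_list.json") as f:   # hypothetical capture of the output
        raw = json.load(f)

    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"type={info['type']}  fsid={info['ceph_fsid']}")
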
Oct 02 19:11:29 compute-0 podman[157186]: time="2025-10-02T19:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:11:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32242 "" "Go-http-client/1.1"
Oct 02 19:11:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6691 "" "Go-http-client/1.1"
Oct 02 19:11:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 02 19:11:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1726554353' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 19:11:29 compute-0 systemd[1]: libpod-600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4.scope: Deactivated successfully.
Oct 02 19:11:29 compute-0 systemd[1]: libpod-600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4.scope: Consumed 1.167s CPU time.
Oct 02 19:11:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1726554353' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 19:11:29 compute-0 systemd[1]: libpod-1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452.scope: Deactivated successfully.
Oct 02 19:11:29 compute-0 conmon[216996]: conmon 1e9eb9bae56a85a80347 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452.scope/container/memory.events
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.81325024 +0000 UTC m=+0.879111771 container died 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edfa34790a2da679a48842edd55b277a33612ff5916ba354ce15c44a675ebe0-merged.mount: Deactivated successfully.
Oct 02 19:11:29 compute-0 podman[216981]: 2025-10-02 19:11:29.873306504 +0000 UTC m=+0.939168035 container remove 1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452 (image=quay.io/ceph/ceph:v18, name=exciting_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:29 compute-0 systemd[1]: libpod-conmon-1e9eb9bae56a85a80347c4e8e9e84ac9c800865b0287501f548757ce8b53f452.scope: Deactivated successfully.
Oct 02 19:11:29 compute-0 podman[217048]: 2025-10-02 19:11:29.886293879 +0000 UTC m=+0.069446554 container died 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:29 compute-0 sudo[216978]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:29 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Oct 02 19:11:29 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Oct 02 19:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd897dfd32bb22163689e04e5337af62853ff2f7534f4642205daf2ec89f6dd4-merged.mount: Deactivated successfully.
Oct 02 19:11:29 compute-0 podman[217048]: 2025-10-02 19:11:29.96169781 +0000 UTC m=+0.144850495 container remove 600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:29 compute-0 systemd[1]: libpod-conmon-600c1d1fe6488ec6c6d1f76d0d5af271a2f64a4850187eddc1c97b8c0bbc1fd4.scope: Deactivated successfully.
Oct 02 19:11:30 compute-0 sudo[216659]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:30 compute-0 sudo[217073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:30 compute-0 sudo[217073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 sudo[217073]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 02 19:11:30 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 02 19:11:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1726554353' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 19:11:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1726554353' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 19:11:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:30 compute-0 ceph-mon[191910]: 4.15 scrub starts
Oct 02 19:11:30 compute-0 ceph-mon[191910]: 4.15 scrub ok
Oct 02 19:11:30 compute-0 sudo[217098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:11:30 compute-0 sudo[217098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 sudo[217098]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 sudo[217123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:30 compute-0 sudo[217123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 sudo[217123]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 sudo[217171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmvtwugdpntvdblzpnmhgqvwyfdjnwk ; /usr/bin/python3'
Oct 02 19:11:30 compute-0 sudo[217171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:30 compute-0 sudo[217172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:30 compute-0 sudo[217172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 sudo[217172]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 sudo[217199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:30 compute-0 sudo[217199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 sudo[217199]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 python3[217179]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
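
The shell pipeline above extracts `.monmap.num_mons` from the status JSON with jq. The same extraction can be done without jq by reading the JSON document from stdin; a sketch (pipe the podman command's output into it):

    # Hedged sketch: the jq step (".monmap.num_mons") done in Python,
    # reading the "ceph status --format json" document from stdin.
    import json
    import sys

    status = json.load(sys.stdin)
    print(status["monmap"]["num_mons"])  # 1 for this single-mon cluster
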
Oct 02 19:11:30 compute-0 sudo[217224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:11:30 compute-0 sudo[217224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:30 compute-0 podman[217233]: 2025-10-02 19:11:30.752972341 +0000 UTC m=+0.071319124 container create 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:30 compute-0 podman[217233]: 2025-10-02 19:11:30.71752063 +0000 UTC m=+0.035867393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:30 compute-0 systemd[1]: Started libpod-conmon-199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf.scope.
Oct 02 19:11:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93dca112b2ce5a8eb39337b617e773369b2837b583a2d99dfc106b3bff4698b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93dca112b2ce5a8eb39337b617e773369b2837b583a2d99dfc106b3bff4698b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:30 compute-0 podman[217233]: 2025-10-02 19:11:30.886350631 +0000 UTC m=+0.204697394 container init 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:30 compute-0 podman[217233]: 2025-10-02 19:11:30.894888138 +0000 UTC m=+0.213234881 container start 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:30 compute-0 podman[217233]: 2025-10-02 19:11:30.903020403 +0000 UTC m=+0.221367186 container attach 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:30 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 02 19:11:30 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 02 19:11:31 compute-0 ceph-mon[191910]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:31 compute-0 ceph-mon[191910]: 2.e deep-scrub starts
Oct 02 19:11:31 compute-0 ceph-mon[191910]: 2.e deep-scrub ok
Oct 02 19:11:31 compute-0 openstack_network_exporter[159337]: ERROR   19:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:11:31 compute-0 openstack_network_exporter[159337]: ERROR   19:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:11:31 compute-0 openstack_network_exporter[159337]: ERROR   19:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:11:31 compute-0 openstack_network_exporter[159337]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:11:31 compute-0 openstack_network_exporter[159337]: ERROR   19:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:11:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:31 compute-0 podman[217358]: 2025-10-02 19:11:31.467788002 +0000 UTC m=+0.096303097 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:11:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 19:11:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277210550' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:11:31 compute-0 dazzling_perlman[217267]: {"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":197,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1759432236,"num_in_osds":3,"osd_in_since":1759432202,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84107264,"bytes_avail":64327819264,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-10-02T19:11:25.431804+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
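
The status payload above reports HEALTH_ERR with two checks, MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX — expected at this point, since the mds.cephfs service was only scheduled at 19:11:26 and no MDS daemon is up yet. A minimal sketch that flattens health.checks from such a payload, assuming it was captured to a hypothetical status.json:

    # Hedged sketch: summarize the "health.checks" map from the status JSON
    # shown above (assumed saved to status.json), one line per check.
    import json

    with open("status.json") as f:     # hypothetical capture of the blob
        status = json.load(f)

    print("overall:", status["health"]["status"])
    for name, check in status["health"]["checks"].items():
        s = check["summary"]
        print(f"{check['severity']:11s} {name}: {s['message']} (count={s['count']})")
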
Oct 02 19:11:31 compute-0 podman[217358]: 2025-10-02 19:11:31.583114723 +0000 UTC m=+0.211629828 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:31 compute-0 systemd[1]: libpod-199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf.scope: Deactivated successfully.
Oct 02 19:11:31 compute-0 podman[217233]: 2025-10-02 19:11:31.584654644 +0000 UTC m=+0.903001407 container died 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93dca112b2ce5a8eb39337b617e773369b2837b583a2d99dfc106b3bff4698b-merged.mount: Deactivated successfully.
Oct 02 19:11:31 compute-0 podman[217233]: 2025-10-02 19:11:31.674838427 +0000 UTC m=+0.993185170 container remove 199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf (image=quay.io/ceph/ceph:v18, name=dazzling_perlman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:31 compute-0 systemd[1]: libpod-conmon-199f7b897066bfc33713e8dfd531193acd47c93b37308320e704c10add758aaf.scope: Deactivated successfully.
Oct 02 19:11:31 compute-0 sudo[217171]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:31 compute-0 sudo[217469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbtydlcjhxosjtsixqmeulreozsqovo ; /usr/bin/python3'
Oct 02 19:11:31 compute-0 sudo[217469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 02 19:11:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 02 19:11:32 compute-0 python3[217475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.140597058 +0000 UTC m=+0.048543789 container create c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:32 compute-0 systemd[1]: Started libpod-conmon-c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b.scope.
Oct 02 19:11:32 compute-0 sudo[217224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.123202286 +0000 UTC m=+0.031149037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7581a401a82d2313b55cfbb81e946ab227704cae752671d81b746c39bbc0d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7581a401a82d2313b55cfbb81e946ab227704cae752671d81b746c39bbc0d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.250270689 +0000 UTC m=+0.158217440 container init c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.261736873 +0000 UTC m=+0.169683614 container start c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.267678711 +0000 UTC m=+0.175625542 container attach c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:11:32 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 46091fc1-512d-412d-9e64-b9fb58fa4164 does not exist
Oct 02 19:11:32 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9fe46e92-1d1a-49c4-8462-ee8c86dd77e2 does not exist
Oct 02 19:11:32 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a143166b-1a5e-43ae-b3ce-f783a4da475a does not exist
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: 2.10 scrub starts
Oct 02 19:11:32 compute-0 ceph-mon[191910]: 2.10 scrub ok
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4277210550' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:32 compute-0 sudo[217533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:32 compute-0 sudo[217533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:32 compute-0 sudo[217533]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:32 compute-0 sudo[217558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:32 compute-0 sudo[217558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:32 compute-0 sudo[217558]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:32 compute-0 sudo[217583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:32 compute-0 sudo[217583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:32 compute-0 sudo[217583]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:32 compute-0 sudo[217609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:11:32 compute-0 sudo[217609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725158383' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:11:32 compute-0 jovial_brahmagupta[217529]: 
Oct 02 19:11:32 compute-0 jovial_brahmagupta[217529]: {"epoch":1,"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","modified":"2025-10-02T19:08:05.866707Z","created":"2025-10-02T19:08:05.866707Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 02 19:11:32 compute-0 jovial_brahmagupta[217529]: dumped monmap epoch 1
Oct 02 19:11:32 compute-0 systemd[1]: libpod-c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b.scope: Deactivated successfully.
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.943483056 +0000 UTC m=+0.851429787 container died c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7581a401a82d2313b55cfbb81e946ab227704cae752671d81b746c39bbc0d2-merged.mount: Deactivated successfully.
Oct 02 19:11:32 compute-0 podman[217502]: 2025-10-02 19:11:32.999570835 +0000 UTC m=+0.907517566 container remove c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:11:33 compute-0 systemd[1]: libpod-conmon-c8808c08479a9026849b525ac8b261a848d8484341b9540c3bd6bf50b510df3b.scope: Deactivated successfully.
Oct 02 19:11:33 compute-0 sudo[217469]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.127059009 +0000 UTC m=+0.062431218 container create d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:11:33 compute-0 systemd[1]: Started libpod-conmon-d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2.scope.
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.101840679 +0000 UTC m=+0.037212898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.236837822 +0000 UTC m=+0.172210121 container init d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.251697067 +0000 UTC m=+0.187069266 container start d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:33 compute-0 nostalgic_colden[217720]: 167 167
Oct 02 19:11:33 compute-0 systemd[1]: libpod-d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2.scope: Deactivated successfully.
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.257166292 +0000 UTC m=+0.192538571 container attach d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.25783107 +0000 UTC m=+0.193203319 container died d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0120b31a23fb628390875f2ad088f4c2e5d67af258c8ec35eb7ed1d56a0494f6-merged.mount: Deactivated successfully.
Oct 02 19:11:33 compute-0 podman[217704]: 2025-10-02 19:11:33.316020954 +0000 UTC m=+0.251393163 container remove d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:11:33 compute-0 systemd[1]: libpod-conmon-d63c5a9df3b8eaf354082a54f64cf78423416056858806a5466a350ac03b6df2.scope: Deactivated successfully.
Oct 02 19:11:33 compute-0 ceph-mon[191910]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:33 compute-0 ceph-mon[191910]: 3.1a scrub starts
Oct 02 19:11:33 compute-0 ceph-mon[191910]: 3.1a scrub ok
Oct 02 19:11:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/725158383' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:33 compute-0 sudo[217773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsmuibjnoawkainkrxpurcswgpcnrdqm ; /usr/bin/python3'
Oct 02 19:11:33 compute-0 sudo[217773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:33 compute-0 podman[217759]: 2025-10-02 19:11:33.583794051 +0000 UTC m=+0.093824761 container create babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:11:33 compute-0 podman[217759]: 2025-10-02 19:11:33.551050652 +0000 UTC m=+0.061081362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:33 compute-0 systemd[1]: Started libpod-conmon-babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47.scope.
Oct 02 19:11:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 podman[217759]: 2025-10-02 19:11:33.72317682 +0000 UTC m=+0.233207460 container init babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:11:33 compute-0 podman[217759]: 2025-10-02 19:11:33.741857606 +0000 UTC m=+0.251888266 container start babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:11:33 compute-0 podman[217759]: 2025-10-02 19:11:33.750198817 +0000 UTC m=+0.260229467 container attach babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:11:33 compute-0 python3[217779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:33 compute-0 podman[217788]: 2025-10-02 19:11:33.856603681 +0000 UTC m=+0.075678350 container create 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:33 compute-0 podman[217788]: 2025-10-02 19:11:33.820366219 +0000 UTC m=+0.039440938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:33 compute-0 systemd[1]: Started libpod-conmon-39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac.scope.
Oct 02 19:11:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2b4f6ce4fda4678c2a062b2a3fdc278a166e2d2926a2461b25e8743452eda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a2b4f6ce4fda4678c2a062b2a3fdc278a166e2d2926a2461b25e8743452eda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 02 19:11:33 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 02 19:11:34 compute-0 podman[217788]: 2025-10-02 19:11:34.034266936 +0000 UTC m=+0.253341625 container init 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:11:34 compute-0 podman[217788]: 2025-10-02 19:11:34.04271478 +0000 UTC m=+0.261789459 container start 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:34 compute-0 podman[217788]: 2025-10-02 19:11:34.047890108 +0000 UTC m=+0.266964787 container attach 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:11:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:34 compute-0 ceph-mon[191910]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 02 19:11:34 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2172987800' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 19:11:34 compute-0 inspiring_greider[217801]: [client.openstack]
Oct 02 19:11:34 compute-0 inspiring_greider[217801]:         key = AQBszd5oAAAAABAAYG5TvDrd17sIUtWIgmz5JA==
Oct 02 19:11:34 compute-0 inspiring_greider[217801]:         caps mgr = "allow *"
Oct 02 19:11:34 compute-0 inspiring_greider[217801]:         caps mon = "profile rbd"
Oct 02 19:11:34 compute-0 inspiring_greider[217801]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 02 19:11:34 compute-0 systemd[1]: libpod-39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac.scope: Deactivated successfully.
Oct 02 19:11:34 compute-0 podman[217788]: 2025-10-02 19:11:34.700617201 +0000 UTC m=+0.919691870 container died 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-87a2b4f6ce4fda4678c2a062b2a3fdc278a166e2d2926a2461b25e8743452eda-merged.mount: Deactivated successfully.
Oct 02 19:11:34 compute-0 podman[217788]: 2025-10-02 19:11:34.761520248 +0000 UTC m=+0.980594917 container remove 39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac (image=quay.io/ceph/ceph:v18, name=inspiring_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:34 compute-0 systemd[1]: libpod-conmon-39575ecedaa9fd396ec78e83336382f67200bed5b4350a70c264d6a3756181ac.scope: Deactivated successfully.
Oct 02 19:11:34 compute-0 sudo[217773]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:34 compute-0 vigorous_diffie[217783]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:11:34 compute-0 vigorous_diffie[217783]: --> relative data size: 1.0
Oct 02 19:11:34 compute-0 vigorous_diffie[217783]: --> All data devices are unavailable
Oct 02 19:11:34 compute-0 systemd[1]: libpod-babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47.scope: Deactivated successfully.
Oct 02 19:11:34 compute-0 systemd[1]: libpod-babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47.scope: Consumed 1.092s CPU time.
Oct 02 19:11:34 compute-0 podman[217759]: 2025-10-02 19:11:34.894306172 +0000 UTC m=+1.404336822 container died babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 02 19:11:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 02 19:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b615bf8cae2a88e8b24f2218cd6668653369942ba3a5bc9897ec994fb102fa6-merged.mount: Deactivated successfully.
Oct 02 19:11:34 compute-0 podman[217759]: 2025-10-02 19:11:34.987795243 +0000 UTC m=+1.497825863 container remove babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_diffie, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:11:35 compute-0 systemd[1]: libpod-conmon-babe4c9181eaa9425d93343f74f0331a1c0ffba7f985a4a1226f3a620d154a47.scope: Deactivated successfully.
Oct 02 19:11:35 compute-0 sudo[217609]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:35 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 02 19:11:35 compute-0 sudo[217876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:35 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 02 19:11:35 compute-0 sudo[217876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:35 compute-0 sudo[217876]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:35 compute-0 sudo[217901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:35 compute-0 sudo[217901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:35 compute-0 sudo[217901]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:35 compute-0 ceph-mon[191910]: 3.1c scrub starts
Oct 02 19:11:35 compute-0 ceph-mon[191910]: 3.1c scrub ok
Oct 02 19:11:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2172987800' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 19:11:35 compute-0 ceph-mon[191910]: 4.16 scrub starts
Oct 02 19:11:35 compute-0 ceph-mon[191910]: 4.16 scrub ok
Oct 02 19:11:35 compute-0 sudo[217926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:35 compute-0 sudo[217926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:35 compute-0 sudo[217926]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:35 compute-0 sudo[217951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:11:35 compute-0 sudo[217951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:35 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 02 19:11:35 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 02 19:11:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.098628345 +0000 UTC m=+0.060082316 container create d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:11:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 02 19:11:36 compute-0 systemd[1]: Started libpod-conmon-d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a.scope.
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.07021176 +0000 UTC m=+0.031665761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.206339113 +0000 UTC m=+0.167793094 container init d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.216646217 +0000 UTC m=+0.178100188 container start d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:11:36 compute-0 elegant_galileo[218147]: 167 167
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.222334758 +0000 UTC m=+0.183788719 container attach d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:11:36 compute-0 systemd[1]: libpod-d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a.scope: Deactivated successfully.
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.224968078 +0000 UTC m=+0.186422039 container died d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b31fadc642ae7adabfcfce4c94aa36f788273ceb9224b3074bd0b2ec9835262-merged.mount: Deactivated successfully.
Oct 02 19:11:36 compute-0 podman[218129]: 2025-10-02 19:11:36.25405244 +0000 UTC m=+0.097178571 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:11:36 compute-0 podman[218089]: 2025-10-02 19:11:36.270323201 +0000 UTC m=+0.231777182 container remove d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galileo, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:36 compute-0 podman[218136]: 2025-10-02 19:11:36.280034509 +0000 UTC m=+0.117098289 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:11:36 compute-0 systemd[1]: libpod-conmon-d23b9e526cb6bf4d3b01ad4220d764ba679ed8d8746618edeba8e6c56e10bc5a.scope: Deactivated successfully.
Oct 02 19:11:36 compute-0 sudo[218235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwgfyelslcpbnmwkkmmkywjbemuiikfp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432295.7907472-33852-223584141803995/async_wrapper.py j293051734803 30 /home/zuul/.ansible/tmp/ansible-tmp-1759432295.7907472-33852-223584141803995/AnsiballZ_command.py _'
Oct 02 19:11:36 compute-0 sudo[218235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:36 compute-0 ceph-mon[191910]: 7.7 scrub starts
Oct 02 19:11:36 compute-0 ceph-mon[191910]: 7.7 scrub ok
Oct 02 19:11:36 compute-0 ceph-mon[191910]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:36 compute-0 ceph-mon[191910]: 4.17 scrub starts
Oct 02 19:11:36 compute-0 ceph-mon[191910]: 4.17 scrub ok
Oct 02 19:11:36 compute-0 podman[218243]: 2025-10-02 19:11:36.475320941 +0000 UTC m=+0.060078944 container create 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:11:36 compute-0 ansible-async_wrapper.py[218237]: Invoked with j293051734803 30 /home/zuul/.ansible/tmp/ansible-tmp-1759432295.7907472-33852-223584141803995/AnsiballZ_command.py _
Oct 02 19:11:36 compute-0 ansible-async_wrapper.py[218259]: Starting module and watcher
Oct 02 19:11:36 compute-0 ansible-async_wrapper.py[218259]: Start watching 218260 (30)
Oct 02 19:11:36 compute-0 ansible-async_wrapper.py[218260]: Start module (218260)
Oct 02 19:11:36 compute-0 ansible-async_wrapper.py[218237]: Return async_wrapper task started.
Oct 02 19:11:36 compute-0 sudo[218235]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:36 compute-0 podman[218243]: 2025-10-02 19:11:36.453258566 +0000 UTC m=+0.038016569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:36 compute-0 systemd[1]: Started libpod-conmon-9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce.scope.
Oct 02 19:11:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2620a61f0c5f07c70595fa2f91009e7c002ef5b9ac4f0adb4144827af6b9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2620a61f0c5f07c70595fa2f91009e7c002ef5b9ac4f0adb4144827af6b9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2620a61f0c5f07c70595fa2f91009e7c002ef5b9ac4f0adb4144827af6b9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf2620a61f0c5f07c70595fa2f91009e7c002ef5b9ac4f0adb4144827af6b9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 podman[218243]: 2025-10-02 19:11:36.612827481 +0000 UTC m=+0.197585484 container init 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:36 compute-0 podman[218243]: 2025-10-02 19:11:36.634679411 +0000 UTC m=+0.219437444 container start 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:11:36 compute-0 podman[218243]: 2025-10-02 19:11:36.641142702 +0000 UTC m=+0.225900735 container attach 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:36 compute-0 python3[218261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:36 compute-0 podman[218269]: 2025-10-02 19:11:36.814423291 +0000 UTC m=+0.076185103 container create 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:36 compute-0 systemd[1]: Started libpod-conmon-535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac.scope.
Oct 02 19:11:36 compute-0 podman[218269]: 2025-10-02 19:11:36.782932115 +0000 UTC m=+0.044693977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5a56155399521591bfe90a6dc6dd326b54a5c554d1d424d2cbc3eda830bf6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5a56155399521591bfe90a6dc6dd326b54a5c554d1d424d2cbc3eda830bf6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:36 compute-0 podman[218269]: 2025-10-02 19:11:36.971072678 +0000 UTC m=+0.232834520 container init 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:36 compute-0 podman[218269]: 2025-10-02 19:11:36.986529789 +0000 UTC m=+0.248291581 container start 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:11:36 compute-0 podman[218269]: 2025-10-02 19:11:36.993110473 +0000 UTC m=+0.254872265 container attach 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
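
The create/init/start/attach sequence above is the podman side of the Ansible task logged at 19:11:36: a one-shot `ceph orch status --format json` run inside the quay.io/ceph/ceph:v18 image. A minimal Python sketch of the same probe, assuming podman and the admin keyring are in place on the host (the wrapper function name is illustrative, the command itself mirrors the logged invocation):

    import json
    import subprocess

    IMAGE = "quay.io/ceph/ceph:v18"
    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"  # cluster fsid from the log

    def orch_status() -> dict:
        # Mirrors the logged command: a throwaway container whose entrypoint
        # is the ceph CLI, talking to the cluster over the host network.
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "orch", "status", "--format", "json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)
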
Oct 02 19:11:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:37 compute-0 ceph-mon[191910]: 7.b scrub starts
Oct 02 19:11:37 compute-0 ceph-mon[191910]: 7.b scrub ok
Oct 02 19:11:37 compute-0 mystifying_galois[218264]: {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     "0": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "devices": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "/dev/loop3"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             ],
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_name": "ceph_lv0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_size": "21470642176",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "name": "ceph_lv0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "tags": {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.crush_device_class": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.encrypted": "0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_id": "0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.vdo": "0"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             },
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "vg_name": "ceph_vg0"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         }
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     ],
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     "1": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "devices": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "/dev/loop4"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             ],
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_name": "ceph_lv1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_size": "21470642176",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "name": "ceph_lv1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "tags": {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.crush_device_class": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.encrypted": "0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_id": "1",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.vdo": "0"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             },
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "vg_name": "ceph_vg1"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         }
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     ],
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     "2": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "devices": [
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "/dev/loop5"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             ],
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_name": "ceph_lv2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_size": "21470642176",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "name": "ceph_lv2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "tags": {
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.crush_device_class": "",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.encrypted": "0",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osd_id": "2",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:                 "ceph.vdo": "0"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             },
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "type": "block",
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:             "vg_name": "ceph_vg2"
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:         }
Oct 02 19:11:37 compute-0 mystifying_galois[218264]:     ]
Oct 02 19:11:37 compute-0 mystifying_galois[218264]: }
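
The JSON block printed by mystifying_galois is a ceph-volume LVM inventory: one key per OSD id, each carrying the logical volume, its backing loop device, and the ceph.* LV tags. A sketch of flattening it into an osd_id -> (lv_path, devices, osd_fsid) map, assuming the output has been captured and parsed into a dict (helper name illustrative):

    def osd_map(report: dict) -> dict:
        # report: {"0": [lv_record, ...], "1": [...], ...} as printed above.
        result = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                result[int(osd_id)] = (
                    lv["lv_path"],          # e.g. /dev/ceph_vg0/ceph_lv0
                    lv["devices"],          # e.g. ["/dev/loop3"]
                    tags.get("ceph.osd_fsid"),
                )
        return result
    # e.g. {0: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"], "dbf9fafa-...")}
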
Oct 02 19:11:37 compute-0 systemd[1]: libpod-9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce.scope: Deactivated successfully.
Oct 02 19:11:37 compute-0 podman[218243]: 2025-10-02 19:11:37.512139809 +0000 UTC m=+1.096897832 container died 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf2620a61f0c5f07c70595fa2f91009e7c002ef5b9ac4f0adb4144827af6b9c-merged.mount: Deactivated successfully.
Oct 02 19:11:37 compute-0 podman[218243]: 2025-10-02 19:11:37.584332655 +0000 UTC m=+1.169090658 container remove 9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_galois, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:11:37 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:37 compute-0 inspiring_euclid[218284]: 
Oct 02 19:11:37 compute-0 inspiring_euclid[218284]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
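
inspiring_euclid's single output line is the `orch status` answer: the cephadm backend is reachable, unpaused, and running 10 workers. A gating check in the spirit of what the playbook presumably does with this result might look like the following sketch (the retry budget and the injected `fetch` callable are assumptions):

    import time

    def wait_for_orch(fetch, attempts: int = 5, delay: float = 2.0) -> dict:
        # `fetch` is any callable returning the parsed `orch status` JSON,
        # e.g. the orch_status() sketch shown earlier.
        for _ in range(attempts):
            status = fetch()
            if status.get("available") and not status.get("paused"):
                return status
            time.sleep(delay)
        raise RuntimeError("cephadm orchestrator not available")
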
Oct 02 19:11:37 compute-0 systemd[1]: libpod-conmon-9a1129a9c3da0e8501941fd51f07946f5df6b5bc9c4fdbc70c5b983a64e2ebce.scope: Deactivated successfully.
Oct 02 19:11:37 compute-0 systemd[1]: libpod-535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac.scope: Deactivated successfully.
Oct 02 19:11:37 compute-0 podman[218269]: 2025-10-02 19:11:37.618939603 +0000 UTC m=+0.880701375 container died 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:11:37 compute-0 sudo[217951]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb5a56155399521591bfe90a6dc6dd326b54a5c554d1d424d2cbc3eda830bf6e-merged.mount: Deactivated successfully.
Oct 02 19:11:37 compute-0 podman[218269]: 2025-10-02 19:11:37.709755023 +0000 UTC m=+0.971516785 container remove 535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac (image=quay.io/ceph/ceph:v18, name=inspiring_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:11:37 compute-0 systemd[1]: libpod-conmon-535bf250c66aed8b45ca20c76a188088d2d2921e5cf8c1dede1a371842c0f7ac.scope: Deactivated successfully.
Oct 02 19:11:37 compute-0 ansible-async_wrapper.py[218260]: Module complete (218260)
Oct 02 19:11:37 compute-0 sudo[218356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:37 compute-0 sudo[218356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:37 compute-0 sudo[218356]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 sudo[218405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meuaikomfauiacjptkfpvjxnlssppwml ; /usr/bin/python3'
Oct 02 19:11:37 compute-0 sudo[218405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:37 compute-0 sudo[218407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:37 compute-0 sudo[218407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:37 compute-0 sudo[218407]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 sudo[218434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:37 compute-0 sudo[218434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:37 compute-0 sudo[218434]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 python3[218412]: ansible-ansible.legacy.async_status Invoked with jid=j293051734803.218237 mode=status _async_dir=/root/.ansible_async
Oct 02 19:11:37 compute-0 sudo[218405]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 02 19:11:37 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
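
Throughout this window the OSDs and the mon emit paired "<pgid> scrub starts" / "scrub ok" (and deep-scrub) debug lines as placement groups are verified. A toy accounting pass over raw journal text that flags scrubs which started but never reported ok; the regex is a simplification of the real log grammar:

    import re

    def unfinished_scrubs(journal: str) -> set:
        # Pair "<pgid> [deep-]scrub starts" with the matching "ok".
        pending = set()
        pattern = r"(\d+\.[0-9a-f]+) (deep-scrub|scrub) (starts|ok)"
        for pgid, kind, state in re.findall(pattern, journal):
            if state == "starts":
                pending.add((pgid, kind))
            else:
                pending.discard((pgid, kind))
        return pending  # empty for a healthy run like the one shown here
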
Oct 02 19:11:38 compute-0 sudo[218459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:11:38 compute-0 sudo[218459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 02 19:11:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 02 19:11:38 compute-0 sudo[218530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvooswfkjnzwabkmiuolplfkdbtqypkp ; /usr/bin/python3'
Oct 02 19:11:38 compute-0 sudo[218530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:38 compute-0 python3[218533]: ansible-ansible.legacy.async_status Invoked with jid=j293051734803.218237 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 19:11:38 compute-0 sudo[218530]: pam_unix(sudo:session): session closed for user root
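
The lines between 19:11:37 and 19:11:38 show Ansible's async pattern end to end: async_wrapper reports "Module complete", async_status is invoked with mode=status to read the result, then again with mode=cleanup to delete it. Under the hood the wrapper writes a JSON results file named after the job id in _async_dir; a rough sketch of the status read, with the path and jid copied from the log (treat the file layout as an assumption, not a stable Ansible interface):

    import json
    import os

    jid = "j293051734803.218237"
    path = os.path.join("/root/.ansible_async", jid)

    def job_state(p: str):
        # None: file absent (not started, or already cleaned up);
        # True/False: whether the async module reported "finished".
        if not os.path.exists(p):
            return None
        with open(p) as fh:
            return json.load(fh).get("finished") == 1
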
Oct 02 19:11:38 compute-0 ceph-mon[191910]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:38 compute-0 ceph-mon[191910]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:38 compute-0 ceph-mon[191910]: 4.19 scrub starts
Oct 02 19:11:38 compute-0 ceph-mon[191910]: 4.19 scrub ok
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.527295861 +0000 UTC m=+0.075734561 container create 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:38 compute-0 systemd[1]: Started libpod-conmon-0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1.scope.
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.495735143 +0000 UTC m=+0.044173913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.640912756 +0000 UTC m=+0.189351496 container init 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.660249769 +0000 UTC m=+0.208688489 container start 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:38 compute-0 vigilant_herschel[218587]: 167 167
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.668986571 +0000 UTC m=+0.217425381 container attach 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:38 compute-0 systemd[1]: libpod-0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1.scope: Deactivated successfully.
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.670753038 +0000 UTC m=+0.219191738 container died 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d35a208eb35905a1e962872de25a4236e5d6c6d46fea2be2eb386f70478c4bb-merged.mount: Deactivated successfully.
Oct 02 19:11:38 compute-0 podman[218571]: 2025-10-02 19:11:38.731018178 +0000 UTC m=+0.279456868 container remove 0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_herschel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:38 compute-0 systemd[1]: libpod-conmon-0be1fae3e6c1eae86de486cb77c452c14dc57f3e42b343eed0c88ac67a55e2b1.scope: Deactivated successfully.
Oct 02 19:11:38 compute-0 sudo[218625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricdjoetahihwnzenrcuncqmvpbyxtmd ; /usr/bin/python3'
Oct 02 19:11:38 compute-0 sudo[218625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:38 compute-0 python3[218629]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:38 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct 02 19:11:38 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct 02 19:11:38 compute-0 podman[218635]: 2025-10-02 19:11:38.939706396 +0000 UTC m=+0.067597625 container create de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:39 compute-0 systemd[1]: Started libpod-conmon-de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d.scope.
Oct 02 19:11:39 compute-0 podman[218635]: 2025-10-02 19:11:38.911702673 +0000 UTC m=+0.039593932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.02272574 +0000 UTC m=+0.084466333 container create 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed4b38ac438a27d678236c747de6dc55b9415b56eced969f17941a032a4a32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed4b38ac438a27d678236c747de6dc55b9415b56eced969f17941a032a4a32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed4b38ac438a27d678236c747de6dc55b9415b56eced969f17941a032a4a32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed4b38ac438a27d678236c747de6dc55b9415b56eced969f17941a032a4a32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 systemd[1]: Started libpod-conmon-63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1.scope.
Oct 02 19:11:39 compute-0 podman[218635]: 2025-10-02 19:11:39.074890654 +0000 UTC m=+0.202781913 container init de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09cb46419f196b35ebbf50b4a8d3102f8f258e7e4cc805ba89915b2db6bebcf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09cb46419f196b35ebbf50b4a8d3102f8f258e7e4cc805ba89915b2db6bebcf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:39 compute-0 podman[218635]: 2025-10-02 19:11:39.089025019 +0000 UTC m=+0.216916258 container start de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:38.998136427 +0000 UTC m=+0.059877040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:39 compute-0 podman[218635]: 2025-10-02 19:11:39.093302403 +0000 UTC m=+0.221193682 container attach de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 19:11:39 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.10561845 +0000 UTC m=+0.167359083 container init 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:11:39 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.123521865 +0000 UTC m=+0.185262468 container start 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.129800931 +0000 UTC m=+0.191541534 container attach 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:11:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:39 compute-0 ceph-mon[191910]: 2.12 deep-scrub starts
Oct 02 19:11:39 compute-0 ceph-mon[191910]: 2.12 deep-scrub ok
Oct 02 19:11:39 compute-0 ceph-mon[191910]: 4.1d scrub starts
Oct 02 19:11:39 compute-0 ceph-mon[191910]: 4.1d scrub ok
Oct 02 19:11:39 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:39 compute-0 epic_tu[218668]: 
Oct 02 19:11:39 compute-0 epic_tu[218668]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 02 19:11:39 compute-0 systemd[1]: libpod-63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1.scope: Deactivated successfully.
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.704916125 +0000 UTC m=+0.766656748 container died 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d09cb46419f196b35ebbf50b4a8d3102f8f258e7e4cc805ba89915b2db6bebcf-merged.mount: Deactivated successfully.
Oct 02 19:11:39 compute-0 podman[218647]: 2025-10-02 19:11:39.837126484 +0000 UTC m=+0.898867077 container remove 63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1 (image=quay.io/ceph/ceph:v18, name=epic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:11:39 compute-0 systemd[1]: libpod-conmon-63d97e97c93ccc0f93a59378683b65801490c82f3bd18b510907e1aa900db8a1.scope: Deactivated successfully.
Oct 02 19:11:39 compute-0 sudo[218625]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:39 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 02 19:11:39 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 02 19:11:40 compute-0 objective_jackson[218663]: {
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_id": 1,
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "type": "bluestore"
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     },
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_id": 2,
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "type": "bluestore"
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     },
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_id": 0,
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:40 compute-0 objective_jackson[218663]:         "type": "bluestore"
Oct 02 19:11:40 compute-0 objective_jackson[218663]:     }
Oct 02 19:11:40 compute-0 objective_jackson[218663]: }
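
objective_jackson is the `ceph-volume raw list --format json` run requested by cephadm at 19:11:38; unlike the LVM report it is keyed by osd_uuid and resolves each OSD to its device-mapper path. Cross-checking the two views is straightforward; a sketch, with both dicts assumed parsed from the outputs above:

    def reconcile(lvm_report: dict, raw_report: dict) -> list:
        # lvm_report: osd_id -> [LV records] (mystifying_galois output);
        # raw_report: osd_uuid -> record with "osd_id" (objective_jackson output).
        mismatches = []
        for osd_id, lvs in lvm_report.items():
            fsid = lvs[0]["tags"]["ceph.osd_fsid"]
            entry = raw_report.get(fsid)
            if entry is None or entry["osd_id"] != int(osd_id):
                mismatches.append((osd_id, fsid))
        return mismatches  # empty when both inventories agree
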
Oct 02 19:11:40 compute-0 systemd[1]: libpod-de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d.scope: Deactivated successfully.
Oct 02 19:11:40 compute-0 systemd[1]: libpod-de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d.scope: Consumed 1.032s CPU time.
Oct 02 19:11:40 compute-0 podman[218736]: 2025-10-02 19:11:40.204512453 +0000 UTC m=+0.048551389 container died de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed4b38ac438a27d678236c747de6dc55b9415b56eced969f17941a032a4a32f-merged.mount: Deactivated successfully.
Oct 02 19:11:40 compute-0 podman[218736]: 2025-10-02 19:11:40.276008121 +0000 UTC m=+0.120047047 container remove de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:11:40 compute-0 systemd[1]: libpod-conmon-de18e5f497132032d0bac2ecf4a4528fd07b2dd4c92cebef7cf1ce76a3df702d.scope: Deactivated successfully.
Oct 02 19:11:40 compute-0 podman[218737]: 2025-10-02 19:11:40.306186242 +0000 UTC m=+0.119743349 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64)
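
The health_status event above comes from podman's periodic healthcheck for the openstack_network_exporter container (the check command and mounts are embedded in its config_data). The same check can be exercised on demand; a small sketch, with the container name taken from the log:

    import subprocess

    def is_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured check
        # and exits 0 on success, matching health_status=healthy above.
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True,
        ).returncode == 0

    print(is_healthy("openstack_network_exporter"))
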
Oct 02 19:11:40 compute-0 sudo[218459]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev ab56eb2c-6698-4dbd-b582-62789dae45d8 (Updating rgw.rgw deployment (+1 -> 1))
Oct 02 19:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fkvtrm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fkvtrm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fkvtrm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
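
The dispatch/finished pair above is cephadm minting a keyring for the new RGW daemon via `auth get-or-create`, with mon/mgr/osd caps scoped to rgw. The same request can be issued from Python through the rados bindings' mon_command; a sketch with the entity and caps copied from the audit line (connection details are assumptions):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = {
            "prefix": "auth get-or-create",
            "entity": "client.rgw.rgw.compute-0.fkvtrm",
            "caps": ["mon", "allow *",
                     "mgr", "allow rw",
                     "osd", "allow rwx tag rgw *=*"],
        }
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(out.decode())  # the keyring handed to the new daemon
    finally:
        cluster.shutdown()
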
Oct 02 19:11:40 compute-0 podman[218761]: 2025-10-02 19:11:40.362078135 +0000 UTC m=+0.107578846 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:40 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.fkvtrm on compute-0
Oct 02 19:11:40 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.fkvtrm on compute-0
Oct 02 19:11:40 compute-0 sudo[218792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:40 compute-0 sudo[218792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:40 compute-0 sudo[218792]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:40 compute-0 ceph-mon[191910]: 2.14 deep-scrub starts
Oct 02 19:11:40 compute-0 ceph-mon[191910]: 2.14 deep-scrub ok
Oct 02 19:11:40 compute-0 ceph-mon[191910]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fkvtrm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fkvtrm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:40 compute-0 sudo[218817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:40 compute-0 sudo[218817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:40 compute-0 sudo[218817]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:40 compute-0 sudo[218845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:40 compute-0 sudo[218845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:40 compute-0 sudo[218845]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:40 compute-0 sudo[218888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfljcniabyynubebaacqnlcndykqqlrb ; /usr/bin/python3'
Oct 02 19:11:40 compute-0 sudo[218888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:40 compute-0 sudo[218893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:11:40 compute-0 sudo[218893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:40 compute-0 python3[218892]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:40 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 02 19:11:40 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 02 19:11:40 compute-0 podman[218918]: 2025-10-02 19:11:40.98808517 +0000 UTC m=+0.083629231 container create 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:11:41 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 02 19:11:41 compute-0 systemd[1]: Started libpod-conmon-1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb.scope.
Oct 02 19:11:41 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 02 19:11:41 compute-0 podman[218918]: 2025-10-02 19:11:40.963748864 +0000 UTC m=+0.059292955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab98c5f849563f7a51426643d1951973e182aa3da28e933b5b0464e75d9325c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab98c5f849563f7a51426643d1951973e182aa3da28e933b5b0464e75d9325c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:41 compute-0 podman[218918]: 2025-10-02 19:11:41.094504404 +0000 UTC m=+0.190048485 container init 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:41 compute-0 podman[218918]: 2025-10-02 19:11:41.104627733 +0000 UTC m=+0.200171784 container start 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:11:41 compute-0 podman[218918]: 2025-10-02 19:11:41.108880016 +0000 UTC m=+0.204424067 container attach 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.296229518 +0000 UTC m=+0.079795349 container create e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.253243597 +0000 UTC m=+0.036809498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:41 compute-0 systemd[1]: Started libpod-conmon-e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54.scope.
Oct 02 19:11:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.427279586 +0000 UTC m=+0.210845477 container init e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.442806588 +0000 UTC m=+0.226372389 container start e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:41 compute-0 flamboyant_austin[218988]: 167 167
Oct 02 19:11:41 compute-0 systemd[1]: libpod-e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54.scope: Deactivated successfully.
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.447649797 +0000 UTC m=+0.231215708 container attach e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.450254026 +0000 UTC m=+0.233819907 container died e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca121000ba356aaf9c095482644a7f394a58e8660fd7ef117e5774398f754e0-merged.mount: Deactivated successfully.
Oct 02 19:11:41 compute-0 podman[218972]: 2025-10-02 19:11:41.520099319 +0000 UTC m=+0.303665140 container remove e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:11:41 compute-0 ansible-async_wrapper.py[218259]: Done in kid B.
Oct 02 19:11:41 compute-0 systemd[1]: libpod-conmon-e850365be039a038d03f97741f0542cc29f0dd2da82b203794b39cfe0bac2d54.scope: Deactivated successfully.
Oct 02 19:11:41 compute-0 ceph-mon[191910]: 2.1a scrub starts
Oct 02 19:11:41 compute-0 ceph-mon[191910]: 2.1a scrub ok
Oct 02 19:11:41 compute-0 ceph-mon[191910]: Deploying daemon rgw.rgw.compute-0.fkvtrm on compute-0
Oct 02 19:11:41 compute-0 ceph-mon[191910]: 4.1e scrub starts
Oct 02 19:11:41 compute-0 ceph-mon[191910]: 4.1e scrub ok
Oct 02 19:11:41 compute-0 systemd[1]: Reloading.
Oct 02 19:11:41 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:41 compute-0 musing_gates[218945]: 
Oct 02 19:11:41 compute-0 musing_gates[218945]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 02 19:11:41 compute-0 podman[218918]: 2025-10-02 19:11:41.731713086 +0000 UTC m=+0.827257167 container died 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:11:41 compute-0 systemd-rc-local-generator[219055]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:41 compute-0 systemd-sysv-generator[219062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:41 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 02 19:11:41 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 02 19:11:42 compute-0 systemd[1]: libpod-1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb.scope: Deactivated successfully.
Oct 02 19:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab98c5f849563f7a51426643d1951973e182aa3da28e933b5b0464e75d9325c9-merged.mount: Deactivated successfully.
Oct 02 19:11:42 compute-0 podman[218918]: 2025-10-02 19:11:42.101809958 +0000 UTC m=+1.197353999 container remove 1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb (image=quay.io/ceph/ceph:v18, name=musing_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:11:42 compute-0 systemd[1]: libpod-conmon-1d77cb689cdcd09f75750ff1256a92e2cdf382f74727fd9b7d8207c0d0ae8bcb.scope: Deactivated successfully.
Oct 02 19:11:42 compute-0 systemd[1]: Reloading.
Oct 02 19:11:42 compute-0 sudo[218888]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:42 compute-0 systemd-sysv-generator[219114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:42 compute-0 systemd-rc-local-generator[219111]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:42 compute-0 ceph-mon[191910]: 2.1e scrub starts
Oct 02 19:11:42 compute-0 ceph-mon[191910]: 2.1e scrub ok
Oct 02 19:11:42 compute-0 ceph-mon[191910]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:42 compute-0 ceph-mon[191910]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:42 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.fkvtrm for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:11:42 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 02 19:11:42 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 02 19:11:43 compute-0 podman[219165]: 2025-10-02 19:11:43.026803318 +0000 UTC m=+0.106827407 container create fb442b25169af8f759af08e2392c8c0bd51d7d0b5b88311c1801f7935db25c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-rgw-rgw-compute-0-fkvtrm, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:43 compute-0 podman[219165]: 2025-10-02 19:11:42.979194144 +0000 UTC m=+0.059218273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ade3bbfdff804e76a170e9d2d408562f12967979100644801f054b446dd5dca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ade3bbfdff804e76a170e9d2d408562f12967979100644801f054b446dd5dca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ade3bbfdff804e76a170e9d2d408562f12967979100644801f054b446dd5dca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ade3bbfdff804e76a170e9d2d408562f12967979100644801f054b446dd5dca/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.fkvtrm supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 sudo[219206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azrmnhhadckgfumpphzcxzmvipmotxbf ; /usr/bin/python3'
Oct 02 19:11:43 compute-0 sudo[219206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:43 compute-0 podman[219165]: 2025-10-02 19:11:43.240182591 +0000 UTC m=+0.320206730 container init fb442b25169af8f759af08e2392c8c0bd51d7d0b5b88311c1801f7935db25c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-rgw-rgw-compute-0-fkvtrm, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:11:43 compute-0 podman[219165]: 2025-10-02 19:11:43.25486558 +0000 UTC m=+0.334889659 container start fb442b25169af8f759af08e2392c8c0bd51d7d0b5b88311c1801f7935db25c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-rgw-rgw-compute-0-fkvtrm, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:43 compute-0 bash[219165]: fb442b25169af8f759af08e2392c8c0bd51d7d0b5b88311c1801f7935db25c7a
Oct 02 19:11:43 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.fkvtrm for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:11:43 compute-0 radosgw[219210]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:11:43 compute-0 radosgw[219210]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 02 19:11:43 compute-0 radosgw[219210]: framework: beast
Oct 02 19:11:43 compute-0 radosgw[219210]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 02 19:11:43 compute-0 radosgw[219210]: init_numa not setting numa affinity
Oct 02 19:11:43 compute-0 sudo[218893]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:43 compute-0 python3[219208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev ab56eb2c-6698-4dbd-b582-62789dae45d8 (Updating rgw.rgw deployment (+1 -> 1))
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event ab56eb2c-6698-4dbd-b582-62789dae45d8 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 podman[219260]: 2025-10-02 19:11:43.519234567 +0000 UTC m=+0.104361881 container create d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 24f72913-0009-42af-8174-195c4d73f743 (Updating mds.cephfs deployment (+1 -> 1))
Oct 02 19:11:43 compute-0 podman[219260]: 2025-10-02 19:11:43.462775258 +0000 UTC m=+0.047902602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fuygbr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fuygbr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 19:11:43 compute-0 ceph-mon[191910]: 5.6 scrub starts
Oct 02 19:11:43 compute-0 ceph-mon[191910]: 5.6 scrub ok
Oct 02 19:11:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 systemd[1]: Started libpod-conmon-d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454.scope.
Oct 02 19:11:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66a274bab1649b7c0ebf96d0db327e721101506e4cebaa4ddc95589f6fd55d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66a274bab1649b7c0ebf96d0db327e721101506e4cebaa4ddc95589f6fd55d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fuygbr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:43 compute-0 podman[219260]: 2025-10-02 19:11:43.699783747 +0000 UTC m=+0.284911121 container init d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.fuygbr on compute-0
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.fuygbr on compute-0
Oct 02 19:11:43 compute-0 podman[219260]: 2025-10-02 19:11:43.718245127 +0000 UTC m=+0.303372421 container start d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:43 compute-0 podman[219260]: 2025-10-02 19:11:43.730928684 +0000 UTC m=+0.316056078 container attach d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:43 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 11 completed events
Oct 02 19:11:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:11:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:43 compute-0 sudo[219289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:43 compute-0 sudo[219289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:43 compute-0 sudo[219289]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:43 compute-0 sudo[219314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:43 compute-0 sudo[219314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:43 compute-0 sudo[219314]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 02 19:11:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 02 19:11:44 compute-0 sudo[219339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:44 compute-0 sudo[219339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:44 compute-0 sudo[219339]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:44 compute-0 sudo[219383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 6019f664-a1c2-5955-8391-692cb79a59f9
Oct 02 19:11:44 compute-0 sudo[219383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:44 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:44 compute-0 eloquent_hermann[219285]: 
Oct 02 19:11:44 compute-0 eloquent_hermann[219285]: [{"container_id": "56c88518a73e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.41%", "created": "2025-10-02T19:09:41.771499Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-02T19:09:41.829487Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215251Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-10-02T19:09:41.591835Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@crash.compute-0", "version": "18.2.7"}, {"container_id": "f7f69af0ab81", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.77%", "created": "2025-10-02T19:08:17.800295Z", "daemon_id": "compute-0.uktbkz", "daemon_name": "mgr.compute-0.uktbkz", "daemon_type": "mgr", "events": ["2025-10-02T19:10:49.547285Z daemon:mgr.compute-0.uktbkz [INFO] \"Reconfigured mgr.compute-0.uktbkz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215166Z", "memory_usage": 549244108, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-02T19:08:17.440941Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mgr.compute-0.uktbkz", "version": "18.2.7"}, {"container_id": "a22d7e12819e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.04%", "created": "2025-10-02T19:08:09.719083Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-02T19:10:48.515540Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215052Z", "memory_request": 2147483648, "memory_usage": 40380661, "ports": [], "service_name": "mon", "started": "2025-10-02T19:08:13.817099Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@mon.compute-0", "version": "18.2.7"}, {"container_id": "67a0f30cd91e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.21%", "created": "2025-10-02T19:10:15.689393Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-02T19:10:15.781250Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215330Z", "memory_request": 4294967296, "memory_usage": 68188897, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T19:10:15.487249Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@osd.0", "version": "18.2.7"}, {"container_id": "0b15b60ae73a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.89%", "created": "2025-10-02T19:10:22.267567Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-02T19:10:22.367935Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215489Z", "memory_request": 4294967296, "memory_usage": 69866618, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T19:10:22.080301Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@osd.1", "version": "18.2.7"}, {"container_id": "a1dbd8bcb63e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.59%", "created": "2025-10-02T19:10:29.172751Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-02T19:10:29.331263Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T19:11:32.215569Z", "memory_request": 4294967296, "memory_usage": 65672314, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T19:10:28.834696Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-6019f664-a1c2-5955-8391-692cb79a59f9@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.fkvtrm", "daemon_name": "rgw.rgw.compute-0.fkvtrm", "daemon_type": "rgw", "events": ["2025-10-02T19:11:43.462651Z daemon:rgw.rgw.compute-0.fkvtrm [INFO] \"Deployed rgw.rgw.compute-0.fkvtrm on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct 02 19:11:44 compute-0 systemd[1]: libpod-d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454.scope: Deactivated successfully.
Oct 02 19:11:44 compute-0 podman[219260]: 2025-10-02 19:11:44.370801196 +0000 UTC m=+0.955928500 container died d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 02 19:11:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 02 19:11:44 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 02 19:11:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 02 19:11:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 19:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66a274bab1649b7c0ebf96d0db327e721101506e4cebaa4ddc95589f6fd55d0-merged.mount: Deactivated successfully.
Oct 02 19:11:44 compute-0 podman[219260]: 2025-10-02 19:11:44.458277958 +0000 UTC m=+1.043405242 container remove d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454 (image=quay.io/ceph/ceph:v18, name=eloquent_hermann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:44 compute-0 systemd[1]: libpod-conmon-d99e672987a27c4bb65fab793ea32b8a31fe581fc3d5a0af3159dc15935ca454.scope: Deactivated successfully.
Oct 02 19:11:44 compute-0 sudo[219206]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:44 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 41 pg[8.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:44 compute-0 ceph-mon[191910]: 5.8 scrub starts
Oct 02 19:11:44 compute-0 ceph-mon[191910]: 5.8 scrub ok
Oct 02 19:11:44 compute-0 ceph-mon[191910]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:44 compute-0 ceph-mon[191910]: Saving service rgw.rgw spec with placement compute-0
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fuygbr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fuygbr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:44 compute-0 ceph-mon[191910]: Deploying daemon mds.cephfs.compute-0.fuygbr on compute-0
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:44 compute-0 ceph-mon[191910]: 4.1f scrub starts
Oct 02 19:11:44 compute-0 ceph-mon[191910]: 4.1f scrub ok
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 19:11:44 compute-0 ceph-mon[191910]: osdmap e41: 3 total, 3 up, 3 in
Oct 02 19:11:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.705841608 +0000 UTC m=+0.072636478 container create f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:11:44 compute-0 systemd[1]: Started libpod-conmon-f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8.scope.
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.685070097 +0000 UTC m=+0.051864997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.795076847 +0000 UTC m=+0.161871737 container init f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.802912335 +0000 UTC m=+0.169707195 container start f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.806747256 +0000 UTC m=+0.173542136 container attach f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:11:44 compute-0 romantic_roentgen[219476]: 167 167
Oct 02 19:11:44 compute-0 systemd[1]: libpod-f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8.scope: Deactivated successfully.
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.814647026 +0000 UTC m=+0.181441896 container died f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aecca350380d1b2a369787f208599d86ae04dec36bc010557a61e15f920043d-merged.mount: Deactivated successfully.
Oct 02 19:11:44 compute-0 podman[219460]: 2025-10-02 19:11:44.859047795 +0000 UTC m=+0.225842665 container remove f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_roentgen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:44 compute-0 systemd[1]: libpod-conmon-f97e362aaab632ffb2cd66b9cb3c7309e24e06b887fb3f0e72c6cf7b32231ac8.scope: Deactivated successfully.
Oct 02 19:11:44 compute-0 systemd[1]: Reloading.
Oct 02 19:11:44 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 02 19:11:45 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 02 19:11:45 compute-0 systemd-sysv-generator[219519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:45 compute-0 systemd-rc-local-generator[219516]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 02 19:11:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 19:11:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 02 19:11:45 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 02 19:11:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v114: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:45 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 42 pg[8.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:45 compute-0 systemd[1]: Reloading.
Oct 02 19:11:45 compute-0 podman[219530]: 2025-10-02 19:11:45.578638083 +0000 UTC m=+0.137448879 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:11:45 compute-0 systemd-rc-local-generator[219611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:45 compute-0 systemd-sysv-generator[219615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:45 compute-0 sudo[219566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdywhzoghmzxqsoizojzuprftagnntu ; /usr/bin/python3'
Oct 02 19:11:45 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.fuygbr for 6019f664-a1c2-5955-8391-692cb79a59f9...
Oct 02 19:11:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 02 19:11:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 02 19:11:45 compute-0 sudo[219566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:46 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 02 19:11:46 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 02 19:11:46 compute-0 python3[219622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
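Annotation: the Ansible task above drives the ceph CLI through a throwaway podman container, mounting the host's ceph.conf and admin keyring and asking for cluster status as JSON (-s -f json). Below is a minimal Python sketch of the same invocation; the image, fsid, and paths are copied from the log line, and the extra assimilate_ceph.conf mount (used only by later assimilation tasks) is omitted. This is a sketch of the command seen here, not the deployment's actual module.

# Sketch: reproduce the containerized "ceph -s -f json" call from the task above.
import json
import subprocess

CMD = [
    "podman", "run", "--rm", "--net=host", "--ipc=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
    "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "-s", "-f", "json",
]

out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
status = json.loads(out)
print(status["health"]["status"])  # e.g. HEALTH_ERR while the MDS is still down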
Oct 02 19:11:46 compute-0 podman[219653]: 2025-10-02 19:11:46.252957029 +0000 UTC m=+0.083426245 container create 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:11:46 compute-0 systemd[1]: Started libpod-conmon-9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f.scope.
Oct 02 19:11:46 compute-0 podman[219653]: 2025-10-02 19:11:46.216763659 +0000 UTC m=+0.047232855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb336414a235c14e76f537d40d6be5198240576801cad2f1d9d6dff999406ea2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb336414a235c14e76f537d40d6be5198240576801cad2f1d9d6dff999406ea2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 podman[219678]: 2025-10-02 19:11:46.349665226 +0000 UTC m=+0.069655900 container create 647c28a08f32febe7ed397c0ef16454340ca1d149f4526d6d262e80b2e3d2b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mds-cephfs-compute-0-fuygbr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:46 compute-0 podman[219653]: 2025-10-02 19:11:46.374815303 +0000 UTC m=+0.205284479 container init 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:11:46 compute-0 podman[219679]: 2025-10-02 19:11:46.380770411 +0000 UTC m=+0.085588582 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:11:46 compute-0 podman[219653]: 2025-10-02 19:11:46.385496977 +0000 UTC m=+0.215966163 container start 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:46 compute-0 podman[219653]: 2025-10-02 19:11:46.390965872 +0000 UTC m=+0.221435108 container attach 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558664b1cd689fc2b9623d7add1ad7a232b3121286d49ff7504a6b87f7f2463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558664b1cd689fc2b9623d7add1ad7a232b3121286d49ff7504a6b87f7f2463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558664b1cd689fc2b9623d7add1ad7a232b3121286d49ff7504a6b87f7f2463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558664b1cd689fc2b9623d7add1ad7a232b3121286d49ff7504a6b87f7f2463/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.fuygbr supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:46 compute-0 podman[219678]: 2025-10-02 19:11:46.319370942 +0000 UTC m=+0.039361656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 02 19:11:46 compute-0 ceph-mon[191910]: 5.a scrub starts
Oct 02 19:11:46 compute-0 ceph-mon[191910]: 5.a scrub ok
Oct 02 19:11:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 19:11:46 compute-0 ceph-mon[191910]: osdmap e42: 3 total, 3 up, 3 in
Oct 02 19:11:46 compute-0 ceph-mon[191910]: pgmap v114: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:46 compute-0 ceph-mon[191910]: 6.3 scrub starts
Oct 02 19:11:46 compute-0 ceph-mon[191910]: 6.3 scrub ok
Oct 02 19:11:46 compute-0 podman[219678]: 2025-10-02 19:11:46.435886534 +0000 UTC m=+0.155877238 container init 647c28a08f32febe7ed397c0ef16454340ca1d149f4526d6d262e80b2e3d2b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mds-cephfs-compute-0-fuygbr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 19:11:46 compute-0 podman[219678]: 2025-10-02 19:11:46.453798069 +0000 UTC m=+0.173788743 container start 647c28a08f32febe7ed397c0ef16454340ca1d149f4526d6d262e80b2e3d2b05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mds-cephfs-compute-0-fuygbr, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:11:46 compute-0 bash[219678]: 647c28a08f32febe7ed397c0ef16454340ca1d149f4526d6d262e80b2e3d2b05
Oct 02 19:11:46 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.fuygbr for 6019f664-a1c2-5955-8391-692cb79a59f9.
Oct 02 19:11:46 compute-0 ceph-mds[219722]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:11:46 compute-0 ceph-mds[219722]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 02 19:11:46 compute-0 ceph-mds[219722]: main not setting numa affinity
Oct 02 19:11:46 compute-0 ceph-mds[219722]: pidfile_write: ignore empty --pid-file
Oct 02 19:11:46 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mds-cephfs-compute-0-fuygbr[219718]: starting mds.cephfs.compute-0.fuygbr at 
Oct 02 19:11:46 compute-0 sudo[219383]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:46 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr Updating MDS map to version 2 from mon.0
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:46 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 24f72913-0009-42af-8174-195c4d73f743 (Updating mds.cephfs deployment (+1 -> 1))
Oct 02 19:11:46 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 24f72913-0009-42af-8174-195c4d73f743 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 02 19:11:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:46 compute-0 sudo[219741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:46 compute-0 sudo[219741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:46 compute-0 sudo[219741]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:46 compute-0 sudo[219766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:11:46 compute-0 sudo[219766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:46 compute-0 sudo[219766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:46 compute-0 sudo[219810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:46 compute-0 sudo[219810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:46 compute-0 sudo[219810]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:46 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 43 pg[9.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:46 compute-0 sudo[219835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:46 compute-0 sudo[219835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:46 compute-0 sudo[219835]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304931633' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:11:47 compute-0 musing_cori[219695]: 
Oct 02 19:11:47 compute-0 musing_cori[219695]: {"fsid":"6019f664-a1c2-5955-8391-692cb79a59f9","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":212,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1759432236,"num_in_osds":3,"osd_in_since":1759432202,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"creating+peering","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84123648,"bytes_avail":64327802880,"bytes_total":64411926528,"inactive_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-10-02T19:11:37.437773+0000","services":{}},"progress_events":{"24f72913-0009-42af-8174-195c4d73f743":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
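Annotation: the payload printed by musing_cori is the full ceph status document; HEALTH_ERR is expected at this instant because the fsmap (epoch 2) still shows 0 of 1 MDS ranks up. A short sketch that summarizes the interesting fields, assuming the JSON line has been captured to a file (ceph_status.json is a hypothetical name; the key names match the output above):

# Sketch: summarize the "ceph -s -f json" payload shown in the log line above.
import json

raw = open("ceph_status.json").read()  # hypothetical capture of the journal payload
status = json.loads(raw)

print("health:", status["health"]["status"])            # HEALTH_ERR here
for name, check in status["health"]["checks"].items():  # MDS_ALL_DOWN, MDS_UP_LESS_THAN_MAX
    print(f"  {name}: {check['summary']['message']}")

for pgs in status["pgmap"]["pgs_by_state"]:             # 193 active+clean, 1 creating+peering
    print(pgs["state_name"], pgs["count"])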
Oct 02 19:11:47 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 02 19:11:47 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 02 19:11:47 compute-0 systemd[1]: libpod-9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f.scope: Deactivated successfully.
Oct 02 19:11:47 compute-0 conmon[219695]: conmon 9d2ea7b41d90a60a2b52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f.scope/container/memory.events
Oct 02 19:11:47 compute-0 sudo[219860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:47 compute-0 sudo[219860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:47 compute-0 sudo[219860]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:47 compute-0 podman[219886]: 2025-10-02 19:11:47.108991818 +0000 UTC m=+0.032991366 container died 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:47 compute-0 sudo[219893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:11:47 compute-0 sudo[219893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb336414a235c14e76f537d40d6be5198240576801cad2f1d9d6dff999406ea2-merged.mount: Deactivated successfully.
Oct 02 19:11:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v116: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 02 19:11:47 compute-0 podman[219886]: 2025-10-02 19:11:47.52581035 +0000 UTC m=+0.449809928 container remove 9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f (image=quay.io/ceph/ceph:v18, name=musing_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:47 compute-0 systemd[1]: libpod-conmon-9d2ea7b41d90a60a2b523454e33dd063b298608c9bb122d0cc593755896c545f.scope: Deactivated successfully.
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 02 19:11:47 compute-0 sudo[219566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e3 new map
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e3 print_map
                                            e3
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        2
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-10-02T19:11:23.853937+0000
                                            modified        2025-10-02T19:11:23.854004+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        
                                            up        {}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                             
                                             
                                            Standby daemons:
                                             
                                            [mds.cephfs.compute-0.fuygbr{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 19:11:47 compute-0 ceph-mon[191910]: 7.d scrub starts
Oct 02 19:11:47 compute-0 ceph-mon[191910]: 7.d scrub ok
Oct 02 19:11:47 compute-0 ceph-mon[191910]: osdmap e43: 3 total, 3 up, 3 in
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/304931633' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 19:11:47 compute-0 ceph-mon[191910]: 6.5 scrub starts
Oct 02 19:11:47 compute-0 ceph-mon[191910]: 6.5 scrub ok
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr Updating MDS map to version 3 from mon.0
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr Monitors have assigned me to become a standby.
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] up:boot
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] as mds.0
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.fuygbr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 19:11:47 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 44 pg[9.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.fuygbr"} v 0) v1
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fuygbr"}]: dispatch
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e3 all = 0
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e4 new map
Oct 02 19:11:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e4 print_map
                                            e4
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        4
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-10-02T19:11:23.853937+0000
                                            modified        2025-10-02T19:11:47.698023+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14269}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.fuygbr{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr Updating MDS map to version 4 from mon.0
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x1
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x100
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x600
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x601
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x602
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x603
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x604
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.fuygbr=up:creating}
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x605
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x606
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x607
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x608
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.cache creating system inode with ino:0x609
Oct 02 19:11:47 compute-0 ceph-mds[219722]: mds.0.4 creating_done
Oct 02 19:11:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.fuygbr is now active in filesystem cephfs as rank 0
Oct 02 19:11:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 02 19:11:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 02 19:11:48 compute-0 podman[220003]: 2025-10-02 19:11:48.134614257 +0000 UTC m=+0.125076280 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:11:48 compute-0 podman[220003]: 2025-10-02 19:11:48.223806785 +0000 UTC m=+0.214268798 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:48 compute-0 sudo[220092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfjsbdzdheetdcynfzdzofwdfigaaim ; /usr/bin/python3'
Oct 02 19:11:48 compute-0 sudo[220092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 02 19:11:48 compute-0 ceph-mon[191910]: pgmap v116: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:11:48 compute-0 ceph-mon[191910]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 19:11:48 compute-0 ceph-mon[191910]: osdmap e44: 3 total, 3 up, 3 in
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mds.? [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] up:boot
Oct 02 19:11:48 compute-0 ceph-mon[191910]: daemon mds.cephfs.compute-0.fuygbr assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: fsmap cephfs:0 1 up:standby
Oct 02 19:11:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fuygbr"}]: dispatch
Oct 02 19:11:48 compute-0 ceph-mon[191910]: fsmap cephfs:1 {0=cephfs.compute-0.fuygbr=up:creating}
Oct 02 19:11:48 compute-0 ceph-mon[191910]: daemon mds.cephfs.compute-0.fuygbr is now active in filesystem cephfs as rank 0
Oct 02 19:11:48 compute-0 ceph-mon[191910]: 5.b scrub starts
Oct 02 19:11:48 compute-0 ceph-mon[191910]: 5.b scrub ok
Oct 02 19:11:48 compute-0 python3[220102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 02 19:11:48 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 19:11:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e5 new map
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).mds e5 print_map
                                            e5
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        5
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-10-02T19:11:23.853937+0000
                                            modified        2025-10-02T19:11:48.724947+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14269}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.fuygbr{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Oct 02 19:11:48 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr Updating MDS map to version 5 from mon.0
Oct 02 19:11:48 compute-0 ceph-mds[219722]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 02 19:11:48 compute-0 ceph-mds[219722]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 02 19:11:48 compute-0 ceph-mds[219722]: mds.0.4 recovery_done -- successful recovery!
Oct 02 19:11:48 compute-0 ceph-mds[219722]: mds.0.4 active_start
Oct 02 19:11:48 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] up:active
Oct 02 19:11:48 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.fuygbr=up:active}
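Annotation: between MDS map epochs 3 and 5 the daemon walks up:boot -> up:standby -> up:creating -> up:active, clearing MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX along the way. A hedged sketch of a wait loop for that transition, polling the same status JSON until the fsmap reports all ranks up; ceph_status() is a stand-in for however the CLI is invoked (e.g. the podman form sketched earlier):

# Sketch: wait for the cephfs MDS to reach up:active by polling status JSON.
import json
import subprocess
import time

def ceph_status() -> dict:
    out = subprocess.run(
        ["ceph", "-s", "-f", "json"],  # assumes a host ceph CLI; swap in the podman wrapper if needed
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

deadline = time.monotonic() + 60
while time.monotonic() < deadline:
    fsmap = ceph_status()["fsmap"]
    if fsmap["up"] >= fsmap["max"]:    # e.g. 1 rank up with max_mds 1, as in epoch 5 above
        break
    time.sleep(2)
else:
    raise TimeoutError("MDS did not become active within 60s")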
Oct 02 19:11:48 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 12 completed events
Oct 02 19:11:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:11:48 compute-0 podman[220130]: 2025-10-02 19:11:48.778539417 +0000 UTC m=+0.072430433 container create 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:48 compute-0 systemd[1]: Started libpod-conmon-680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c.scope.
Oct 02 19:11:48 compute-0 podman[220130]: 2025-10-02 19:11:48.755838445 +0000 UTC m=+0.049729491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df4a62b014a7d84cf30532cc6e6238fba4e0e8485bdee28e0f474f38c971eb68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df4a62b014a7d84cf30532cc6e6238fba4e0e8485bdee28e0f474f38c971eb68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:48 compute-0 podman[220130]: 2025-10-02 19:11:48.893800916 +0000 UTC m=+0.187691942 container init 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:11:48 compute-0 podman[220130]: 2025-10-02 19:11:48.904633034 +0000 UTC m=+0.198524060 container start 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:48 compute-0 podman[220130]: 2025-10-02 19:11:48.911465315 +0000 UTC m=+0.205356411 container attach 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 02 19:11:49 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [2] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 02 19:11:49 compute-0 podman[220186]: 2025-10-02 19:11:49.098710125 +0000 UTC m=+0.097079278 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler)
Oct 02 19:11:49 compute-0 sudo[219893]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:49 compute-0 sudo[220217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:49 compute-0 sudo[220217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:49 compute-0 sudo[220217]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 sudo[220259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:49 compute-0 sudo[220259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:49 compute-0 sudo[220259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v119: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 3 op/s
Oct 02 19:11:49 compute-0 sudo[220286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:49 compute-0 sudo[220286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:49 compute-0 sudo[220286]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 02 19:11:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781852428' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:11:49 compute-0 hungry_wilson[220165]: 
Oct 02 19:11:49 compute-0 hungry_wilson[220165]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.fkvtrm","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 02 19:11:49 compute-0 systemd[1]: libpod-680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c.scope: Deactivated successfully.
Oct 02 19:11:49 compute-0 podman[220130]: 2025-10-02 19:11:49.57102549 +0000 UTC m=+0.864916526 container died 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-df4a62b014a7d84cf30532cc6e6238fba4e0e8485bdee28e0f474f38c971eb68-merged.mount: Deactivated successfully.
Oct 02 19:11:49 compute-0 sudo[220312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:11:49 compute-0 sudo[220312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:49 compute-0 podman[220130]: 2025-10-02 19:11:49.633982231 +0000 UTC m=+0.927873237 container remove 680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c (image=quay.io/ceph/ceph:v18, name=hungry_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:49 compute-0 systemd[1]: libpod-conmon-680ec66aacb8e6f5a69fb9d62d59a9624c31d07d4319cd7852a1be8f7c3a752c.scope: Deactivated successfully.
Oct 02 19:11:49 compute-0 sudo[220092]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 02 19:11:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 02 19:11:49 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 02 19:11:49 compute-0 ceph-mon[191910]: osdmap e45: 3 total, 3 up, 3 in
Oct 02 19:11:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 19:11:49 compute-0 ceph-mon[191910]: mds.? [v2:192.168.122.100:6814/816506983,v1:192.168.122.100:6815/816506983] up:active
Oct 02 19:11:49 compute-0 ceph-mon[191910]: fsmap cephfs:1 {0=cephfs.compute-0.fuygbr=up:active}
Oct 02 19:11:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:49 compute-0 ceph-mon[191910]: 5.d scrub starts
Oct 02 19:11:49 compute-0 ceph-mon[191910]: 5.d scrub ok
Oct 02 19:11:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2781852428' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 19:11:49 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [2] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:49 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 02 19:11:49 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 02 19:11:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 02 19:11:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 02 19:11:50 compute-0 sudo[220312]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:50 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4254ae6b-7a5d-4b13-a444-b439e22960e7 does not exist
Oct 02 19:11:50 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a583da3c-3bef-4a33-87b6-b54ef9166438 does not exist
Oct 02 19:11:50 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b3949810-aedd-488c-818e-3f5001bdac4d does not exist
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:50 compute-0 sudo[220395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:50 compute-0 sudo[220395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:50 compute-0 sudo[220395]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:50 compute-0 sudo[220420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:50 compute-0 sudo[220420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:50 compute-0 sudo[220420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:50 compute-0 sudo[220445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:50 compute-0 sudo[220445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:50 compute-0 sudo[220445]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:50 compute-0 sudo[220495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyfekolomzngxiepbzukxyvtvhbgktp ; /usr/bin/python3'
Oct 02 19:11:50 compute-0 sudo[220495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:50 compute-0 sudo[220493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:11:50 compute-0 sudo[220493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 02 19:11:50 compute-0 python3[220507]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:50 compute-0 ceph-mon[191910]: pgmap v119: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 3 op/s
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3650402172' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 19:11:50 compute-0 ceph-mon[191910]: osdmap e46: 3 total, 3 up, 3 in
Oct 02 19:11:50 compute-0 ceph-mon[191910]: 6.7 scrub starts
Oct 02 19:11:50 compute-0 ceph-mon[191910]: 6.7 scrub ok
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 02 19:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 19:11:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 19:11:50 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 47 pg[11.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:11:50 compute-0 podman[220521]: 2025-10-02 19:11:50.866249714 +0000 UTC m=+0.094753425 container create e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:50 compute-0 podman[220521]: 2025-10-02 19:11:50.809498088 +0000 UTC m=+0.038001839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:50 compute-0 systemd[1]: Started libpod-conmon-e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8.scope.
Oct 02 19:11:51 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 02 19:11:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cde034068a94b8a38b686d5ad861c7091878b775b6f771153c4416e7856bbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cde034068a94b8a38b686d5ad861c7091878b775b6f771153c4416e7856bbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:51 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 02 19:11:51 compute-0 podman[220521]: 2025-10-02 19:11:51.0691934 +0000 UTC m=+0.297697152 container init e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:11:51 compute-0 podman[220521]: 2025-10-02 19:11:51.079910875 +0000 UTC m=+0.308414576 container start e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:51 compute-0 podman[220521]: 2025-10-02 19:11:51.12683941 +0000 UTC m=+0.355343141 container attach e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.355599442 +0000 UTC m=+0.079987404 container create a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.304310141 +0000 UTC m=+0.028698153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Oct 02 19:11:51 compute-0 systemd[1]: Started libpod-conmon-a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6.scope.
Oct 02 19:11:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.575151649 +0000 UTC m=+0.299539621 container init a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.588727989 +0000 UTC m=+0.313115961 container start a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:11:51 compute-0 tender_matsumoto[220608]: 167 167
Oct 02 19:11:51 compute-0 systemd[1]: libpod-a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6.scope: Deactivated successfully.
Oct 02 19:11:51 compute-0 conmon[220608]: conmon a9e6f961a5b16aad5023 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6.scope/container/memory.events
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.645314011 +0000 UTC m=+0.369702013 container attach a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:51 compute-0 podman[220573]: 2025-10-02 19:11:51.646180304 +0000 UTC m=+0.370568326 container died a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 02 19:11:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276872263' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 19:11:51 compute-0 sad_bohr[220546]: mimic
Oct 02 19:11:51 compute-0 systemd[1]: libpod-e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8.scope: Deactivated successfully.
Oct 02 19:11:51 compute-0 podman[220521]: 2025-10-02 19:11:51.758211637 +0000 UTC m=+0.986715338 container died e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:11:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 02 19:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-42cde034068a94b8a38b686d5ad861c7091878b775b6f771153c4416e7856bbc-merged.mount: Deactivated successfully.
Oct 02 19:11:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 19:11:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 02 19:11:51 compute-0 ceph-mon[191910]: 7.10 scrub starts
Oct 02 19:11:51 compute-0 ceph-mon[191910]: 7.10 scrub ok
Oct 02 19:11:51 compute-0 ceph-mon[191910]: osdmap e47: 3 total, 3 up, 3 in
Oct 02 19:11:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 19:11:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/276872263' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 19:11:51 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 02 19:11:51 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0af43088710c0c7c43f180a8ca42619160f593455be926b0bb6012ef082ce40f-merged.mount: Deactivated successfully.
Oct 02 19:11:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 19:11:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 19:11:51 compute-0 podman[220521]: 2025-10-02 19:11:51.97496553 +0000 UTC m=+1.203469271 container remove e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8 (image=quay.io/ceph/ceph:v18, name=sad_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:11:52 compute-0 podman[220573]: 2025-10-02 19:11:52.004199196 +0000 UTC m=+0.728587168 container remove a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_matsumoto, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:52 compute-0 systemd[1]: libpod-conmon-e4f26b0689ba0a7c2e5ac122125cb8a6eb8eb8e3c0174b3a7100229d61c54fa8.scope: Deactivated successfully.
Oct 02 19:11:52 compute-0 sudo[220495]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:52 compute-0 systemd[1]: libpod-conmon-a9e6f961a5b16aad50238a91baae00c8e942519da47706dc0d55fa8210a3a9c6.scope: Deactivated successfully.
Oct 02 19:11:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 02 19:11:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 02 19:11:52 compute-0 podman[220646]: 2025-10-02 19:11:52.221281137 +0000 UTC m=+0.066170547 container create e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:52 compute-0 podman[220646]: 2025-10-02 19:11:52.192141564 +0000 UTC m=+0.037030984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:52 compute-0 systemd[1]: Started libpod-conmon-e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad.scope.
Oct 02 19:11:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:52 compute-0 podman[220646]: 2025-10-02 19:11:52.356880206 +0000 UTC m=+0.201769616 container init e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:52 compute-0 podman[220646]: 2025-10-02 19:11:52.369810209 +0000 UTC m=+0.214699599 container start e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:11:52 compute-0 podman[220646]: 2025-10-02 19:11:52.374258007 +0000 UTC m=+0.219147427 container attach e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 02 19:11:52 compute-0 sudo[220690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqcuflnswityldoiyonaeaadfwducmqo ; /usr/bin/python3'
Oct 02 19:11:52 compute-0 sudo[220690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 19:11:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 02 19:11:53 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 02 19:11:53 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 02 19:11:53 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 02 19:11:53 compute-0 ceph-mon[191910]: 7.12 scrub starts
Oct 02 19:11:53 compute-0 ceph-mon[191910]: 7.12 scrub ok
Oct 02 19:11:53 compute-0 ceph-mon[191910]: pgmap v122: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Oct 02 19:11:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 19:11:53 compute-0 ceph-mon[191910]: osdmap e48: 3 total, 3 up, 3 in
Oct 02 19:11:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 19:11:53 compute-0 ceph-mon[191910]: 6.9 scrub starts
Oct 02 19:11:53 compute-0 ceph-mon[191910]: 6.9 scrub ok
Oct 02 19:11:53 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 02 19:11:53 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 02 19:11:53 compute-0 python3[220692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:11:53 compute-0 podman[220701]: 2025-10-02 19:11:53.291335706 +0000 UTC m=+0.056213182 container create 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:11:53 compute-0 systemd[1]: Started libpod-conmon-304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb.scope.
Oct 02 19:11:53 compute-0 podman[220701]: 2025-10-02 19:11:53.267694739 +0000 UTC m=+0.032572295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:11:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c16ce236a962aec380d3c2bb59d67ea4c9f3e08d170f87345d7f6ad5cd216/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487c16ce236a962aec380d3c2bb59d67ea4c9f3e08d170f87345d7f6ad5cd216/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:53 compute-0 podman[220701]: 2025-10-02 19:11:53.393888638 +0000 UTC m=+0.158766114 container init 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:11:53 compute-0 podman[220701]: 2025-10-02 19:11:53.404071879 +0000 UTC m=+0.168949335 container start 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:53 compute-0 radosgw[219210]: LDAP not started since no server URIs were provided in the configuration.
Oct 02 19:11:53 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-rgw-rgw-compute-0-fkvtrm[219181]: 2025-10-02T19:11:53.403+0000 7fc5248e9940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 02 19:11:53 compute-0 radosgw[219210]: framework: beast
Oct 02 19:11:53 compute-0 radosgw[219210]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 02 19:11:53 compute-0 radosgw[219210]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 02 19:11:53 compute-0 podman[220701]: 2025-10-02 19:11:53.408486846 +0000 UTC m=+0.173364302 container attach 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:53 compute-0 radosgw[219210]: starting handler: beast
Oct 02 19:11:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 10 op/s
Oct 02 19:11:53 compute-0 radosgw[219210]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 19:11:53 compute-0 radosgw[219210]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.fkvtrm,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f157a659-25a5-4b41-847d-f6a3c8433d14,zone_name=default,zonegroup_id=a12c437b-bce6-45f3-8a29-fcaf1e8a3cf2,zonegroup_name=default}
Oct 02 19:11:53 compute-0 flamboyant_mestorf[220662]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:11:53 compute-0 flamboyant_mestorf[220662]: --> relative data size: 1.0
Oct 02 19:11:53 compute-0 flamboyant_mestorf[220662]: --> All data devices are unavailable
Oct 02 19:11:53 compute-0 systemd[1]: libpod-e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad.scope: Deactivated successfully.
Oct 02 19:11:53 compute-0 systemd[1]: libpod-e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad.scope: Consumed 1.126s CPU time.
Oct 02 19:11:53 compute-0 conmon[220662]: conmon e094f07088fd4d126a01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad.scope/container/memory.events
Oct 02 19:11:53 compute-0 podman[221279]: 2025-10-02 19:11:53.620115192 +0000 UTC m=+0.037091445 container died e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-770a192fc43feaa0d4e2550fbd45037df4a1f0aae5cbae356261625792e66b0c-merged.mount: Deactivated successfully.
Oct 02 19:11:53 compute-0 podman[221279]: 2025-10-02 19:11:53.806805767 +0000 UTC m=+0.223781990 container remove e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 19:11:53 compute-0 systemd[1]: libpod-conmon-e094f07088fd4d126a01736326acff9985e8701f9e7d7dffb693ed5039e32cad.scope: Deactivated successfully.
Oct 02 19:11:53 compute-0 sudo[220493]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:53 compute-0 sudo[221312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:53 compute-0 sudo[221312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:53 compute-0 sudo[221312]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:54 compute-0 sudo[221337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:54 compute-0 sudo[221337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:54 compute-0 sudo[221337]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 02 19:11:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914652927' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 19:11:54 compute-0 agitated_goldberg[220722]: 
Oct 02 19:11:54 compute-0 systemd[1]: libpod-304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb.scope: Deactivated successfully.
Oct 02 19:11:54 compute-0 agitated_goldberg[220722]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct 02 19:11:54 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 19:11:54 compute-0 ceph-mon[191910]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 02 19:11:54 compute-0 podman[220701]: 2025-10-02 19:11:54.122944998 +0000 UTC m=+0.887822484 container died 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/347096427' entity='client.rgw.rgw.compute-0.fkvtrm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 19:11:54 compute-0 ceph-mon[191910]: osdmap e49: 3 total, 3 up, 3 in
Oct 02 19:11:54 compute-0 ceph-mon[191910]: 7.14 scrub starts
Oct 02 19:11:54 compute-0 ceph-mon[191910]: 7.14 scrub ok
Oct 02 19:11:54 compute-0 ceph-mon[191910]: 6.a scrub starts
Oct 02 19:11:54 compute-0 ceph-mon[191910]: 6.a scrub ok
Oct 02 19:11:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/914652927' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 19:11:54 compute-0 sudo[221363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:54 compute-0 sudo[221363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:54 compute-0 sudo[221363]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-487c16ce236a962aec380d3c2bb59d67ea4c9f3e08d170f87345d7f6ad5cd216-merged.mount: Deactivated successfully.
Oct 02 19:11:54 compute-0 sudo[221401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:11:54 compute-0 sudo[221401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:54 compute-0 podman[220701]: 2025-10-02 19:11:54.36987895 +0000 UTC m=+1.134756406 container remove 304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb (image=quay.io/ceph/ceph:v18, name=agitated_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:54 compute-0 systemd[1]: libpod-conmon-304a0ca0edeb24e465937d1c24e8db98f24b915d0e72f87cad6ec0eefac22beb.scope: Deactivated successfully.
Oct 02 19:11:54 compute-0 sudo[220690]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:54 compute-0 podman[221464]: 2025-10-02 19:11:54.729959407 +0000 UTC m=+0.035173455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:54 compute-0 podman[221464]: 2025-10-02 19:11:54.884121158 +0000 UTC m=+0.189335186 container create c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:54 compute-0 systemd[1]: Started libpod-conmon-c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0.scope.
Oct 02 19:11:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:54 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 02 19:11:54 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 02 19:11:55 compute-0 podman[221464]: 2025-10-02 19:11:55.001470203 +0000 UTC m=+0.306684271 container init c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:55 compute-0 podman[221464]: 2025-10-02 19:11:55.012967728 +0000 UTC m=+0.318181766 container start c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:11:55 compute-0 unruffled_taussig[221479]: 167 167
Oct 02 19:11:55 compute-0 podman[221464]: 2025-10-02 19:11:55.019284116 +0000 UTC m=+0.324498174 container attach c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:55 compute-0 systemd[1]: libpod-c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0.scope: Deactivated successfully.
Oct 02 19:11:55 compute-0 podman[221464]: 2025-10-02 19:11:55.020549699 +0000 UTC m=+0.325763727 container died c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:11:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1462a286ff70e2c69bf7c266acb73e9aa51d5215c70f2f67e7e3fdf0e35d3599-merged.mount: Deactivated successfully.
Oct 02 19:11:55 compute-0 podman[221464]: 2025-10-02 19:11:55.08198845 +0000 UTC m=+0.387202478 container remove c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:11:55 compute-0 systemd[1]: libpod-conmon-c764603f8d64a3d9ef836c8da7e97adfc246431d01c553589c5b9501c0e697d0.scope: Deactivated successfully.
Oct 02 19:11:55 compute-0 ceph-mon[191910]: pgmap v125: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 10 op/s
Oct 02 19:11:55 compute-0 ceph-mon[191910]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 19:11:55 compute-0 ceph-mon[191910]: Cluster is now healthy
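POOL_APP_NOT_ENABLED, cleared just above, is raised while a pool carries no application tag and goes away once one is set. For reference, a hedged sketch of the usual remedy via the ceph CLI; the pool and application names are placeholders, not values taken from this log:

    import subprocess

    # Tag a pool with an application so POOL_APP_NOT_ENABLED clears;
    # "images" and "rbd" are illustrative placeholders only.
    subprocess.run(["ceph", "osd", "pool", "application", "enable",
                    "images", "rbd"], check=True)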
Oct 02 19:11:55 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct 02 19:11:55 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct 02 19:11:55 compute-0 podman[221504]: 2025-10-02 19:11:55.26850599 +0000 UTC m=+0.061172294 container create d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:11:55 compute-0 systemd[1]: Started libpod-conmon-d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e.scope.
Oct 02 19:11:55 compute-0 podman[221504]: 2025-10-02 19:11:55.244347339 +0000 UTC m=+0.037013663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a52045e5d5cf3d07728824e0cd170c25b89b8c7a091e44e8c6e0a943b085d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a52045e5d5cf3d07728824e0cd170c25b89b8c7a091e44e8c6e0a943b085d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a52045e5d5cf3d07728824e0cd170c25b89b8c7a091e44e8c6e0a943b085d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a52045e5d5cf3d07728824e0cd170c25b89b8c7a091e44e8c6e0a943b085d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:55 compute-0 podman[221504]: 2025-10-02 19:11:55.416542609 +0000 UTC m=+0.209208933 container init d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:55 compute-0 podman[221504]: 2025-10-02 19:11:55.430639263 +0000 UTC m=+0.223305607 container start d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:11:55 compute-0 podman[221504]: 2025-10-02 19:11:55.442872278 +0000 UTC m=+0.235538602 container attach d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 9.1 KiB/s wr, 137 op/s
Oct 02 19:11:55 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 02 19:11:55 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 02 19:11:56 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct 02 19:11:56 compute-0 ceph-mon[191910]: 7.16 scrub starts
Oct 02 19:11:56 compute-0 ceph-mon[191910]: 7.16 scrub ok
Oct 02 19:11:56 compute-0 ceph-mon[191910]: 6.10 scrub starts
Oct 02 19:11:56 compute-0 ceph-mon[191910]: 6.10 scrub ok
Oct 02 19:11:56 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct 02 19:11:56 compute-0 amazing_hertz[221519]: {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     "0": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "devices": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "/dev/loop3"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             ],
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_name": "ceph_lv0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_size": "21470642176",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "name": "ceph_lv0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "tags": {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.crush_device_class": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.encrypted": "0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_id": "0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.vdo": "0"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             },
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "vg_name": "ceph_vg0"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         }
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     ],
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     "1": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "devices": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "/dev/loop4"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             ],
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_name": "ceph_lv1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_size": "21470642176",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "name": "ceph_lv1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "tags": {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.crush_device_class": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.encrypted": "0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_id": "1",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.vdo": "0"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             },
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "vg_name": "ceph_vg1"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         }
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     ],
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     "2": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "devices": [
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "/dev/loop5"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             ],
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_name": "ceph_lv2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_size": "21470642176",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "name": "ceph_lv2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "tags": {
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.cluster_name": "ceph",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.crush_device_class": "",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.encrypted": "0",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osd_id": "2",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:                 "ceph.vdo": "0"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             },
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "type": "block",
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:             "vg_name": "ceph_vg2"
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:         }
Oct 02 19:11:56 compute-0 amazing_hertz[221519]:     ]
Oct 02 19:11:56 compute-0 amazing_hertz[221519]: }
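The JSON above is cephadm's capture of `ceph-volume lvm list --format json` from the throwaway `amazing_hertz` container: one key per OSD id, each entry listing the logical volume, its backing loop device, and the `ceph.*` LV tags. A minimal sketch of reducing such a report to an OSD-to-device table, assuming the block has been saved to a file named lvm_list.json (a hypothetical filename for illustration):

    import json

    # Load the `ceph-volume lvm list --format json` output captured above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # The top level is keyed by OSD id; each value is a list of LVs for that OSD.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} type={lv['type']}")

Run against the block above, this prints one line per OSD (osd.0 on /dev/loop3 through osd.2 on /dev/loop5).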
Oct 02 19:11:56 compute-0 systemd[1]: libpod-d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e.scope: Deactivated successfully.
Oct 02 19:11:56 compute-0 podman[221504]: 2025-10-02 19:11:56.287267528 +0000 UTC m=+1.079933872 container died d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-373a52045e5d5cf3d07728824e0cd170c25b89b8c7a091e44e8c6e0a943b085d-merged.mount: Deactivated successfully.
Oct 02 19:11:56 compute-0 podman[221504]: 2025-10-02 19:11:56.409590195 +0000 UTC m=+1.202256499 container remove d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:11:56 compute-0 systemd[1]: libpod-conmon-d8295f606307693efa0acc8d2339326b1e304bea0697b3394eb4592d00063b1e.scope: Deactivated successfully.
Oct 02 19:11:56 compute-0 sudo[221401]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:56 compute-0 sudo[221539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:56 compute-0 sudo[221539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:56 compute-0 sudo[221539]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:56 compute-0 sudo[221564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:11:56 compute-0 sudo[221564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:56 compute-0 sudo[221564]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:56 compute-0 sudo[221589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:56 compute-0 sudo[221589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:56 compute-0 sudo[221589]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:56 compute-0 sudo[221614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:11:56 compute-0 sudo[221614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
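The sudo line above records the exact probe the cephadm mgr module dispatches to the host: the pinned copy of cephadm under /var/lib/ceph/<fsid>/ wraps `ceph-volume ... raw list --format json` against the digest-pinned image. A hedged sketch of replaying that invocation from Python; the fsid, image digest, and cephadm path are copied from the log line, and it assumes root privileges, the cephadm copy on disk, and that ceph-volume's JSON is the only stdout output:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Same invocation as the sudo line above; needs root and the cephadm file.
    out = subprocess.run(
        ["/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.dumps(json.loads(out), indent=4))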
Oct 02 19:11:56 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 02 19:11:56 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 02 19:11:56 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 02 19:11:57 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 02 19:11:57 compute-0 ceph-mon[191910]: pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 9.1 KiB/s wr, 137 op/s
Oct 02 19:11:57 compute-0 ceph-mon[191910]: 5.e scrub starts
Oct 02 19:11:57 compute-0 ceph-mon[191910]: 5.e scrub ok
Oct 02 19:11:57 compute-0 ceph-mon[191910]: 6.12 scrub starts
Oct 02 19:11:57 compute-0 ceph-mon[191910]: 6.12 scrub ok
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.276740989 +0000 UTC m=+0.099780569 container create d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.221685918 +0000 UTC m=+0.044725598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:57 compute-0 systemd[1]: Started libpod-conmon-d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446.scope.
Oct 02 19:11:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.8 KiB/s wr, 115 op/s
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.547512235 +0000 UTC m=+0.370551885 container init d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.565800381 +0000 UTC m=+0.388839981 container start d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:57 compute-0 friendly_chatelet[221689]: 167 167
Oct 02 19:11:57 compute-0 systemd[1]: libpod-d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446.scope: Deactivated successfully.
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.621263883 +0000 UTC m=+0.444303533 container attach d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.62192225 +0000 UTC m=+0.444961860 container died d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 02 19:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-13255bf2c285d604ca1c38f8f5669e9cd78c6b3838137ffe33abfc8fb7c19787-merged.mount: Deactivated successfully.
Oct 02 19:11:57 compute-0 podman[221674]: 2025-10-02 19:11:57.817866001 +0000 UTC m=+0.640905611 container remove d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:11:57 compute-0 systemd[1]: libpod-conmon-d97ca438080571cf0c5eda9612774096db05e6947fccf3eb05cbfdc395d82446.scope: Deactivated successfully.
Oct 02 19:11:58 compute-0 podman[221713]: 2025-10-02 19:11:58.071713017 +0000 UTC m=+0.090151994 container create c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:11:58 compute-0 podman[221713]: 2025-10-02 19:11:58.019784619 +0000 UTC m=+0.038223626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:11:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 02 19:11:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 02 19:11:58 compute-0 systemd[1]: Started libpod-conmon-c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336.scope.
Oct 02 19:11:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ab581bdc04d13b3bdbfba5f248103a655f4841b6a40b3cd3ec315629a88380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ab581bdc04d13b3bdbfba5f248103a655f4841b6a40b3cd3ec315629a88380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ab581bdc04d13b3bdbfba5f248103a655f4841b6a40b3cd3ec315629a88380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ab581bdc04d13b3bdbfba5f248103a655f4841b6a40b3cd3ec315629a88380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:58 compute-0 podman[221713]: 2025-10-02 19:11:58.361836186 +0000 UTC m=+0.380275253 container init c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:11:58 compute-0 ceph-mon[191910]: 5.10 scrub starts
Oct 02 19:11:58 compute-0 ceph-mon[191910]: 5.10 scrub ok
Oct 02 19:11:58 compute-0 ceph-mon[191910]: 7.17 scrub starts
Oct 02 19:11:58 compute-0 ceph-mon[191910]: 7.17 scrub ok
Oct 02 19:11:58 compute-0 podman[221713]: 2025-10-02 19:11:58.384809706 +0000 UTC m=+0.403248713 container start c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:11:58 compute-0 podman[221713]: 2025-10-02 19:11:58.471717453 +0000 UTC m=+0.490156520 container attach c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:11:59 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct 02 19:11:59 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct 02 19:11:59 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 02 19:11:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:11:59 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 02 19:11:59 compute-0 ceph-mon[191910]: pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.8 KiB/s wr, 115 op/s
Oct 02 19:11:59 compute-0 ceph-mon[191910]: 6.16 scrub starts
Oct 02 19:11:59 compute-0 ceph-mon[191910]: 6.16 scrub ok
Oct 02 19:11:59 compute-0 ceph-mon[191910]: 6.18 scrub starts
Oct 02 19:11:59 compute-0 ceph-mon[191910]: 6.18 scrub ok
Oct 02 19:11:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.6 KiB/s wr, 93 op/s
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]: {
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_id": 1,
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "type": "bluestore"
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     },
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_id": 2,
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "type": "bluestore"
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     },
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_id": 0,
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:         "type": "bluestore"
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]:     }
Oct 02 19:11:59 compute-0 naughty_lamarr[221728]: }
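Unlike the lvm report earlier, `ceph-volume raw list` keys its JSON by OSD fsid and names the device-mapper node of each bluestore device. A small sketch inverting it into an osd_id-ordered listing, again assuming the block above has been saved to raw_list.json (hypothetical filename):

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)

    # Keyed by osd_uuid (the OSD fsid); sort by the numeric osd_id instead.
    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: device={info['device']} "
              f"type={info['type']} osd_uuid={osd_uuid}")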
Oct 02 19:11:59 compute-0 systemd[1]: libpod-c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336.scope: Deactivated successfully.
Oct 02 19:11:59 compute-0 systemd[1]: libpod-c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336.scope: Consumed 1.162s CPU time.
Oct 02 19:11:59 compute-0 podman[221713]: 2025-10-02 19:11:59.54658672 +0000 UTC m=+1.565025697 container died c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-25ab581bdc04d13b3bdbfba5f248103a655f4841b6a40b3cd3ec315629a88380-merged.mount: Deactivated successfully.
Oct 02 19:11:59 compute-0 podman[221713]: 2025-10-02 19:11:59.736215713 +0000 UTC m=+1.754654680 container remove c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:11:59 compute-0 podman[157186]: time="2025-10-02T19:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:11:59 compute-0 systemd[1]: libpod-conmon-c12df5d41c3a9a21c90adad92f3b1bcdaeaa5d95aa868380a99d6587b3c16336.scope: Deactivated successfully.
Oct 02 19:11:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:11:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6795 "" "Go-http-client/1.1"
Oct 02 19:11:59 compute-0 sudo[221614]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:11:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:11:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:11:59 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7914c580-c8a5-42f2-8571-7337943179aa does not exist
Oct 02 19:11:59 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b246ae93-25b9-4e41-b42a-5fe6a5c0291e does not exist
Oct 02 19:11:59 compute-0 sudo[221775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:11:59 compute-0 sudo[221775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:11:59 compute-0 sudo[221775]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 02 19:12:00 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 02 19:12:00 compute-0 sudo[221800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:12:00 compute-0 sudo[221800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:00 compute-0 sudo[221800]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 sudo[221825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:00 compute-0 sudo[221825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:00 compute-0 sudo[221825]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 sudo[221850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:12:00 compute-0 sudo[221850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:00 compute-0 sudo[221850]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 ceph-mon[191910]: 5.17 deep-scrub starts
Oct 02 19:12:00 compute-0 ceph-mon[191910]: 5.17 deep-scrub ok
Oct 02 19:12:00 compute-0 ceph-mon[191910]: pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.6 KiB/s wr, 93 op/s
Oct 02 19:12:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:00 compute-0 sudo[221875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:00 compute-0 sudo[221875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:00 compute-0 sudo[221875]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 sudo[221900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:12:00 compute-0 sudo[221900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:01 compute-0 podman[221992]: 2025-10-02 19:12:01.303715724 +0000 UTC m=+0.070934653 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:12:01 compute-0 ceph-mon[191910]: 7.19 scrub starts
Oct 02 19:12:01 compute-0 ceph-mon[191910]: 7.19 scrub ok
Oct 02 19:12:01 compute-0 openstack_network_exporter[159337]: ERROR   19:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:12:01 compute-0 openstack_network_exporter[159337]: ERROR   19:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:12:01 compute-0 openstack_network_exporter[159337]: ERROR   19:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:12:01 compute-0 openstack_network_exporter[159337]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:12:01 compute-0 openstack_network_exporter[159337]: ERROR   19:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:12:01 compute-0 podman[221992]: 2025-10-02 19:12:01.440488104 +0000 UTC m=+0.207707043 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:12:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.7 KiB/s wr, 115 op/s
Oct 02 19:12:01 compute-0 sudo[222069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajrylgrellntnyvmyzdhauamanqwbsm ; /usr/bin/python3'
Oct 02 19:12:01 compute-0 sudo[222069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:01 compute-0 python3[222078]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:01 compute-0 podman[222103]: 2025-10-02 19:12:01.981242015 +0000 UTC m=+0.065365966 container create a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:12:02 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 02 19:12:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 02 19:12:02 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 02 19:12:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 02 19:12:02 compute-0 systemd[1]: Started libpod-conmon-a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0.scope.
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:01.957896765 +0000 UTC m=+0.042020646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:12:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856e72fa13ac3f93191c94224529e61c90f6965a0b8e077be446a6f6ff54fe14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856e72fa13ac3f93191c94224529e61c90f6965a0b8e077be446a6f6ff54fe14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:02.130342822 +0000 UTC m=+0.214466693 container init a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:02.142149755 +0000 UTC m=+0.226273596 container start a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:02.14607607 +0000 UTC m=+0.230199931 container attach a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:12:02 compute-0 sudo[221900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 ceph-mon[191910]: pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.7 KiB/s wr, 115 op/s
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:12:02 compute-0 boring_hertz[222137]: could not fetch user info: no user info saved
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 806abeac-73df-4952-8028-b21e14bc2c0e does not exist
Oct 02 19:12:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5bb91cb0-9d10-400a-ac48-4bdf2169b086 does not exist
Oct 02 19:12:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c33b4444-de1f-4153-827a-58f6f8951c6a does not exist
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:12:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:12:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:12:02 compute-0 sudo[222269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:02 compute-0 sudo[222269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:02 compute-0 systemd[1]: libpod-a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0.scope: Deactivated successfully.
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:02.547240637 +0000 UTC m=+0.631364478 container died a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:12:02 compute-0 sudo[222269]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-856e72fa13ac3f93191c94224529e61c90f6965a0b8e077be446a6f6ff54fe14-merged.mount: Deactivated successfully.
Oct 02 19:12:02 compute-0 podman[222103]: 2025-10-02 19:12:02.611534553 +0000 UTC m=+0.695658384 container remove a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0 (image=quay.io/ceph/ceph:v18, name=boring_hertz, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:12:02 compute-0 systemd[1]: libpod-conmon-a2c8d86744087abcf9b20c1ca5048d9c3547a8823de215672531b3c025baeed0.scope: Deactivated successfully.
Oct 02 19:12:02 compute-0 sudo[222069]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 sudo[222301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:12:02 compute-0 sudo[222301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:02 compute-0 sudo[222301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 sudo[222329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:02 compute-0 sudo[222329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:02 compute-0 sudo[222329]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 sudo[222354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:12:02 compute-0 sudo[222354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:02 compute-0 sudo[222402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netlzzbvnwrmeyczrgccclommxhjzvxi ; /usr/bin/python3'
Oct 02 19:12:02 compute-0 sudo[222402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:03 compute-0 python3[222404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:03 compute-0 podman[222424]: 2025-10-02 19:12:03.087346541 +0000 UTC m=+0.058685468 container create 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:12:03 compute-0 systemd[1]: Started libpod-conmon-3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e.scope.
Oct 02 19:12:03 compute-0 podman[222424]: 2025-10-02 19:12:03.06394287 +0000 UTC m=+0.035281807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 02 19:12:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58df147bab03e3c664538d059d91ed07cb2cc431fa1ecd92df3ec21798324f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58df147bab03e3c664538d059d91ed07cb2cc431fa1ecd92df3ec21798324f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Oct 02 19:12:03 compute-0 podman[222424]: 2025-10-02 19:12:03.212311318 +0000 UTC m=+0.183650245 container init 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:12:03 compute-0 podman[222424]: 2025-10-02 19:12:03.222426666 +0000 UTC m=+0.193765593 container start 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Oct 02 19:12:03 compute-0 podman[222424]: 2025-10-02 19:12:03.226839033 +0000 UTC m=+0.198177980 container attach 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.335490387 +0000 UTC m=+0.064271037 container create b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:12:03 compute-0 systemd[1]: Started libpod-conmon-b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7.scope.
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.307985307 +0000 UTC m=+0.036765967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:12:03
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes']
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:12:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 7.1d scrub starts
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 5.1b scrub starts
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 7.1d scrub ok
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 5.1b scrub ok
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:12:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 6.19 deep-scrub starts
Oct 02 19:12:03 compute-0 ceph-mon[191910]: 6.19 deep-scrub ok
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 103 op/s
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.453449198 +0000 UTC m=+0.182229858 container init b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.463273948 +0000 UTC m=+0.192054588 container start b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:03 compute-0 focused_mcnulty[222538]: 167 167
Oct 02 19:12:03 compute-0 systemd[1]: libpod-b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7.scope: Deactivated successfully.
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.555914637 +0000 UTC m=+0.284695307 container attach b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:12:03 compute-0 podman[222460]: 2025-10-02 19:12:03.556446261 +0000 UTC m=+0.285226921 container died b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:12:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:12:03 compute-0 quizzical_moser[222456]: {
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "user_id": "openstack",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "display_name": "openstack",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "email": "",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "suspended": 0,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "max_buckets": 1000,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "subusers": [],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "keys": [
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         {
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:             "user": "openstack",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:             "access_key": "JP5TP7J0PAYSCA16N581",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:             "secret_key": "Wuo6V4WKZwSWEW4ZrSaILCfQqnsguCiW97hMqB7u"
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         }
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     ],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "swift_keys": [],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "caps": [],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "op_mask": "read, write, delete",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "default_placement": "",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "default_storage_class": "",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "placement_tags": [],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "bucket_quota": {
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "enabled": false,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "check_on_raw": false,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_size": -1,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_size_kb": 0,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_objects": -1
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     },
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "user_quota": {
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "enabled": false,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "check_on_raw": false,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_size": -1,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_size_kb": 0,
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:         "max_objects": -1
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     },
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "temp_url_keys": [],
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "type": "rgw",
Oct 02 19:12:03 compute-0 quizzical_moser[222456]:     "mfa_ids": []
Oct 02 19:12:03 compute-0 quizzical_moser[222456]: }
Oct 02 19:12:03 compute-0 quizzical_moser[222456]: 
Oct 02 19:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-589de31be2cdcd4323fac7f18351ce27efd44a74f9da29fbc19ed01c67b5c943-merged.mount: Deactivated successfully.
Oct 02 19:12:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:04 compute-0 podman[222460]: 2025-10-02 19:12:04.247043011 +0000 UTC m=+0.975823691 container remove b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:12:04 compute-0 systemd[1]: libpod-conmon-b29e69c1fb716bf654963e2784e4ccfe6e8b42b1efa9f59762c0e2f38370c9d7.scope: Deactivated successfully.
Oct 02 19:12:04 compute-0 ceph-mon[191910]: pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 103 op/s
Oct 02 19:12:04 compute-0 podman[222579]: 2025-10-02 19:12:04.510235526 +0000 UTC m=+0.098749342 container create 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 19:12:04 compute-0 systemd[1]: libpod-3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e.scope: Deactivated successfully.
Oct 02 19:12:04 compute-0 podman[222424]: 2025-10-02 19:12:04.519865321 +0000 UTC m=+1.491204278 container died 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:12:04 compute-0 podman[222579]: 2025-10-02 19:12:04.444764958 +0000 UTC m=+0.033278784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58df147bab03e3c664538d059d91ed07cb2cc431fa1ecd92df3ec21798324f5-merged.mount: Deactivated successfully.
Oct 02 19:12:05 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 02 19:12:05 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 02 19:12:05 compute-0 podman[222424]: 2025-10-02 19:12:05.080948492 +0000 UTC m=+2.052287449 container remove 3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e (image=quay.io/ceph/ceph:v18, name=quizzical_moser, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:12:05 compute-0 sudo[222402]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:05 compute-0 systemd[1]: libpod-conmon-3aee157fd3d8832cda7ad19807d10d018a5b8c7555d326905c4dfd932e18bf2e.scope: Deactivated successfully.
Oct 02 19:12:05 compute-0 systemd[1]: Started libpod-conmon-21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f.scope.
Oct 02 19:12:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 91 op/s
Oct 02 19:12:05 compute-0 podman[222579]: 2025-10-02 19:12:05.452599075 +0000 UTC m=+1.041113111 container init 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:12:05 compute-0 podman[222579]: 2025-10-02 19:12:05.47086621 +0000 UTC m=+1.059380016 container start 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:12:05 compute-0 podman[222579]: 2025-10-02 19:12:05.51003916 +0000 UTC m=+1.098552996 container attach 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:05 compute-0 ceph-mon[191910]: 5.1c scrub starts
Oct 02 19:12:05 compute-0 ceph-mon[191910]: 5.1c scrub ok
Oct 02 19:12:06 compute-0 podman[222626]: 2025-10-02 19:12:06.698916743 +0000 UTC m=+0.108354137 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct 02 19:12:06 compute-0 podman[222628]: 2025-10-02 19:12:06.701133682 +0000 UTC m=+0.116171424 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:12:06 compute-0 optimistic_mcnulty[222605]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:12:06 compute-0 optimistic_mcnulty[222605]: --> relative data size: 1.0
Oct 02 19:12:06 compute-0 optimistic_mcnulty[222605]: --> All data devices are unavailable
Oct 02 19:12:06 compute-0 systemd[1]: libpod-21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f.scope: Deactivated successfully.
Oct 02 19:12:06 compute-0 systemd[1]: libpod-21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f.scope: Consumed 1.218s CPU time.
Oct 02 19:12:06 compute-0 podman[222579]: 2025-10-02 19:12:06.748422987 +0000 UTC m=+2.336936753 container died 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:12:06 compute-0 ceph-mon[191910]: pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 91 op/s
Oct 02 19:12:06 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 02 19:12:07 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 02 19:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d196ceeee257690667353f4c32f44362d4979c834548fa2370a5d759d6abaa26-merged.mount: Deactivated successfully.
Oct 02 19:12:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 170 B/s wr, 29 op/s
Oct 02 19:12:07 compute-0 podman[222579]: 2025-10-02 19:12:07.460888077 +0000 UTC m=+3.049401903 container remove 21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:12:07 compute-0 systemd[1]: libpod-conmon-21e1bd49d49880e1ef1039b5d223d6305ada40c91baeb15b0ca98a2961aec26f.scope: Deactivated successfully.
Oct 02 19:12:07 compute-0 sudo[222354]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:07 compute-0 sudo[222688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:07 compute-0 sudo[222688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:07 compute-0 sudo[222688]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:07 compute-0 sudo[222713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:12:07 compute-0 sudo[222713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:07 compute-0 sudo[222713]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:07 compute-0 sudo[222738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:07 compute-0 sudo[222738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:07 compute-0 sudo[222738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:07 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 02 19:12:07 compute-0 ceph-mon[191910]: 7.1e scrub starts
Oct 02 19:12:07 compute-0 ceph-mon[191910]: 7.1e scrub ok
Oct 02 19:12:07 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 02 19:12:08 compute-0 sudo[222763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:12:08 compute-0 sudo[222763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.578921349 +0000 UTC m=+0.056179931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.690780676 +0000 UTC m=+0.168039218 container create e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:08 compute-0 systemd[1]: Started libpod-conmon-e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504.scope.
Oct 02 19:12:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.82770027 +0000 UTC m=+0.304958852 container init e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.846501639 +0000 UTC m=+0.323760171 container start e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.852514389 +0000 UTC m=+0.329773001 container attach e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 19:12:08 compute-0 angry_roentgen[222842]: 167 167
Oct 02 19:12:08 compute-0 systemd[1]: libpod-e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504.scope: Deactivated successfully.
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.860947263 +0000 UTC m=+0.338205795 container died e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-79fb3f8a02924b59347d9acd65dd011daa0820a21903160709148f1b10185f7f-merged.mount: Deactivated successfully.
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:12:08 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Oct 02 19:12:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:12:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:08 compute-0 podman[222826]: 2025-10-02 19:12:08.940069632 +0000 UTC m=+0.417328144 container remove e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:12:08 compute-0 systemd[1]: libpod-conmon-e4e0f09848011ecfc798660a56d7e1a97ad0b277f8260feb804df23ae8301504.scope: Deactivated successfully.
Oct 02 19:12:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 02 19:12:08 compute-0 ceph-mon[191910]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 170 B/s wr, 29 op/s
Oct 02 19:12:08 compute-0 ceph-mon[191910]: 5.1d scrub starts
Oct 02 19:12:08 compute-0 ceph-mon[191910]: 5.1d scrub ok
Oct 02 19:12:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 02 19:12:09 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 02 19:12:09 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev c542669c-6e98-430b-8830-5504551ec714 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 19:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:12:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:09 compute-0 podman[222864]: 2025-10-02 19:12:09.249858864 +0000 UTC m=+0.098291910 container create 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:12:09 compute-0 podman[222864]: 2025-10-02 19:12:09.218677917 +0000 UTC m=+0.067111053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:09 compute-0 systemd[1]: Started libpod-conmon-26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7.scope.
Oct 02 19:12:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b80901d74eea712709fc163498bd7bf70ab78f0f54bb0c139b8cf3d6e56b5d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b80901d74eea712709fc163498bd7bf70ab78f0f54bb0c139b8cf3d6e56b5d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b80901d74eea712709fc163498bd7bf70ab78f0f54bb0c139b8cf3d6e56b5d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b80901d74eea712709fc163498bd7bf70ab78f0f54bb0c139b8cf3d6e56b5d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:09 compute-0 podman[222864]: 2025-10-02 19:12:09.39097704 +0000 UTC m=+0.239410176 container init 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:12:09 compute-0 podman[222864]: 2025-10-02 19:12:09.410269102 +0000 UTC m=+0.258702178 container start 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:12:09 compute-0 podman[222864]: 2025-10-02 19:12:09.417494233 +0000 UTC m=+0.265927319 container attach 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:12:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 204 B/s wr, 36 op/s
Oct 02 19:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 02 19:12:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 02 19:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 02 19:12:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:09 compute-0 ceph-mon[191910]: osdmap e50: 3 total, 3 up, 3 in
Oct 02 19:12:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 02 19:12:10 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 02 19:12:10 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 9b1680a4-9257-4d2f-8d09-4342dbdc6213 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 19:12:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:12:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:10 compute-0 agitated_franklin[222879]: {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     "0": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "devices": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "/dev/loop3"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             ],
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_name": "ceph_lv0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_size": "21470642176",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "name": "ceph_lv0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "tags": {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_name": "ceph",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.crush_device_class": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.encrypted": "0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_id": "0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.vdo": "0"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             },
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "vg_name": "ceph_vg0"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         }
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     ],
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     "1": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "devices": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "/dev/loop4"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             ],
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_name": "ceph_lv1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_size": "21470642176",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "name": "ceph_lv1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "tags": {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_name": "ceph",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.crush_device_class": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.encrypted": "0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_id": "1",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.vdo": "0"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             },
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "vg_name": "ceph_vg1"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         }
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     ],
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     "2": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "devices": [
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "/dev/loop5"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             ],
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_name": "ceph_lv2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_size": "21470642176",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "name": "ceph_lv2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "tags": {
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.cluster_name": "ceph",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.crush_device_class": "",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.encrypted": "0",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osd_id": "2",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:                 "ceph.vdo": "0"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             },
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "type": "block",
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:             "vg_name": "ceph_vg2"
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:         }
Oct 02 19:12:10 compute-0 agitated_franklin[222879]:     ]
Oct 02 19:12:10 compute-0 agitated_franklin[222879]: }
Oct 02 19:12:10 compute-0 systemd[1]: libpod-26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7.scope: Deactivated successfully.
Oct 02 19:12:10 compute-0 podman[222888]: 2025-10-02 19:12:10.265357296 +0000 UTC m=+0.044695817 container died 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b80901d74eea712709fc163498bd7bf70ab78f0f54bb0c139b8cf3d6e56b5d3-merged.mount: Deactivated successfully.
Oct 02 19:12:10 compute-0 podman[222888]: 2025-10-02 19:12:10.352891019 +0000 UTC m=+0.132229580 container remove 26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:12:10 compute-0 systemd[1]: libpod-conmon-26b347c18bac04c9f70fc066522483348a842561b9d5cb993242eaeb0a0f64e7.scope: Deactivated successfully.
Oct 02 19:12:10 compute-0 sudo[222763]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:10 compute-0 sudo[222913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:10 compute-0 sudo[222913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:10 compute-0 sudo[222913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:10 compute-0 podman[222901]: 2025-10-02 19:12:10.560425407 +0000 UTC m=+0.135649322 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Oct 02 19:12:10 compute-0 podman[222902]: 2025-10-02 19:12:10.571642134 +0000 UTC m=+0.144533937 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 19:12:10 compute-0 sudo[222963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:12:10 compute-0 sudo[222963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:10 compute-0 sudo[222963]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 51 pg[8.0( v 42'4 (0'0,42'4] local-lis/les=41/42 n=4 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=14.818240166s) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 42'3 mlcod 42'3 active pruub 121.679244995s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:10 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 51 pg[8.0( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=51 pruub=14.818240166s) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 42'3 mlcod 0'0 unknown pruub 121.679244995s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:10 compute-0 sudo[222996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:10 compute-0 sudo[222996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:10 compute-0 sudo[222996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:10 compute-0 sudo[223021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:12:10 compute-0 sudo[223021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 02 19:12:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 02 19:12:11 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 02 19:12:11 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 92b2ca3a-72cb-460f-84e4-4bb2bb4dce95 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 19:12:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 02 19:12:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:11 compute-0 ceph-mon[191910]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 204 B/s wr, 36 op/s
Oct 02 19:12:11 compute-0 ceph-mon[191910]: 2.15 scrub starts
Oct 02 19:12:11 compute-0 ceph-mon[191910]: 2.15 scrub ok
Oct 02 19:12:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:11 compute-0 ceph-mon[191910]: osdmap e51: 3 total, 3 up, 3 in
Oct 02 19:12:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.11( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1d( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1e( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.18( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1a( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.19( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1b( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.4( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1c( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.5( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.6( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.7( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.9( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.b( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.a( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.8( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.e( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.d( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.c( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.3( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.2( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1( v 42'4 (0'0,42'4] local-lis/les=41/42 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.10( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.17( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.16( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.15( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.14( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.13( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.12( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=41/42 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.19( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.7( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.5( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.0( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 42'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.8( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1e( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.1( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.3( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.16( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.17( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.13( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.a( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 52 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=41/41 les/c/f=42/42/0 sis=51) [1] r=0 lpr=51 pi=[41,51)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 02 19:12:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.305288284 +0000 UTC m=+0.061454062 container create 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:12:11 compute-0 systemd[1]: Started libpod-conmon-75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d.scope.
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.281082152 +0000 UTC m=+0.037247970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.413224058 +0000 UTC m=+0.169389836 container init 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.421515068 +0000 UTC m=+0.177680846 container start 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.426247364 +0000 UTC m=+0.182413172 container attach 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:12:11 compute-0 wonderful_lewin[223101]: 167 167
Oct 02 19:12:11 compute-0 systemd[1]: libpod-75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d.scope: Deactivated successfully.
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.42909436 +0000 UTC m=+0.185260128 container died 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:12:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v137: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb28abe8f15021b3c266b20fba2f495bf46d29b3e54fec427338669e2c8ebe9-merged.mount: Deactivated successfully.
Oct 02 19:12:11 compute-0 podman[223085]: 2025-10-02 19:12:11.478793129 +0000 UTC m=+0.234958897 container remove 75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:12:11 compute-0 systemd[1]: libpod-conmon-75078edc2c733c00d9237d5839e3a179e07d7615eae8484f53b6b9cb8094704d.scope: Deactivated successfully.
Oct 02 19:12:11 compute-0 podman[223123]: 2025-10-02 19:12:11.752612396 +0000 UTC m=+0.091575992 container create bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:12:11 compute-0 podman[223123]: 2025-10-02 19:12:11.718674895 +0000 UTC m=+0.057638551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:12:11 compute-0 systemd[1]: Started libpod-conmon-bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb.scope.
Oct 02 19:12:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a3aea0c8d6d9549f282772e49184a7e27def24f7bf91b5ef523fc8aeb78f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a3aea0c8d6d9549f282772e49184a7e27def24f7bf91b5ef523fc8aeb78f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a3aea0c8d6d9549f282772e49184a7e27def24f7bf91b5ef523fc8aeb78f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a3aea0c8d6d9549f282772e49184a7e27def24f7bf91b5ef523fc8aeb78f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:12:11 compute-0 podman[223123]: 2025-10-02 19:12:11.902626727 +0000 UTC m=+0.241590393 container init bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:12:11 compute-0 podman[223123]: 2025-10-02 19:12:11.921050396 +0000 UTC m=+0.260013982 container start bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:12:11 compute-0 podman[223123]: 2025-10-02 19:12:11.92911056 +0000 UTC m=+0.268074226 container attach bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:12:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 02 19:12:12 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:12 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:12 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 02 19:12:12 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] update: starting ev 9600a5bd-cd74-4083-81e8-2f2e1e952ca7 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 19:12:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:12 compute-0 ceph-mon[191910]: osdmap e52: 3 total, 3 up, 3 in
Oct 02 19:12:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 19:12:12 compute-0 ceph-mon[191910]: 6.1a scrub starts
Oct 02 19:12:12 compute-0 ceph-mon[191910]: 6.1a scrub ok
Oct 02 19:12:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:12 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 53 pg[10.0( v 49'64 (0'0,49'64] local-lis/les=45/46 n=8 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.678304672s) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 49'63 mlcod 49'63 active pruub 111.117713928s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev c542669c-6e98-430b-8830-5504551ec714 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event c542669c-6e98-430b-8830-5504551ec714 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 9b1680a4-9257-4d2f-8d09-4342dbdc6213 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 9b1680a4-9257-4d2f-8d09-4342dbdc6213 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 92b2ca3a-72cb-460f-84e4-4bb2bb4dce95 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 92b2ca3a-72cb-460f-84e4-4bb2bb4dce95 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] complete: finished ev 9600a5bd-cd74-4083-81e8-2f2e1e952ca7 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 02 19:12:12 compute-0 ceph-mgr[192222]: [progress INFO root] Completed event 9600a5bd-cd74-4083-81e8-2f2e1e952ca7 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 02 19:12:12 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 53 pg[10.0( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.678304672s) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 49'63 mlcod 0'0 unknown pruub 111.117713928s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:12 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.1b deep-scrub starts
Oct 02 19:12:12 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 6.1b deep-scrub ok
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 02 19:12:13 compute-0 ceph-mon[191910]: pgmap v137: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:13 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 19:12:13 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:13 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:13 compute-0 ceph-mon[191910]: osdmap e53: 3 total, 3 up, 3 in
Oct 02 19:12:13 compute-0 ceph-mon[191910]: 6.1b deep-scrub starts
Oct 02 19:12:13 compute-0 ceph-mon[191910]: 6.1b deep-scrub ok
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 02 19:12:13 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 53 pg[9.0( v 49'389 (0'0,49'389] local-lis/les=43/44 n=177 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=14.601119995s) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 49'388 mlcod 49'388 active pruub 123.923065186s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1e( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1b( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.d( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.13( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.b( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.a( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.11( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.12( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.10( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1f( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1d( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1c( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.19( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.18( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1a( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.6( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.5( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.4( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.8( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.7( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.f( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.9( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.c( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.e( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1( v 49'64 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.0( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=14.601119995s) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 49'388 mlcod 0'0 unknown pruub 123.923065186s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.2( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.3( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.14( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.15( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.16( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.17( v 49'64 lc 0'0 (0'0,49'64] local-lis/les=45/46 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.2( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.3( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.4( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.5( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.6( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.7( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.8( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.9( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.a( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.b( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.c( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.e( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.d( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.f( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.11( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.12( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.13( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.10( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.14( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.16( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.17( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.18( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.15( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.19( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1a( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1b( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1c( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1d( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1e( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 54 pg[9.1f( v 49'389 lc 0'0 (0'0,49'389] local-lis/les=43/44 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1e( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1b( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.13( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.12( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.b( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.11( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.10( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1f( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.19( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1a( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.18( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.6( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.5( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1c( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.7( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1d( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.f( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.c( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.9( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.0( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 49'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.1( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.e( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.3( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.4( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.d( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.2( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.a( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.8( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.14( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.16( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.17( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 54 pg[10.15( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=45/45 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[45,53)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]: {
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_id": 1,
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "type": "bluestore"
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     },
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_id": 2,
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "type": "bluestore"
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     },
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_id": 0,
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:         "type": "bluestore"
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]:     }
Oct 02 19:12:13 compute-0 goofy_chatterjee[223139]: }
Oct 02 19:12:13 compute-0 podman[223123]: 2025-10-02 19:12:13.208794072 +0000 UTC m=+1.547757688 container died bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:12:13 compute-0 systemd[1]: libpod-bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb.scope: Deactivated successfully.
Oct 02 19:12:13 compute-0 systemd[1]: libpod-bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb.scope: Consumed 1.280s CPU time.
Oct 02 19:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3a3aea0c8d6d9549f282772e49184a7e27def24f7bf91b5ef523fc8aeb78f0-merged.mount: Deactivated successfully.
Oct 02 19:12:13 compute-0 podman[223123]: 2025-10-02 19:12:13.301937644 +0000 UTC m=+1.640901210 container remove bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:12:13 compute-0 systemd[1]: libpod-conmon-bde284ac2a3ebe97689d0c696e69d71fcc5dd976f51d9b1dc9f03eef239a6fcb.scope: Deactivated successfully.
Oct 02 19:12:13 compute-0 sudo[223021]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:12:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:12:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7b0c3400-772b-4e8e-a366-0b49727bd5c8 does not exist
Oct 02 19:12:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4b9d1b5b-3b21-4f55-8a44-e0e68a2d97fe does not exist
Oct 02 19:12:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v140: 290 pgs: 1 peering, 93 unknown, 196 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:13 compute-0 sudo[223184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:12:13 compute-0 sudo[223184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:13 compute-0 sudo[223184]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:13 compute-0 sudo[223209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:12:13 compute-0 sudo[223209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:12:13 compute-0 sudo[223209]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:13 compute-0 ceph-mgr[192222]: [progress INFO root] Writing back 16 completed events
Oct 02 19:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 02 19:12:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 02 19:12:14 compute-0 ceph-mon[191910]: osdmap e54: 3 total, 3 up, 3 in
Oct 02 19:12:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:12:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 02 19:12:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[11.0( v 49'2 (0'0,49'2] local-lis/les=47/48 n=2 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=9.840505600s) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 49'1 mlcod 49'1 active pruub 120.178138733s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[11.0( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=9.840505600s) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 49'1 mlcod 0'0 unknown pruub 120.178138733s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.0( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 49'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.2( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.14( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.a( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.4( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1a( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.12( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.10( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 55 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=49'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:14 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts
Oct 02 19:12:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok
Oct 02 19:12:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 02 19:12:15 compute-0 ceph-mon[191910]: pgmap v140: 290 pgs: 1 peering, 93 unknown, 196 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 19:12:15 compute-0 ceph-mon[191910]: osdmap e55: 3 total, 3 up, 3 in
Oct 02 19:12:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 02 19:12:15 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.16( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.15( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.17( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.2( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.14( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.13( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.f( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.e( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.d( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.9( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.c( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.b( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.8( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.a( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.3( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.4( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.5( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.6( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.18( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.7( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1a( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1b( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1c( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1d( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1e( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1f( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.10( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.12( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.19( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.11( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.16( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.15( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.2( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.13( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.0( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 49'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.c( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.9( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.a( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.5( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.6( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.7( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.18( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.d( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1d( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.1f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.10( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 56 pg[11.11( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[47,55)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v143: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:15 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 02 19:12:15 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 02 19:12:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 02 19:12:16 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 02 19:12:16 compute-0 ceph-mon[191910]: 5.11 deep-scrub starts
Oct 02 19:12:16 compute-0 ceph-mon[191910]: 5.11 deep-scrub ok
Oct 02 19:12:16 compute-0 ceph-mon[191910]: osdmap e56: 3 total, 3 up, 3 in
Oct 02 19:12:16 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 02 19:12:16 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 02 19:12:16 compute-0 podman[223234]: 2025-10-02 19:12:16.661167797 +0000 UTC m=+0.088544431 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:12:16 compute-0 podman[223235]: 2025-10-02 19:12:16.679585816 +0000 UTC m=+0.096002569 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:12:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 02 19:12:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 02 19:12:17 compute-0 ceph-mon[191910]: pgmap v143: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.1f scrub starts
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.1f scrub ok
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.12 scrub starts
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.12 scrub ok
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.1e scrub starts
Oct 02 19:12:17 compute-0 ceph-mon[191910]: 5.1e scrub ok
Oct 02 19:12:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 02 19:12:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 19:12:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:12:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:17 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 02 19:12:17 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 02 19:12:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 02 19:12:18 compute-0 ceph-mon[191910]: 5.13 scrub starts
Oct 02 19:12:18 compute-0 ceph-mon[191910]: 5.13 scrub ok
Oct 02 19:12:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 19:12:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:12:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 19:12:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 02 19:12:18 compute-0 sshd-session[223276]: Accepted publickey for zuul from 192.168.122.30 port 60138 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:12:18 compute-0 systemd-logind[793]: New session 41 of user zuul.
Oct 02 19:12:18 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 02 19:12:18 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.d( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867646217s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 49'64 active pruub 118.517089844s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.13( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867570877s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517051697s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.12( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867641449s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517211914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1e( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.854843140s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.504425049s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.13( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867453575s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517051697s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.12( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867585182s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517211914s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1e( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.854754448s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.504425049s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.b( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867434502s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517204285s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.d( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.866907120s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 0'0 unknown NOTIFY pruub 118.517089844s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.11( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.866058350s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517250061s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.10( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.866084099s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517295837s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.11( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.866001129s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517250061s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.10( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.866027832s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517295837s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1a( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.865605354s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517440796s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1a( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.865560532s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517440796s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.19( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.865187645s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517410278s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.19( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.865143776s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517410278s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.b( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.867349625s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517204285s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.6( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.864868164s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517471313s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.6( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.864824295s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517471313s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.4( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863974571s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517562866s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.4( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863928795s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517562866s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.8( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.864850044s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.518676758s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.7( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863616943s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517555237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.7( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863578796s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517555237s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.f( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863384247s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517677307s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.e( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863367081s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 49'64 active pruub 118.517791748s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.f( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863244057s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517677307s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.e( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.863328934s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 0'0 unknown NOTIFY pruub 118.517791748s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.8( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.864407539s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.518676758s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.9( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.862605095s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 49'64 active pruub 118.517730713s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.9( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.862540245s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 0'0 unknown NOTIFY pruub 118.517730713s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.862195969s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517784119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.1( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.862152100s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517784119s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.14( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.861758232s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 49'64 active pruub 118.517845154s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.14( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.861702919s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 0'0 unknown NOTIFY pruub 118.517845154s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.15( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.862000465s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 49'64 active pruub 118.518447876s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.15( v 54'65 (0'0,54'65] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.861946106s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 49'64 mlcod 0'0 unknown NOTIFY pruub 118.518447876s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.16( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.860854149s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.518005371s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.17( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.861130714s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.518310547s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.2( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.860617638s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active pruub 118.517837524s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.16( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.860807419s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.518005371s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.2( v 49'64 (0'0,49'64] local-lis/les=53/54 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.860367775s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.517837524s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[10.17( v 49'64 (0'0,49'64] local-lis/les=53/54 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.860539436s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.518310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 sshd-session[223276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.850613594s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388633728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.850583076s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388633728s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.741431236s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279670715s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.741411209s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279670715s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.808773994s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.347114563s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.808735847s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.347114563s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.740327835s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279655457s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.740304947s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279655457s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.15( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842648506s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.382026672s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.15( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842608452s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.382026672s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.807594299s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.347122192s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.807575226s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.347122192s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.739778519s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279594421s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.2( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842469215s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.382347107s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815967560s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.355842590s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.2( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842445374s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.382347107s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.739727020s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279594421s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815931320s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.355842590s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848751068s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388824463s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848731995s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388824463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.739338875s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279548645s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815795898s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356018066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815778732s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356018066s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.739296913s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279548645s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738966942s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279533386s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848113060s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388671875s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738945007s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279533386s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848265648s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388870239s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848220825s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388870239s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815279961s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.355987549s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.815224648s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.355987549s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848046303s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388923645s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.848020554s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388923645s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738511086s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279487610s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738375664s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279487610s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738282204s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279502869s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.738251686s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279502869s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.d( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847754478s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389137268s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814595222s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.355987549s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.d( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847720146s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389137268s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814549446s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.355987549s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814564705s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356063843s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814535141s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356063843s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847845078s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389259338s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.9( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847170830s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.388969421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847456932s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389259338s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.9( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847141266s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388969421s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814088821s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356025696s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.737463951s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279449463s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.814043999s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356025696s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.737175941s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279449463s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846767426s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389038086s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.737143517s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279449463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846720695s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389038086s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.737024307s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279449463s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846582413s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389083862s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846553802s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389083862s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846669197s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389259338s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846647263s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389259338s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813481331s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356101990s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813439369s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356101990s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.736605644s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279396057s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.736586571s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279396057s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813407898s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356307983s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813390732s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356307983s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.6( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846382141s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389358521s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.6( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.846340179s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389358521s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.736260414s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279365540s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=51/52 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.736240387s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279365540s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813073158s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356407166s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.735902786s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279335022s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.813024521s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356407166s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.735871315s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279335022s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845797539s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389411926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845770836s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389411926s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845818520s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389656067s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845788956s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389656067s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.812554359s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356491089s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.812504768s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356491089s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.735411644s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279472351s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.735184669s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279258728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.18( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845481873s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389686584s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.845472336s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389709473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.847547531s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.388671875s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.734869957s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279472351s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.734197617s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279258728s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.18( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.844442368s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389686584s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.844212532s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389709473s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.733342171s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279235840s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.733310699s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279235840s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.733081818s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279197693s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.843643188s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389831543s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.810365677s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356567383s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.810311317s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356536865s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.810308456s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356567383s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.732616425s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279197693s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.732804298s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279434204s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.809909821s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356536865s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.10( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.843087196s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389877319s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1f( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.843118668s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389831543s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.11( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.843063354s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389968872s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842799187s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389816284s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.732327461s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279434204s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.10( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.843055725s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389877319s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.732297897s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279708862s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.842479706s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389900208s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.809154510s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356620789s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.11( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.841963768s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389968872s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.841685295s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389816284s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.722588539s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.271003723s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.841472626s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 127.389930725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.841447830s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389930725s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.722554207s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.271003723s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.731132507s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279708862s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.730313301s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active pruub 123.279342651s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=51/52 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57 pruub=8.730265617s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.279342651s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.807301521s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 126.356613159s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.807264328s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356613159s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.841198921s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.389900208s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 57 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=11.807605743s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.356620789s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.e( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.1d( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[8.1a( empty local-lis/les=0/0 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 02 19:12:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 02 19:12:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 02 19:12:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 02 19:12:19 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-mon[191910]: pgmap v144: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:19 compute-0 ceph-mon[191910]: 7.5 scrub starts
Oct 02 19:12:19 compute-0 ceph-mon[191910]: 7.5 scrub ok
Oct 02 19:12:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 19:12:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:12:19 compute-0 ceph-mon[191910]: osdmap e57: 3 total, 3 up, 3 in
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.1d( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.1( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.14( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.16( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.18( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.1f( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.1e( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.1( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=57/58 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.e( v 54'65 lc 49'48 (0'0,54'65] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.c( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.d( v 54'65 lc 49'50 (0'0,54'65] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.f( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.e( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.17( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.1a( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.6( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.7( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.4( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.6( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.15( v 54'65 lc 49'46 (0'0,54'65] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.8( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.b( v 42'4 lc 0'0 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[10.9( v 54'65 lc 49'56 (0'0,54'65] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.10( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[11.10( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.1c( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.b( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.12( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.1f( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.11( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.11( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.18( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.1b( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 58 pg[8.9( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.b( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.14( v 54'65 lc 49'54 (0'0,54'65] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=54'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.13( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.11( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.10( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.12( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.f( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.2( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.6( v 49'64 (0'0,49'64] local-lis/les=57/58 n=1 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.1a( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 58 pg[10.19( v 49'64 (0'0,49'64] local-lis/les=57/58 n=0 ec=53/45 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=49'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.9( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.d( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.4( v 42'4 (0'0,42'4] local-lis/les=57/58 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.d( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.2( v 42'4 (0'0,42'4] local-lis/les=57/58 n=1 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.15( v 49'2 (0'0,49'2] local-lis/les=57/58 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[11.2( v 49'2 (0'0,49'2] local-lis/les=57/58 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 58 pg[8.15( v 42'4 (0'0,42'4] local-lis/les=57/58 n=0 ec=51/41 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=42'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:19 compute-0 podman[223403]: 2025-10-02 19:12:19.382654743 +0000 UTC m=+0.088680285 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible)
Oct 02 19:12:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:19 compute-0 python3.9[223443]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:12:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 02 19:12:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 02 19:12:20 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 02 19:12:20 compute-0 ceph-mon[191910]: 7.8 scrub starts
Oct 02 19:12:20 compute-0 ceph-mon[191910]: 7.8 scrub ok
Oct 02 19:12:20 compute-0 ceph-mon[191910]: osdmap e58: 3 total, 3 up, 3 in
Oct 02 19:12:20 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 02 19:12:20 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 59 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 02 19:12:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 02 19:12:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 02 19:12:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 02 19:12:21 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.143390656s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631851196s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.143326759s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631851196s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.143123627s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632095337s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.142713547s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631713867s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.142897606s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631973267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.142652512s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631713867s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.143034935s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632095337s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.142852783s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631973267s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141754150s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631790161s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141691208s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631790161s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141318321s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631607056s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141229630s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631607056s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.140822411s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631225586s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.140781403s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631225586s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141571999s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632171631s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141511917s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632171631s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141123772s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631942749s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.141072273s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631942749s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.140974045s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631912231s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.140902519s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631912231s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.139701843s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.630798340s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 60 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.139652252s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.630798340s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:21 compute-0 ceph-mon[191910]: pgmap v147: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:21 compute-0 ceph-mon[191910]: osdmap e59: 3 total, 3 up, 3 in
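[annotation] The osd.1 lines above record one peering-interval change: the acting set for each pool-9 PG moved from [1] to [0], so osd.1 drops from role 0 to role -1 and its PG state machine goes Stray, while osd.0 (in the lines that follow) picks the same PGs up as Primary. A minimal sketch for pulling those transitions out of a journal like this one, assuming the message format shown above (field names are mine, not Ceph's):

```
import re
import sys

# Assumed format: journalctl lines like the ceph-osd entries above.
PEERING_RE = re.compile(
    r"(?P<osd>osd\.\d+) pg_epoch: (?P<epoch>\d+) "
    r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*"
    r"start_peering_interval .* role (?P<old>-?\d+) -> (?P<new>-?\d+)"
)

for line in sys.stdin:
    m = PEERING_RE.search(line)
    if m:
        # role -1 means the OSD is no longer in the PG's acting set;
        # its next log line is the state<Start> "transitioning to Stray".
        print(m["osd"], m["pgid"], "epoch", m["epoch"],
              "role", m["old"], "->", m["new"])
```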
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:21 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 60 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 87/215 objects misplaced (40.465%); 0 B/s, 0 objects/s recovering
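[annotation] The "87/215 objects misplaced (40.465%)" figure in the mgr's pgmap line is the plain ratio of misplaced object instances to total instances, which checks out:

```
# 87 of 215 object instances sit on the wrong OSD while pool 9 re-peers:
misplaced, total = 87, 215
print(f"{misplaced / total * 100:.3f}%")  # 40.465%
```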
Oct 02 19:12:21 compute-0 sudo[223671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycncdtrlkxsbhdlfdvxjwnahhfmbqcks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432341.2034233-32-17397008226199/AnsiballZ_command.py'
Oct 02 19:12:21 compute-0 sudo[223671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 02 19:12:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 02 19:12:22 compute-0 python3.9[223673]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
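[annotation] The Ansible task logged above shells out to a short script that downloads the repo-setup tool from GitHub, installs it into a throwaway venv, runs it to enable the antelope "current-podified" repositories, and cleans up after itself. A sketch for replaying the same steps by hand; the script body is copied verbatim from the log, the Python wrapper is my own:

```
import subprocess

# Verbatim from the AnsiballZ_command invocation above. pushd/popd are
# bash builtins, hence bash -c rather than sh -c.
SCRIPT = """
set -euxo pipefail
pushd /var/tmp
curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
pushd repo-setup-main
python3 -m venv ./venv
PBR_VERSION=0.0.0 ./venv/bin/pip install ./
./venv/bin/repo-setup current-podified -b antelope
popd
rm -rf repo-setup-main
"""

subprocess.run(["bash", "-c", SCRIPT], check=True)
```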
Oct 02 19:12:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 02 19:12:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 02 19:12:22 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121953011s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632186890s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121960640s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632324219s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121812820s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632186890s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121895790s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632324219s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.120934486s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.631469727s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.120839119s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.631469727s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121521950s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632431030s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=58/59 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.121409416s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632431030s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:22 compute-0 ceph-mon[191910]: 2.19 scrub starts
Oct 02 19:12:22 compute-0 ceph-mon[191910]: 2.19 scrub ok
Oct 02 19:12:22 compute-0 ceph-mon[191910]: 3.e scrub starts
Oct 02 19:12:22 compute-0 ceph-mon[191910]: 3.e scrub ok
Oct 02 19:12:22 compute-0 ceph-mon[191910]: osdmap e60: 3 total, 3 up, 3 in
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.119788170s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 132.632232666s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 61 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=58/59 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.119754791s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.632232666s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.9( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.1( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.5( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.1d( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 61 pg[9.d( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:22 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 02 19:12:22 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 02 19:12:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 02 19:12:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 02 19:12:23 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 02 19:12:23 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 62 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:23 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 62 pg[9.1b( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:23 compute-0 ceph-mon[191910]: pgmap v150: 321 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 87/215 objects misplaced (40.465%); 0 B/s, 0 objects/s recovering
Oct 02 19:12:23 compute-0 ceph-mon[191910]: 5.16 scrub starts
Oct 02 19:12:23 compute-0 ceph-mon[191910]: 5.16 scrub ok
Oct 02 19:12:23 compute-0 ceph-mon[191910]: osdmap e61: 3 total, 3 up, 3 in
Oct 02 19:12:23 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 62 pg[9.3( v 49'389 (0'0,49'389] local-lis/les=61/62 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:23 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 62 pg[9.b( v 49'389 (0'0,49'389] local-lis/les=61/62 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:23 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 62 pg[9.11( v 49'389 (0'0,49'389] local-lis/les=61/62 n=6 ec=53/43 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.2 KiB/s wr, 138 op/s; 87/215 objects misplaced (40.465%); 90 B/s, 0 objects/s recovering
Oct 02 19:12:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 02 19:12:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 02 19:12:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:24 compute-0 ceph-mon[191910]: 3.11 scrub starts
Oct 02 19:12:24 compute-0 ceph-mon[191910]: 3.11 scrub ok
Oct 02 19:12:24 compute-0 ceph-mon[191910]: osdmap e62: 3 total, 3 up, 3 in
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.435 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.436 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
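[annotation] These two manager messages say the [pollsters] source defines more pollsters than the agent has worker threads, so the whole batch funnels through a single-worker executor and the polling cycle takes longer than one pollster would. The effect is easy to reproduce; a minimal sketch under the same single-worker assumption (names are illustrative, not ceilometer's):

```
from concurrent.futures import ThreadPoolExecutor
import time

# More tasks than workers, as the manager warns about above.
pollsters = [f"pollster-{i}" for i in range(5)]

def poll(name):
    time.sleep(0.1)  # stand-in for discovery plus sampling
    return name

with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads", per the log
    start = time.monotonic()
    results = list(executor.map(poll, pollsters))
    # With one worker the five tasks take ~0.5s total instead of ~0.1s,
    # which is why the manager expects the process "to be longer".
    print(results, f"{time.monotonic() - start:.2f}s")
```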
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.436 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.437 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3779820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:12:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:12:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
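The ceilometer lines above trace one complete polling cycle: for each pollster the agent runs its discovery method (local_instances), skips the pollster when discovery returns nothing (manager.py:321), and logs a "Finished processing" line per meter once the task completes, whether or not samples were produced. A minimal Python sketch of that control flow, using hypothetical names (Pollster, run_polling_cycle) rather than ceilometer's real classes:

    from dataclasses import dataclass

    @dataclass
    class Pollster:
        # Hypothetical stand-in for a ceilometer pollster object.
        name: str

        def get_samples(self, resources):
            for r in resources:
                yield (self.name, r, 0)   # placeholder sample tuple

    def run_polling_cycle(pollsters, discover):
        for p in pollsters:
            resources = discover()        # e.g. instances on this hypervisor
            if not resources:
                # Matches the "Skip pollster ..." DEBUG lines above.
                print(f"Skip pollster {p.name}, no resources found this cycle")
            else:
                for sample in p.get_samples(resources):
                    print("publish", sample)  # samples go on to the publishers
            # Logged for every pollster, skipped or not, as above.
            print(f"Finished processing pollster [{p.name}].")

    # Discovery returns nothing here, as it apparently does on compute-0:
    run_polling_cycle([Pollster("network.incoming.bytes")], discover=lambda: [])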
Oct 02 19:12:24 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct 02 19:12:24 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct 02 19:12:25 compute-0 ceph-mon[191910]: pgmap v153: 321 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.2 KiB/s wr, 138 op/s; 87/215 objects misplaced (40.465%); 90 B/s, 0 objects/s recovering
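The misplaced figure in the pgmap line above is a plain object ratio: 87 / 215 = 0.40465, reported as 40.465%; it falls back to zero once the remapped PGs finish recovery (pgmap v154 below already shows 321 active+clean).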
Oct 02 19:12:25 compute-0 ceph-mon[191910]: 7.15 scrub starts
Oct 02 19:12:25 compute-0 ceph-mon[191910]: 7.15 scrub ok
Oct 02 19:12:25 compute-0 ceph-mon[191910]: 2.18 deep-scrub starts
Oct 02 19:12:25 compute-0 ceph-mon[191910]: 2.18 deep-scrub ok
Oct 02 19:12:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 106 op/s; 600 B/s, 18 objects/s recovering
Oct 02 19:12:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 02 19:12:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 19:12:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 02 19:12:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 19:12:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
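This dispatch/finished pair shows the mgr nudging pgp_num_actual on the default.rgw.log pool up one step at a time (3 here, then 4 through 8 over the next few seconds); each step commits a new osdmap epoch (e63 just below) and remaps a handful of PGs, which is what drives the surrounding peering and misplaced-object noise. The same step could be issued manually, e.g. `ceph osd pool set default.rgw.log pgp_num_actual 3`; here the audit trail (from='mgr.14130 ...') shows the mgr itself is the caller.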
Oct 02 19:12:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 02 19:12:26 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 02 19:12:26 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Oct 02 19:12:26 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Oct 02 19:12:26 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 02 19:12:26 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 02 19:12:27 compute-0 ceph-mon[191910]: pgmap v154: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 106 op/s; 600 B/s, 18 objects/s recovering
Oct 02 19:12:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 19:12:27 compute-0 ceph-mon[191910]: osdmap e63: 3 total, 3 up, 3 in
Oct 02 19:12:27 compute-0 ceph-mon[191910]: 7.1f deep-scrub starts
Oct 02 19:12:27 compute-0 ceph-mon[191910]: 7.1f deep-scrub ok
Oct 02 19:12:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 521 B/s, 15 objects/s recovering
Oct 02 19:12:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 02 19:12:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 19:12:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 02 19:12:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 02 19:12:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 02 19:12:28 compute-0 ceph-mon[191910]: 5.9 scrub starts
Oct 02 19:12:28 compute-0 ceph-mon[191910]: 5.9 scrub ok
Oct 02 19:12:28 compute-0 ceph-mon[191910]: pgmap v156: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 521 B/s, 15 objects/s recovering
Oct 02 19:12:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 19:12:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 19:12:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 02 19:12:28 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 02 19:12:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 02 19:12:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 02 19:12:28 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 02 19:12:28 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 02 19:12:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
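_set_new_cache_sizes is the monitor's periodic memory autotune; in binary units the figures above are cache_size 1020054731 B ≈ 0.95 GiB, inc_alloc = full_alloc = 348127232 B = 332 MiB, and kv_alloc (RocksDB) = 322961408 B = 308 MiB.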
Oct 02 19:12:29 compute-0 sudo[223671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:29 compute-0 ceph-mon[191910]: 2.7 scrub starts
Oct 02 19:12:29 compute-0 ceph-mon[191910]: 2.7 scrub ok
Oct 02 19:12:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 19:12:29 compute-0 ceph-mon[191910]: osdmap e64: 3 total, 3 up, 3 in
Oct 02 19:12:29 compute-0 ceph-mon[191910]: 3.1b scrub starts
Oct 02 19:12:29 compute-0 ceph-mon[191910]: 3.1b scrub ok
Oct 02 19:12:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 449 B/s, 15 objects/s recovering
Oct 02 19:12:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 02 19:12:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 19:12:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Oct 02 19:12:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Oct 02 19:12:29 compute-0 podman[157186]: time="2025-10-02T19:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:12:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:12:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6805 "" "Go-http-client/1.1"
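These two GETs are the podman_exporter scraping the libpod REST API over the podman socket (mounted into it at /run/podman/podman.sock per its config logged at 19:12:37 below). The first call can be reproduced by hand with, e.g., `curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true'`; the hostname (`d`) is a dummy, since curl routes the request over the socket.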
Oct 02 19:12:29 compute-0 sshd-session[223279]: Connection closed by 192.168.122.30 port 60138
Oct 02 19:12:29 compute-0 sshd-session[223276]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:12:29 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 02 19:12:29 compute-0 systemd[1]: session-41.scope: Consumed 9.888s CPU time.
Oct 02 19:12:29 compute-0 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Oct 02 19:12:29 compute-0 systemd-logind[793]: Removed session 41.
Oct 02 19:12:30 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 02 19:12:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 02 19:12:30 compute-0 ceph-mon[191910]: 7.11 scrub starts
Oct 02 19:12:30 compute-0 ceph-mon[191910]: 7.11 scrub ok
Oct 02 19:12:30 compute-0 ceph-mon[191910]: pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 449 B/s, 15 objects/s recovering
Oct 02 19:12:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 19:12:30 compute-0 ceph-mon[191910]: 3.f deep-scrub starts
Oct 02 19:12:30 compute-0 ceph-mon[191910]: 3.f deep-scrub ok
Oct 02 19:12:30 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 02 19:12:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 19:12:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 02 19:12:30 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 02 19:12:31 compute-0 openstack_network_exporter[159337]: ERROR   19:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:12:31 compute-0 openstack_network_exporter[159337]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:12:31 compute-0 openstack_network_exporter[159337]: ERROR   19:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:12:31 compute-0 openstack_network_exporter[159337]: ERROR   19:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:12:31 compute-0 openstack_network_exporter[159337]: ERROR   19:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
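These recurring exporter errors are failed probes rather than an OVS fault: openstack_network_exporter reaches ovsdb-server and ovn-northd through their per-daemon control sockets (the .ctl files ovs-appctl uses, normally under /var/run/openvswitch and /var/run/ovn), and neither socket exists on this node; likewise dpif-netdev/pmd-perf-show and pmd-rxq-show only apply to a userspace (netdev) datapath, so with no such datapath present a manual `ovs-appctl dpif-netdev/pmd-rxq-show` would be expected to fail with the same "please specify an existing datapath" message.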
Oct 02 19:12:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 02 19:12:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 19:12:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 02 19:12:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 19:12:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 02 19:12:31 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 02 19:12:31 compute-0 ceph-mon[191910]: 7.4 scrub starts
Oct 02 19:12:31 compute-0 ceph-mon[191910]: 7.4 scrub ok
Oct 02 19:12:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 19:12:31 compute-0 ceph-mon[191910]: osdmap e65: 3 total, 3 up, 3 in
Oct 02 19:12:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 19:12:31 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Oct 02 19:12:31 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Oct 02 19:12:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 02 19:12:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 02 19:12:32 compute-0 ceph-mon[191910]: pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 19:12:32 compute-0 ceph-mon[191910]: osdmap e66: 3 total, 3 up, 3 in
Oct 02 19:12:33 compute-0 PackageKit[186781]: daemon quit
Oct 02 19:12:33 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 02 19:12:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 19:12:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 02 19:12:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 19:12:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 02 19:12:33 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.616235733s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 142.348373413s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.616119385s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.348373413s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.624519348s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 142.357391357s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.624469757s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.357391357s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.623586655s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 142.357406616s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.623483658s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.357406616s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:33 compute-0 ceph-mon[191910]: 2.1b deep-scrub starts
Oct 02 19:12:33 compute-0 ceph-mon[191910]: 2.1b deep-scrub ok
Oct 02 19:12:33 compute-0 ceph-mon[191910]: 3.c scrub starts
Oct 02 19:12:33 compute-0 ceph-mon[191910]: 3.c scrub ok
Oct 02 19:12:33 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.620733261s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 142.357452393s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:33 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 67 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67 pruub=12.619795799s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.357452393s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
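Each start_peering_interval line above encodes the membership change that a new osdmap epoch forces on the PG: the up and acting OSD sets before and after the map, the primaries, and this OSD's role (0 = primary, higher = replica rank, -1 = no longer in the acting set, hence the transition to Stray while the new primary, osd.2 here, re-peers the PG). A small, hypothetical Python parser for that fragment, to make the notation concrete:

    import re

    # Pull the up/acting/role transitions out of a start_peering_interval
    # line (illustrative helper, not Ceph code).
    PEERING = re.compile(
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\].*"
        r"role (?P<role_old>-?\d+) -> (?P<role_new>-?\d+)"
    )

    line = ("start_peering_interval up [1] -> [2], acting [1] -> [2], "
            "acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1")
    m = PEERING.search(line)
    assert m is not None
    if m["role_new"] == "-1":
        # This OSD dropped out of the acting set, so locally the PG goes
        # Stray; the new acting primary takes over peering.
        print("stray:", m.groupdict())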
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:12:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 02 19:12:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 02 19:12:34 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 68 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:34 compute-0 ceph-mon[191910]: pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 19:12:34 compute-0 ceph-mon[191910]: osdmap e67: 3 total, 3 up, 3 in
Oct 02 19:12:34 compute-0 ceph-mon[191910]: osdmap e68: 3 total, 3 up, 3 in
Oct 02 19:12:34 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 02 19:12:34 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 02 19:12:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 02 19:12:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 02 19:12:35 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 02 19:12:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 4 activating+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%)
Oct 02 19:12:35 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 02 19:12:35 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 02 19:12:35 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 69 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:35 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 69 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:35 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 69 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:35 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 69 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 02 19:12:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 02 19:12:36 compute-0 ceph-mon[191910]: 3.16 deep-scrub starts
Oct 02 19:12:36 compute-0 ceph-mon[191910]: 3.16 deep-scrub ok
Oct 02 19:12:36 compute-0 ceph-mon[191910]: osdmap e69: 3 total, 3 up, 3 in
Oct 02 19:12:36 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.506381989s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 147.984954834s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.505826950s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.984954834s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.503908157s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 147.984893799s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.502450943s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 147.984436035s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.502342224s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.984436035s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.503830910s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.984893799s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.501548767s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 147.984954834s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 70 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=68/69 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70 pruub=15.501483917s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.984954834s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:36 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 70 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 02 19:12:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 02 19:12:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 02 19:12:37 compute-0 ceph-mon[191910]: pgmap v166: 321 pgs: 4 activating+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%)
Oct 02 19:12:37 compute-0 ceph-mon[191910]: 3.18 scrub starts
Oct 02 19:12:37 compute-0 ceph-mon[191910]: 3.18 scrub ok
Oct 02 19:12:37 compute-0 ceph-mon[191910]: osdmap e70: 3 total, 3 up, 3 in
Oct 02 19:12:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 02 19:12:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 02 19:12:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 71 pg[9.e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 71 pg[9.6( v 49'389 (0'0,49'389] local-lis/les=70/71 n=6 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 71 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 71 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=68/53 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 5 objects/s recovering
Oct 02 19:12:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 02 19:12:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 19:12:37 compute-0 podman[223731]: 2025-10-02 19:12:37.655003804 +0000 UTC m=+0.075613475 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 02 19:12:37 compute-0 podman[223732]: 2025-10-02 19:12:37.655050316 +0000 UTC m=+0.080304162 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
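Both health_status=healthy records are podman running each container's configured healthcheck (the 'test': '/openstack/healthcheck ...' entries in the config_data above) on its schedule; the same probe can be triggered on demand with `podman healthcheck run ceilometer_agent_compute`, which exits 0 while the check passes and the streak stays at health_failing_streak=0.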
Oct 02 19:12:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 02 19:12:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 19:12:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 02 19:12:38 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 02 19:12:38 compute-0 ceph-mon[191910]: 7.18 scrub starts
Oct 02 19:12:38 compute-0 ceph-mon[191910]: 7.18 scrub ok
Oct 02 19:12:38 compute-0 ceph-mon[191910]: osdmap e71: 3 total, 3 up, 3 in
Oct 02 19:12:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.024686813s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 active pruub 149.069900513s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.025608063s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 active pruub 149.070877075s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.024628639s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 149.069900513s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.025556564s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 149.070877075s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.024312019s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 active pruub 149.069992065s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.024263382s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 149.069992065s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.014255524s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 active pruub 149.060302734s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 72 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.014201164s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 149.060302734s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 02 19:12:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 02 19:12:38 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Oct 02 19:12:38 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Oct 02 19:12:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 02 19:12:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 02 19:12:39 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=60/61 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:39 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 73 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:39 compute-0 ceph-mon[191910]: pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 55 B/s, 5 objects/s recovering
Oct 02 19:12:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 19:12:39 compute-0 ceph-mon[191910]: osdmap e72: 3 total, 3 up, 3 in
Oct 02 19:12:39 compute-0 ceph-mon[191910]: osdmap e73: 3 total, 3 up, 3 in
Oct 02 19:12:39 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 02 19:12:39 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 02 19:12:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 02 19:12:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 02 19:12:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 19:12:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 02 19:12:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 19:12:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 02 19:12:40 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 02 19:12:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 74 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 74 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 74 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 74 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[60,73)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:40 compute-0 ceph-mon[191910]: 7.9 scrub starts
Oct 02 19:12:40 compute-0 ceph-mon[191910]: 7.9 scrub ok
Oct 02 19:12:40 compute-0 ceph-mon[191910]: 7.1c deep-scrub starts
Oct 02 19:12:40 compute-0 ceph-mon[191910]: 7.1c deep-scrub ok
Oct 02 19:12:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 19:12:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 19:12:40 compute-0 ceph-mon[191910]: osdmap e74: 3 total, 3 up, 3 in
Oct 02 19:12:40 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 74 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=13.693486214s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 150.358352661s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:40 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 74 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=13.693440437s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.358352661s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:40 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 74 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=13.692212105s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 150.358352661s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:40 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 74 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=13.691242218s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.358352661s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-mon[191910]: 7.6 scrub starts
Oct 02 19:12:41 compute-0 ceph-mon[191910]: 7.6 scrub ok
Oct 02 19:12:41 compute-0 ceph-mon[191910]: pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct 02 19:12:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 02 19:12:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 02 19:12:41 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 02 19:12:41 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 75 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 75 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 75 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 75 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 75 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:41 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.915497780s) [2] async=[2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 49'389 active pruub 159.027084351s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.915415764s) [2] async=[2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 49'389 active pruub 159.027084351s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.915435791s) [2] async=[2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 49'389 active pruub 159.027084351s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.915424347s) [2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 159.027084351s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.915322304s) [2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 159.027084351s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=73/74 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.914984703s) [2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 159.027084351s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.912100792s) [2] async=[2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 49'389 active pruub 159.027130127s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 75 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=73/74 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.912014961s) [2] r=-1 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 159.027130127s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:41 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 02 19:12:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 02 19:12:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 19:12:41 compute-0 podman[223773]: 2025-10-02 19:12:41.705651088 +0000 UTC m=+0.127426584 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:12:41 compute-0 podman[223774]: 2025-10-02 19:12:41.788715182 +0000 UTC m=+0.204997211 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:12:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 02 19:12:42 compute-0 ceph-mon[191910]: osdmap e75: 3 total, 3 up, 3 in
Oct 02 19:12:42 compute-0 ceph-mon[191910]: 3.6 scrub starts
Oct 02 19:12:42 compute-0 ceph-mon[191910]: 3.6 scrub ok
Oct 02 19:12:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 19:12:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 19:12:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 02 19:12:42 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 02 19:12:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 76 pg[9.17( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 76 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 76 pg[9.7( v 49'389 (0'0,49'389] local-lis/les=75/76 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:42 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 76 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=75/76 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 76 pg[9.f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=6 ec=53/43 lis/c=73/60 les/c/f=74/61/0 sis=75) [2] r=0 lpr=75 pi=[60,75)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:42 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 76 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[53,75)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:43 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 02 19:12:43 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 02 19:12:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 02 19:12:43 compute-0 ceph-mon[191910]: pgmap v175: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 19:12:43 compute-0 ceph-mon[191910]: osdmap e76: 3 total, 3 up, 3 in
Oct 02 19:12:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 02 19:12:43 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 02 19:12:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 77 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=75/76 n=6 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.000177383s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 154.603225708s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 77 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=75/76 n=6 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.000052452s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 154.603225708s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 77 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.006735802s) [2] async=[2] r=-1 lpr=77 pi=[53,77)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 154.610382080s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 77 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.006553650s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 154.610382080s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 77 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 77 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 77 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 77 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 4 objects/s recovering
Oct 02 19:12:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 02 19:12:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 19:12:43 compute-0 sshd-session[223820]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 19:12:43 compute-0 sshd-session[223820]: Connection reset by 45.140.17.97 port 3568
Oct 02 19:12:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Oct 02 19:12:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Oct 02 19:12:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 02 19:12:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 19:12:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 02 19:12:44 compute-0 ceph-mon[191910]: 2.d scrub starts
Oct 02 19:12:44 compute-0 ceph-mon[191910]: 2.d scrub ok
Oct 02 19:12:44 compute-0 ceph-mon[191910]: osdmap e77: 3 total, 3 up, 3 in
Oct 02 19:12:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 19:12:44 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 02 19:12:44 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 78 pg[9.8( v 49'389 (0'0,49'389] local-lis/les=77/78 n=6 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:44 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 78 pg[9.18( v 49'389 (0'0,49'389] local-lis/les=77/78 n=5 ec=53/43 lis/c=75/53 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 02 19:12:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 02 19:12:45 compute-0 ceph-mon[191910]: pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 4 objects/s recovering
Oct 02 19:12:45 compute-0 ceph-mon[191910]: 2.1d deep-scrub starts
Oct 02 19:12:45 compute-0 ceph-mon[191910]: 2.1d deep-scrub ok
Oct 02 19:12:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 19:12:45 compute-0 ceph-mon[191910]: osdmap e78: 3 total, 3 up, 3 in
Oct 02 19:12:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 132 B/s, 4 objects/s recovering
Oct 02 19:12:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 02 19:12:45 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 19:12:45 compute-0 sshd-session[223821]: Accepted publickey for zuul from 192.168.122.30 port 33108 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:12:45 compute-0 systemd-logind[793]: New session 42 of user zuul.
Oct 02 19:12:46 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 02 19:12:46 compute-0 sshd-session[223821]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:12:46 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 02 19:12:46 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 02 19:12:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 02 19:12:46 compute-0 ceph-mon[191910]: 2.6 scrub starts
Oct 02 19:12:46 compute-0 ceph-mon[191910]: 2.6 scrub ok
Oct 02 19:12:46 compute-0 ceph-mon[191910]: pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 132 B/s, 4 objects/s recovering
Oct 02 19:12:46 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 19:12:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 19:12:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 02 19:12:46 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 02 19:12:46 compute-0 podman[223949]: 2025-10-02 19:12:46.919854934 +0000 UTC m=+0.063970154 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:12:46 compute-0 podman[223948]: 2025-10-02 19:12:46.936570472 +0000 UTC m=+0.079604253 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:12:47 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 02 19:12:47 compute-0 python3.9[224005]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 19:12:47 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 02 19:12:47 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 02 19:12:47 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 02 19:12:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 6 objects/s recovering
Oct 02 19:12:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 02 19:12:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 19:12:47 compute-0 ceph-mon[191910]: 2.17 scrub starts
Oct 02 19:12:47 compute-0 ceph-mon[191910]: 2.17 scrub ok
Oct 02 19:12:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 19:12:47 compute-0 ceph-mon[191910]: osdmap e79: 3 total, 3 up, 3 in
Oct 02 19:12:47 compute-0 ceph-mon[191910]: 2.1c scrub starts
Oct 02 19:12:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 19:12:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 02 19:12:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 19:12:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 02 19:12:47 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 02 19:12:47 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Oct 02 19:12:47 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Oct 02 19:12:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Oct 02 19:12:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 80 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80 pruub=13.692145348s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 158.358703613s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 80 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80 pruub=13.692081451s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.358703613s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 80 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80 pruub=13.688138008s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 158.359832764s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 80 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80 pruub=13.686697006s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.359832764s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 02 19:12:48 compute-0 ceph-mon[191910]: 2.3 scrub starts
Oct 02 19:12:48 compute-0 ceph-mon[191910]: 2.3 scrub ok
Oct 02 19:12:48 compute-0 ceph-mon[191910]: 2.1c scrub ok
Oct 02 19:12:48 compute-0 ceph-mon[191910]: pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 6 objects/s recovering
Oct 02 19:12:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 19:12:48 compute-0 ceph-mon[191910]: osdmap e80: 3 total, 3 up, 3 in
Oct 02 19:12:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 02 19:12:48 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 81 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 81 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 81 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:48 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 81 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=53/55 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:48 compute-0 python3.9[224187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:12:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 02 19:12:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 02 19:12:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 2 objects/s recovering
Oct 02 19:12:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 02 19:12:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 19:12:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 02 19:12:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 19:12:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 02 19:12:49 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 3.8 deep-scrub starts
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 3.8 deep-scrub ok
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 2.4 deep-scrub starts
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 2.4 deep-scrub ok
Oct 02 19:12:49 compute-0 ceph-mon[191910]: osdmap e81: 3 total, 3 up, 3 in
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 5.4 scrub starts
Oct 02 19:12:49 compute-0 ceph-mon[191910]: 5.4 scrub ok
Oct 02 19:12:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 19:12:49 compute-0 podman[224268]: 2025-10-02 19:12:49.700696062 +0000 UTC m=+0.127523286 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Oct 02 19:12:49 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 82 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=5 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:49 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 82 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=6 ec=53/43 lis/c=53/53 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[53,81)/1 crt=49'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:50 compute-0 sudo[224362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqtigieakypqwklqkwhkplfsicxrybnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432369.460909-45-199284156927258/AnsiballZ_command.py'
Oct 02 19:12:50 compute-0 sudo[224362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Oct 02 19:12:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Oct 02 19:12:50 compute-0 python3.9[224364]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
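[editor's note] That task only locates the growvols binary under an explicit PATH. A rough stdlib-Python equivalent of the same PATH-constrained lookup (shutil.which accepts an explicit search path):

    # Sketch of the lookup the task performs; prints the resolved path,
    # or None when growvols is not installed.
    import shutil

    search_path = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    print(shutil.which("growvols", path=search_path))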
Oct 02 19:12:50 compute-0 sudo[224362]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 02 19:12:50 compute-0 ceph-mon[191910]: pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 2 objects/s recovering
Oct 02 19:12:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 19:12:50 compute-0 ceph-mon[191910]: osdmap e82: 3 total, 3 up, 3 in
Oct 02 19:12:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 02 19:12:50 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 02 19:12:50 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 83 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=5 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83 pruub=15.265540123s) [2] async=[2] r=-1 lpr=83 pi=[53,83)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 162.094848633s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:50 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 83 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=5 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83 pruub=15.265404701s) [2] r=-1 lpr=83 pi=[53,83)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.094848633s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:50 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 83 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:50 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 83 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:50 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 83 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=6 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83 pruub=15.272298813s) [2] async=[2] r=-1 lpr=83 pi=[53,83)/1 crt=49'389 lcod 0'0 mlcod 0'0 active pruub 162.102890015s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:50 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 83 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=81/82 n=6 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83 pruub=15.270957947s) [2] r=-1 lpr=83 pi=[53,83)/1 crt=49'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.102890015s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:12:50 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 83 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:12:50 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 83 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=6 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:12:50 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 02 19:12:50 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 02 19:12:51 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Oct 02 19:12:51 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Oct 02 19:12:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 2 active+remapped, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:12:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 02 19:12:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 19:12:51 compute-0 sudo[224515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuuxonirzofxvqlaevzvnxjdwjrojcxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432370.8734078-57-240387901911569/AnsiballZ_stat.py'
Oct 02 19:12:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 02 19:12:51 compute-0 sudo[224515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:51 compute-0 ceph-mon[191910]: 2.f deep-scrub starts
Oct 02 19:12:51 compute-0 ceph-mon[191910]: 2.f deep-scrub ok
Oct 02 19:12:51 compute-0 ceph-mon[191910]: osdmap e83: 3 total, 3 up, 3 in
Oct 02 19:12:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 19:12:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 19:12:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 02 19:12:51 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 02 19:12:51 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 84 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=5 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:51 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 84 pg[9.c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=6 ec=53/43 lis/c=81/53 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:12:51 compute-0 python3.9[224517]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:12:51 compute-0 sudo[224515]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:52 compute-0 ceph-mon[191910]: 7.c scrub starts
Oct 02 19:12:52 compute-0 ceph-mon[191910]: 7.c scrub ok
Oct 02 19:12:52 compute-0 ceph-mon[191910]: 2.2 deep-scrub starts
Oct 02 19:12:52 compute-0 ceph-mon[191910]: 2.2 deep-scrub ok
Oct 02 19:12:52 compute-0 ceph-mon[191910]: pgmap v188: 321 pgs: 2 active+remapped, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:12:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 19:12:52 compute-0 ceph-mon[191910]: osdmap e84: 3 total, 3 up, 3 in
Oct 02 19:12:52 compute-0 sudo[224669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdyjsiypyjalutjfcvatdmpxnbguipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432372.247773-68-44982680139130/AnsiballZ_file.py'
Oct 02 19:12:52 compute-0 sudo[224669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:53 compute-0 python3.9[224671]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
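[editor's note] This ansible.builtin.file task pins down a persistent journal directory: /var/log/journal, root:root, mode 0750, SELinux type var_log_t. A minimal sketch of the same enforcement without the Ansible runtime; applying the SELinux type via the chcon CLI (rather than python3-libselinux) is an assumption for brevity:

    # Requires root. makedirs' mode is masked by umask, so re-apply it.
    import os
    import subprocess

    path = "/var/log/journal"
    os.makedirs(path, mode=0o750, exist_ok=True)
    os.chmod(path, 0o750)
    os.chown(path, 0, 0)  # owner=root, group=root
    subprocess.run(["chcon", "-t", "var_log_t", path], check=True)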
Oct 02 19:12:53 compute-0 sudo[224669]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:53 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 02 19:12:53 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 02 19:12:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Oct 02 19:12:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 02 19:12:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 19:12:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 02 19:12:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 19:12:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 19:12:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 02 19:12:53 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 02 19:12:53 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 02 19:12:53 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 02 19:12:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:54 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 02 19:12:54 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 02 19:12:54 compute-0 python3.9[224821]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:12:54 compute-0 network[224838]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:12:54 compute-0 network[224839]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:12:54 compute-0 network[224840]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:12:54 compute-0 ceph-mon[191910]: 2.5 scrub starts
Oct 02 19:12:54 compute-0 ceph-mon[191910]: 2.5 scrub ok
Oct 02 19:12:54 compute-0 ceph-mon[191910]: pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Oct 02 19:12:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 19:12:54 compute-0 ceph-mon[191910]: osdmap e85: 3 total, 3 up, 3 in
Oct 02 19:12:54 compute-0 ceph-mon[191910]: 2.1f scrub starts
Oct 02 19:12:54 compute-0 ceph-mon[191910]: 2.1f scrub ok
Oct 02 19:12:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 02 19:12:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 02 19:12:55 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 02 19:12:55 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 02 19:12:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 02 19:12:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 02 19:12:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 19:12:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 02 19:12:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 19:12:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 02 19:12:55 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 02 19:12:55 compute-0 ceph-mon[191910]: 7.e scrub starts
Oct 02 19:12:55 compute-0 ceph-mon[191910]: 7.e scrub ok
Oct 02 19:12:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 19:12:56 compute-0 ceph-mon[191910]: 3.5 scrub starts
Oct 02 19:12:56 compute-0 ceph-mon[191910]: 3.5 scrub ok
Oct 02 19:12:56 compute-0 ceph-mon[191910]: 2.9 scrub starts
Oct 02 19:12:56 compute-0 ceph-mon[191910]: 2.9 scrub ok
Oct 02 19:12:56 compute-0 ceph-mon[191910]: pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 02 19:12:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 19:12:56 compute-0 ceph-mon[191910]: osdmap e86: 3 total, 3 up, 3 in
Oct 02 19:12:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 02 19:12:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 19:12:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 02 19:12:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 19:12:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 02 19:12:57 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 02 19:12:57 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 19:12:58 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 02 19:12:58 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 02 19:12:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 02 19:12:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 02 19:12:58 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 02 19:12:58 compute-0 ceph-mon[191910]: pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 19:12:58 compute-0 ceph-mon[191910]: osdmap e87: 3 total, 3 up, 3 in
Oct 02 19:12:58 compute-0 ceph-mon[191910]: 5.5 scrub starts
Oct 02 19:12:58 compute-0 ceph-mon[191910]: 5.5 scrub ok
Oct 02 19:12:58 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 02 19:12:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:12:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:12:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 02 19:12:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 19:12:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 02 19:12:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 19:12:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 02 19:12:59 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 02 19:12:59 compute-0 ceph-mon[191910]: 5.f scrub starts
Oct 02 19:12:59 compute-0 ceph-mon[191910]: 5.f scrub ok
Oct 02 19:12:59 compute-0 ceph-mon[191910]: 3.1d scrub starts
Oct 02 19:12:59 compute-0 ceph-mon[191910]: 3.1d scrub ok
Oct 02 19:12:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 19:12:59 compute-0 podman[157186]: time="2025-10-02T19:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:12:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:12:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6814 "" "Go-http-client/1.1"
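[editor's note] These two GETs are the podman exporter polling the libpod REST API over the podman socket (the socket path shows up in the podman_exporter config_data logged below). A hedged curl-based sketch of the first call; "d" is the dummy hostname curl requires for unix-socket requests:

    # Assumes the rootful podman socket at /run/podman/podman.sock.
    import subprocess

    subprocess.run([
        "curl", "--unix-socket", "/run/podman/podman.sock",
        "http://d/v4.9.3/libpod/containers/json?all=true",
    ], check=True)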
Oct 02 19:13:00 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 02 19:13:00 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 02 19:13:00 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 02 19:13:00 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 02 19:13:00 compute-0 ceph-mon[191910]: pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 19:13:00 compute-0 ceph-mon[191910]: osdmap e88: 3 total, 3 up, 3 in
Oct 02 19:13:00 compute-0 ceph-mon[191910]: 5.3 scrub starts
Oct 02 19:13:00 compute-0 ceph-mon[191910]: 5.3 scrub ok
Oct 02 19:13:00 compute-0 ceph-mon[191910]: 7.1a scrub starts
Oct 02 19:13:00 compute-0 python3.9[225112]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
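[editor's note] lineinfile against the read-only /proc/cmdline is being used as a probe, not an edit: it surfaces whether the host booted with cloud-init=disabled (any needed write would fail). The underlying question in plain Python:

    # Was the kernel booted with the cloud-init=disabled token?
    tokens = open("/proc/cmdline").read().split()
    print("cloud-init=disabled" in tokens)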
Oct 02 19:13:01 compute-0 openstack_network_exporter[159337]: ERROR   19:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:13:01 compute-0 openstack_network_exporter[159337]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:01 compute-0 openstack_network_exporter[159337]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:01 compute-0 openstack_network_exporter[159337]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:13:01 compute-0 openstack_network_exporter[159337]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:13:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 02 19:13:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 19:13:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 02 19:13:01 compute-0 ceph-mon[191910]: 7.1a scrub ok
Oct 02 19:13:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 19:13:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 19:13:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 02 19:13:01 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 02 19:13:01 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 89 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=89 pruub=8.462997437s) [2] r=-1 lpr=89 pi=[60,89)/1 crt=49'389 mlcod 0'0 active pruub 173.072708130s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:01 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 89 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=89 pruub=8.462936401s) [2] r=-1 lpr=89 pi=[60,89)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 173.072708130s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:01 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=89) [2] r=0 lpr=89 pi=[60,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:02 compute-0 python3.9[225262]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:13:02 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Oct 02 19:13:02 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Oct 02 19:13:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 02 19:13:02 compute-0 ceph-mon[191910]: pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 19:13:02 compute-0 ceph-mon[191910]: osdmap e89: 3 total, 3 up, 3 in
Oct 02 19:13:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 02 19:13:02 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 02 19:13:02 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 90 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=90) [2]/[0] r=0 lpr=90 pi=[60,90)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:02 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 90 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=90) [2]/[0] r=0 lpr=90 pi=[60,90)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:02 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[60,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:02 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[60,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Oct 02 19:13:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:13:03
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Some PGs (0.003115) are unknown; try again later
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
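[editor's note] The balancer's 0.003115 "unknown" fraction is exactly the pgmap above it: 1 unknown PG out of 321 total, so the optimize plan is simply deferred until peering settles.

    # 1 unknown PG out of 321 reproduces the balancer's figure:
    print(1 / 321)  # 0.003115264...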
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:03 compute-0 python3.9[225416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:13:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:13:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 02 19:13:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 02 19:13:03 compute-0 ceph-mon[191910]: 2.b deep-scrub starts
Oct 02 19:13:03 compute-0 ceph-mon[191910]: 2.b deep-scrub ok
Oct 02 19:13:03 compute-0 ceph-mon[191910]: osdmap e90: 3 total, 3 up, 3 in
Oct 02 19:13:03 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 02 19:13:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:04 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 91 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=90/91 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[60,90)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 02 19:13:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 02 19:13:04 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 02 19:13:04 compute-0 ceph-mon[191910]: 2.8 deep-scrub starts
Oct 02 19:13:04 compute-0 ceph-mon[191910]: 2.8 deep-scrub ok
Oct 02 19:13:04 compute-0 ceph-mon[191910]: pgmap v201: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:04 compute-0 ceph-mon[191910]: osdmap e91: 3 total, 3 up, 3 in
Oct 02 19:13:04 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 92 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=90/60 les/c/f=91/61/0 sis=92) [2] r=0 lpr=92 pi=[60,92)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:04 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 92 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=90/60 les/c/f=91/61/0 sis=92) [2] r=0 lpr=92 pi=[60,92)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:04 compute-0 sudo[225572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsapwejcetlkzxzaylccrwgzquixgtxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432384.2393448-116-239215894877536/AnsiballZ_setup.py'
Oct 02 19:13:04 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 92 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=90/91 n=5 ec=53/43 lis/c=90/60 les/c/f=91/61/0 sis=92 pruub=15.735583305s) [2] async=[2] r=-1 lpr=92 pi=[60,92)/1 crt=49'389 mlcod 49'389 active pruub 183.339401245s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:04 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 92 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=90/91 n=5 ec=53/43 lis/c=90/60 les/c/f=91/61/0 sis=92 pruub=15.734775543s) [2] r=-1 lpr=92 pi=[60,92)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 183.339401245s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:04 compute-0 sudo[225572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:05 compute-0 python3.9[225574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:13:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5/215 objects misplaced (2.326%)
Oct 02 19:13:05 compute-0 sudo[225572]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 02 19:13:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 02 19:13:05 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 02 19:13:05 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 93 pg[9.13( v 49'389 (0'0,49'389] local-lis/les=92/93 n=5 ec=53/43 lis/c=90/60 les/c/f=91/61/0 sis=92) [2] r=0 lpr=92 pi=[60,92)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:05 compute-0 ceph-mon[191910]: osdmap e92: 3 total, 3 up, 3 in
Oct 02 19:13:06 compute-0 sudo[225656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqvysarqmspxwjmctrotyctivmktjvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432384.2393448-116-239215894877536/AnsiballZ_dnf.py'
Oct 02 19:13:06 compute-0 sudo[225656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:06 compute-0 python3.9[225658]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
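[editor's note] The dnf task above requests a fixed package set with state=present. An equivalent one-shot install, as a sketch (assumes dnf and root privileges; the list is copied verbatim from the invocation):

    import subprocess

    packages = [
        "driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
        "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
        "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
        "crypto-policies-scripts", "grubby", "sos",
    ]
    subprocess.run(["dnf", "-y", "install", *packages], check=True)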
Oct 02 19:13:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 02 19:13:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 02 19:13:06 compute-0 ceph-mon[191910]: pgmap v204: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5/215 objects misplaced (2.326%)
Oct 02 19:13:06 compute-0 ceph-mon[191910]: osdmap e93: 3 total, 3 up, 3 in
Oct 02 19:13:06 compute-0 ceph-mon[191910]: 7.a scrub starts
Oct 02 19:13:06 compute-0 ceph-mon[191910]: 7.a scrub ok
Oct 02 19:13:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 218 B/s wr, 7 op/s; 46 B/s, 2 objects/s recovering
Oct 02 19:13:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 02 19:13:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 19:13:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 02 19:13:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 19:13:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 19:13:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 02 19:13:07 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 02 19:13:08 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 02 19:13:08 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 02 19:13:08 compute-0 podman[225711]: 2025-10-02 19:13:08.673964161 +0000 UTC m=+0.098607131 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 19:13:08 compute-0 podman[225712]: 2025-10-02 19:13:08.68027253 +0000 UTC m=+0.089730594 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
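[editor's note] Each health_status=healthy event above is podman executing the container's configured healthcheck (the 'healthcheck' entry in config_data). The same probe can be triggered by hand; exit status 0 means healthy:

    # Runs the configured healthcheck for one of the containers above.
    import subprocess

    subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        check=True,
    )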
Oct 02 19:13:08 compute-0 ceph-mon[191910]: pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 218 B/s wr, 7 op/s; 46 B/s, 2 objects/s recovering
Oct 02 19:13:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 19:13:08 compute-0 ceph-mon[191910]: osdmap e94: 3 total, 3 up, 3 in
Oct 02 19:13:08 compute-0 ceph-mon[191910]: 5.1 scrub starts
Oct 02 19:13:08 compute-0 ceph-mon[191910]: 5.1 scrub ok
Oct 02 19:13:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 180 B/s wr, 6 op/s; 38 B/s, 1 objects/s recovering
Oct 02 19:13:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 02 19:13:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 19:13:09 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 02 19:13:09 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 02 19:13:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 02 19:13:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 19:13:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 19:13:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 02 19:13:09 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 02 19:13:09 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 95 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=95 pruub=9.389330864s) [1] r=-1 lpr=95 pi=[61,95)/1 crt=49'389 mlcod 0'0 active pruub 182.085037231s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:09 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 95 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=95 pruub=9.388841629s) [1] r=-1 lpr=95 pi=[61,95)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 182.085037231s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:09 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=95) [1] r=0 lpr=95 pi=[61,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 02 19:13:10 compute-0 ceph-mon[191910]: pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 180 B/s wr, 6 op/s; 38 B/s, 1 objects/s recovering
Oct 02 19:13:10 compute-0 ceph-mon[191910]: 3.7 scrub starts
Oct 02 19:13:10 compute-0 ceph-mon[191910]: 3.7 scrub ok
Oct 02 19:13:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 19:13:10 compute-0 ceph-mon[191910]: osdmap e95: 3 total, 3 up, 3 in
Oct 02 19:13:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 02 19:13:10 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 02 19:13:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[61,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:11 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[61,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:11 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 96 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=96) [1]/[0] r=0 lpr=96 pi=[61,96)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:11 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 96 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=61/62 n=5 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=96) [1]/[0] r=0 lpr=96 pi=[61,96)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 02 19:13:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 02 19:13:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Oct 02 19:13:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Oct 02 19:13:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 02 19:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 02 19:13:11 compute-0 ceph-mon[191910]: osdmap e96: 3 total, 3 up, 3 in
Oct 02 19:13:11 compute-0 ceph-mon[191910]: 2.a scrub starts
Oct 02 19:13:11 compute-0 ceph-mon[191910]: 2.a scrub ok
Oct 02 19:13:11 compute-0 ceph-mon[191910]: 2.16 deep-scrub starts
Oct 02 19:13:11 compute-0 ceph-mon[191910]: 2.16 deep-scrub ok
Oct 02 19:13:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 02 19:13:12 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:13:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
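[editor's note] The autoscaler's figures above are internally consistent: pg target = usage fraction x bias x (target PGs per OSD x OSD count). Taking Ceph's default mon_target_pg_per_osd of 100 (an assumption; it is not logged here) and the 3 OSDs this osdmap reports, the logged targets reproduce exactly:

    # Recomputes three of the logged pg targets from the lines above.
    for pool, usage, bias in [
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.log", 2.1620840658982875e-06, 1.0),
    ]:
        print(pool, usage * bias * 100 * 3)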
Oct 02 19:13:12 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 97 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=96/97 n=5 ec=53/43 lis/c=61/61 les/c/f=62/62/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[61,96)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:12 compute-0 podman[225768]: 2025-10-02 19:13:12.72516431 +0000 UTC m=+0.143603647 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64)
Oct 02 19:13:12 compute-0 podman[225769]: 2025-10-02 19:13:12.752766739 +0000 UTC m=+0.169577382 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:13:12 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Oct 02 19:13:12 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Oct 02 19:13:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 02 19:13:13 compute-0 ceph-mon[191910]: pgmap v211: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 02 19:13:13 compute-0 ceph-mon[191910]: osdmap e97: 3 total, 3 up, 3 in
Oct 02 19:13:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 02 19:13:13 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 02 19:13:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 98 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=96/61 les/c/f=97/62/0 sis=98) [1] r=0 lpr=98 pi=[61,98)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:13 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 98 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=96/61 les/c/f=97/62/0 sis=98) [1] r=0 lpr=98 pi=[61,98)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:13 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 98 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=96/97 n=5 ec=53/43 lis/c=96/61 les/c/f=97/62/0 sis=98 pruub=15.194083214s) [1] async=[1] r=-1 lpr=98 pi=[61,98)/1 crt=49'389 mlcod 49'389 active pruub 190.989837646s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:13 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 98 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=96/97 n=5 ec=53/43 lis/c=96/61 les/c/f=97/62/0 sis=98 pruub=15.194025040s) [1] r=-1 lpr=98 pi=[61,98)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 190.989837646s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:13 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 02 19:13:13 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 02 19:13:13 compute-0 sudo[225812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:13 compute-0 sudo[225812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:13 compute-0 sudo[225812]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:13 compute-0 sudo[225837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:13:13 compute-0 sudo[225837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:13 compute-0 sudo[225837]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:13 compute-0 sudo[225862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:14 compute-0 sudo[225862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:14 compute-0 sudo[225862]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 02 19:13:14 compute-0 ceph-mon[191910]: 7.2 deep-scrub starts
Oct 02 19:13:14 compute-0 ceph-mon[191910]: 7.2 deep-scrub ok
Oct 02 19:13:14 compute-0 ceph-mon[191910]: osdmap e98: 3 total, 3 up, 3 in
Oct 02 19:13:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 02 19:13:14 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 02 19:13:14 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 99 pg[9.15( v 49'389 (0'0,49'389] local-lis/les=98/99 n=5 ec=53/43 lis/c=96/61 les/c/f=97/62/0 sis=98) [1] r=0 lpr=98 pi=[61,98)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:14 compute-0 sudo[225887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:13:14 compute-0 sudo[225887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:14 compute-0 podman[225978]: 2025-10-02 19:13:14.866320998 +0000 UTC m=+0.127050763 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 19:13:14 compute-0 podman[225978]: 2025-10-02 19:13:14.966675445 +0000 UTC m=+0.227405130 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:13:15 compute-0 ceph-mon[191910]: pgmap v214: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:15 compute-0 ceph-mon[191910]: 3.1e scrub starts
Oct 02 19:13:15 compute-0 ceph-mon[191910]: 3.1e scrub ok
Oct 02 19:13:15 compute-0 ceph-mon[191910]: osdmap e99: 3 total, 3 up, 3 in
Oct 02 19:13:15 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 02 19:13:15 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 02 19:13:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 227 B/s wr, 7 op/s; 48 B/s, 1 objects/s recovering
Oct 02 19:13:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 02 19:13:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 19:13:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 02 19:13:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 19:13:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 02 19:13:16 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 02 19:13:16 compute-0 ceph-mon[191910]: 5.15 scrub starts
Oct 02 19:13:16 compute-0 ceph-mon[191910]: 5.15 scrub ok
Oct 02 19:13:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 19:13:16 compute-0 sudo[225887]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:13:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:13:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 100 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=100 pruub=9.114533424s) [0] r=-1 lpr=100 pi=[70,100)/1 crt=49'389 mlcod 0'0 active pruub 174.677230835s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:16 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 100 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=100 pruub=9.114493370s) [0] r=-1 lpr=100 pi=[70,100)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 174.677230835s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:16 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=100) [0] r=0 lpr=100 pi=[70,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:16 compute-0 sudo[226127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:16 compute-0 sudo[226127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:16 compute-0 sudo[226127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:16 compute-0 sudo[226152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:13:16 compute-0 sudo[226152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:16 compute-0 sudo[226152]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:16 compute-0 sudo[226177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:16 compute-0 sudo[226177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:16 compute-0 sudo[226177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:16 compute-0 sudo[226202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:13:16 compute-0 sudo[226202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:17 compute-0 ceph-mon[191910]: pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 227 B/s wr, 7 op/s; 48 B/s, 1 objects/s recovering
Oct 02 19:13:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 19:13:17 compute-0 ceph-mon[191910]: osdmap e100: 3 total, 3 up, 3 in
Oct 02 19:13:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 02 19:13:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Oct 02 19:13:17 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 101 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=101) [0]/[2] r=0 lpr=101 pi=[70,101)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:17 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 101 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=101) [0]/[2] r=0 lpr=101 pi=[70,101)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:17 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:17 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Oct 02 19:13:17 compute-0 sudo[226202]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 95d650b5-144d-470d-9bb7-a1fe4a95a32a does not exist
Oct 02 19:13:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a1892d51-71a6-4fc6-b146-61ba21d32192 does not exist
Oct 02 19:13:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev debec307-e2c3-495a-8e8c-cb4599a669b2 does not exist
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:13:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:13:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:13:17 compute-0 sudo[226259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:17 compute-0 sudo[226259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:17 compute-0 sudo[226259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 229 B/s wr, 7 op/s; 49 B/s, 1 objects/s recovering
Oct 02 19:13:17 compute-0 sudo[226297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:13:17 compute-0 sudo[226297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:17 compute-0 sudo[226297]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:17 compute-0 podman[226284]: 2025-10-02 19:13:17.624254543 +0000 UTC m=+0.120932380 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:13:17 compute-0 podman[226283]: 2025-10-02 19:13:17.636661615 +0000 UTC m=+0.147372107 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:13:17 compute-0 sudo[226349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:17 compute-0 sudo[226349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:17 compute-0 sudo[226349]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:17 compute-0 sudo[226374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:13:17 compute-0 sudo[226374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 02 19:13:18 compute-0 ceph-mon[191910]: osdmap e101: 3 total, 3 up, 3 in
Oct 02 19:13:18 compute-0 ceph-mon[191910]: 5.1a deep-scrub starts
Oct 02 19:13:18 compute-0 ceph-mon[191910]: 5.1a deep-scrub ok
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:13:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:13:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 02 19:13:18 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.367818155 +0000 UTC m=+0.058357914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.465902592 +0000 UTC m=+0.156442311 container create 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:13:18 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 102 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=101/102 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[70,101)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:18 compute-0 systemd[1]: Started libpod-conmon-4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa.scope.
Oct 02 19:13:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.654321898 +0000 UTC m=+0.344861687 container init 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.671030145 +0000 UTC m=+0.361569884 container start 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.677065667 +0000 UTC m=+0.367605406 container attach 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:13:18 compute-0 romantic_shamir[226453]: 167 167
Oct 02 19:13:18 compute-0 systemd[1]: libpod-4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa.scope: Deactivated successfully.
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.685007609 +0000 UTC m=+0.375547348 container died 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-544b89cebf4612b09886f3ac6058c070004a6b916f051c084ee85397293c1013-merged.mount: Deactivated successfully.
Oct 02 19:13:18 compute-0 podman[226438]: 2025-10-02 19:13:18.763560513 +0000 UTC m=+0.454100262 container remove 4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:13:18 compute-0 systemd[1]: libpod-conmon-4bbd98597c6996617d84e6cd07b8cafb6a84ecdebce2137673cf6e4dd31c67fa.scope: Deactivated successfully.
Oct 02 19:13:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 02 19:13:18 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 02 19:13:18 compute-0 podman[226477]: 2025-10-02 19:13:18.985826575 +0000 UTC m=+0.078776280 container create 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:13:19 compute-0 podman[226477]: 2025-10-02 19:13:18.954571838 +0000 UTC m=+0.047521583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:19 compute-0 systemd[1]: Started libpod-conmon-915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6.scope.
Oct 02 19:13:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:19 compute-0 podman[226477]: 2025-10-02 19:13:19.147651689 +0000 UTC m=+0.240601434 container init 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:13:19 compute-0 podman[226477]: 2025-10-02 19:13:19.185752699 +0000 UTC m=+0.278702404 container start 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:13:19 compute-0 podman[226477]: 2025-10-02 19:13:19.192335575 +0000 UTC m=+0.285285260 container attach 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:13:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 02 19:13:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 02 19:13:19 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 02 19:13:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 103 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=101/102 n=5 ec=53/43 lis/c=101/70 les/c/f=102/71/0 sis=103 pruub=15.242666245s) [0] async=[0] r=-1 lpr=103 pi=[70,103)/1 crt=49'389 mlcod 49'389 active pruub 183.873428345s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:19 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 103 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=101/102 n=5 ec=53/43 lis/c=101/70 les/c/f=102/71/0 sis=103 pruub=15.242557526s) [0] r=-1 lpr=103 pi=[70,103)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 183.873428345s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:19 compute-0 ceph-mon[191910]: pgmap v219: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 229 B/s wr, 7 op/s; 49 B/s, 1 objects/s recovering
Oct 02 19:13:19 compute-0 ceph-mon[191910]: osdmap e102: 3 total, 3 up, 3 in
Oct 02 19:13:19 compute-0 ceph-mon[191910]: osdmap e103: 3 total, 3 up, 3 in
Oct 02 19:13:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 103 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=101/70 les/c/f=102/71/0 sis=103) [0] r=0 lpr=103 pi=[70,103)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:19 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 103 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=101/70 les/c/f=102/71/0 sis=103) [0] r=0 lpr=103 pi=[70,103)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 02 19:13:20 compute-0 ceph-mon[191910]: 7.1 scrub starts
Oct 02 19:13:20 compute-0 ceph-mon[191910]: 7.1 scrub ok
Oct 02 19:13:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 02 19:13:20 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 02 19:13:20 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 104 pg[9.16( v 49'389 (0'0,49'389] local-lis/les=103/104 n=5 ec=53/43 lis/c=101/70 les/c/f=102/71/0 sis=103) [0] r=0 lpr=103 pi=[70,103)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:20 compute-0 sleepy_goldstine[226494]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:13:20 compute-0 sleepy_goldstine[226494]: --> relative data size: 1.0
Oct 02 19:13:20 compute-0 sleepy_goldstine[226494]: --> All data devices are unavailable
Oct 02 19:13:20 compute-0 systemd[1]: libpod-915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6.scope: Deactivated successfully.
Oct 02 19:13:20 compute-0 systemd[1]: libpod-915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6.scope: Consumed 1.221s CPU time.
Oct 02 19:13:20 compute-0 podman[226477]: 2025-10-02 19:13:20.476912745 +0000 UTC m=+1.569862410 container died 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7235b553b69a36ea5587cbcb7f3c0270851dcb71dab7c68976ab233fcf9c7193-merged.mount: Deactivated successfully.
Oct 02 19:13:20 compute-0 podman[226477]: 2025-10-02 19:13:20.572947497 +0000 UTC m=+1.665897162 container remove 915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:13:20 compute-0 systemd[1]: libpod-conmon-915efc4313a873da8332fe949c671176283f9947f9bbdae4cf2c2529da4ee7b6.scope: Deactivated successfully.
Oct 02 19:13:20 compute-0 sudo[226374]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:20 compute-0 podman[226528]: 2025-10-02 19:13:20.690019422 +0000 UTC m=+0.166683635 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 02 19:13:20 compute-0 sudo[226556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:20 compute-0 sudo[226556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:20 compute-0 sudo[226556]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct 02 19:13:20 compute-0 sudo[226584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:13:20 compute-0 sudo[226584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:20 compute-0 sudo[226584]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct 02 19:13:20 compute-0 sudo[226609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:20 compute-0 sudo[226609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:20 compute-0 sudo[226609]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:21 compute-0 sudo[226634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:13:21 compute-0 sudo[226634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 02 19:13:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 02 19:13:21 compute-0 ceph-mon[191910]: pgmap v222: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:21 compute-0 ceph-mon[191910]: osdmap e104: 3 total, 3 up, 3 in
Oct 02 19:13:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.60035443 +0000 UTC m=+0.083800245 container create 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.566214716 +0000 UTC m=+0.049660571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:21 compute-0 systemd[1]: Started libpod-conmon-322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc.scope.
Oct 02 19:13:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.758869275 +0000 UTC m=+0.242315130 container init 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.777488844 +0000 UTC m=+0.260934659 container start 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.785810247 +0000 UTC m=+0.269256092 container attach 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:13:21 compute-0 beautiful_brahmagupta[226717]: 167 167
Oct 02 19:13:21 compute-0 systemd[1]: libpod-322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc.scope: Deactivated successfully.
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.790487662 +0000 UTC m=+0.273933507 container died 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b692560fb152a56c5837fdb745c0c783675a13e1ccb23f46df19378b4ad58b54-merged.mount: Deactivated successfully.
Oct 02 19:13:21 compute-0 podman[226700]: 2025-10-02 19:13:21.876499355 +0000 UTC m=+0.359945160 container remove 322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:13:21 compute-0 systemd[1]: libpod-conmon-322399ada8185e2a9ffe3be70fb84cbd1d5052ba562c1190f903fc5ed516d6dc.scope: Deactivated successfully.
Oct 02 19:13:22 compute-0 podman[226744]: 2025-10-02 19:13:22.193123664 +0000 UTC m=+0.107418257 container create 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:13:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 02 19:13:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 02 19:13:22 compute-0 podman[226744]: 2025-10-02 19:13:22.143015822 +0000 UTC m=+0.057310445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:22 compute-0 systemd[1]: Started libpod-conmon-03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5.scope.
Oct 02 19:13:22 compute-0 ceph-mon[191910]: 6.15 scrub starts
Oct 02 19:13:22 compute-0 ceph-mon[191910]: 6.15 scrub ok
Oct 02 19:13:22 compute-0 ceph-mon[191910]: 2.13 scrub starts
Oct 02 19:13:22 compute-0 ceph-mon[191910]: 2.13 scrub ok
Oct 02 19:13:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0b0d235e8b1231f6e1cdd1ac6edb7b72a779002c11d2dca460a0beacf79950/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0b0d235e8b1231f6e1cdd1ac6edb7b72a779002c11d2dca460a0beacf79950/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0b0d235e8b1231f6e1cdd1ac6edb7b72a779002c11d2dca460a0beacf79950/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0b0d235e8b1231f6e1cdd1ac6edb7b72a779002c11d2dca460a0beacf79950/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:22 compute-0 podman[226744]: 2025-10-02 19:13:22.365090159 +0000 UTC m=+0.279384782 container init 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:13:22 compute-0 podman[226744]: 2025-10-02 19:13:22.384226992 +0000 UTC m=+0.298521615 container start 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:13:22 compute-0 podman[226744]: 2025-10-02 19:13:22.39011551 +0000 UTC m=+0.304410133 container attach 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:13:23 compute-0 sharp_moore[226760]: {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     "0": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "devices": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "/dev/loop3"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             ],
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_name": "ceph_lv0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_size": "21470642176",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "name": "ceph_lv0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "tags": {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_name": "ceph",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.crush_device_class": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.encrypted": "0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_id": "0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.vdo": "0"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             },
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "vg_name": "ceph_vg0"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         }
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     ],
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     "1": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "devices": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "/dev/loop4"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             ],
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_name": "ceph_lv1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_size": "21470642176",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "name": "ceph_lv1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "tags": {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_name": "ceph",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.crush_device_class": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.encrypted": "0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_id": "1",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.vdo": "0"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             },
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "vg_name": "ceph_vg1"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         }
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     ],
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     "2": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "devices": [
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "/dev/loop5"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             ],
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_name": "ceph_lv2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_size": "21470642176",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "name": "ceph_lv2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "tags": {
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.cluster_name": "ceph",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.crush_device_class": "",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.encrypted": "0",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osd_id": "2",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:                 "ceph.vdo": "0"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             },
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "type": "block",
Oct 02 19:13:23 compute-0 sharp_moore[226760]:             "vg_name": "ceph_vg2"
Oct 02 19:13:23 compute-0 sharp_moore[226760]:         }
Oct 02 19:13:23 compute-0 sharp_moore[226760]:     ]
Oct 02 19:13:23 compute-0 sharp_moore[226760]: }
Oct 02 19:13:23 compute-0 systemd[1]: libpod-03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5.scope: Deactivated successfully.
Oct 02 19:13:23 compute-0 conmon[226760]: conmon 03c8dbd8017f4dbb6182 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5.scope/container/memory.events
Oct 02 19:13:23 compute-0 ceph-mon[191910]: pgmap v224: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:23 compute-0 ceph-mon[191910]: 5.c scrub starts
Oct 02 19:13:23 compute-0 ceph-mon[191910]: 5.c scrub ok
Oct 02 19:13:23 compute-0 podman[226780]: 2025-10-02 19:13:23.3841934 +0000 UTC m=+0.064407506 container died 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb0b0d235e8b1231f6e1cdd1ac6edb7b72a779002c11d2dca460a0beacf79950-merged.mount: Deactivated successfully.
Oct 02 19:13:23 compute-0 podman[226780]: 2025-10-02 19:13:23.477001825 +0000 UTC m=+0.157215841 container remove 03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:13:23 compute-0 systemd[1]: libpod-conmon-03c8dbd8017f4dbb61825a0fc021e794d599847355632288ddf933e3314158b5.scope: Deactivated successfully.
Oct 02 19:13:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:23 compute-0 sudo[226634]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:23 compute-0 sudo[226802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:23 compute-0 sudo[226802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:23 compute-0 sudo[226802]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:23 compute-0 sudo[226827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:13:23 compute-0 sudo[226827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:23 compute-0 sudo[226827]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:23 compute-0 sudo[226856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:23 compute-0 sudo[226856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:23 compute-0 sudo[226856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:24 compute-0 sudo[226883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:13:24 compute-0 sudo[226883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 02 19:13:24 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 02 19:13:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.65204323 +0000 UTC m=+0.048387607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.746727186 +0000 UTC m=+0.143071553 container create 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:13:24 compute-0 systemd[1]: Started libpod-conmon-43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f.scope.
Oct 02 19:13:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.889995352 +0000 UTC m=+0.286339719 container init 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.900209246 +0000 UTC m=+0.296553603 container start 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:13:24 compute-0 exciting_leavitt[226976]: 167 167
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.906043712 +0000 UTC m=+0.302388079 container attach 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:13:24 compute-0 systemd[1]: libpod-43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f.scope: Deactivated successfully.
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.909131535 +0000 UTC m=+0.305475862 container died 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e47d19a1110b9578782a1e5e98554974c571d96f00b45451ce4cb8897f479cd2-merged.mount: Deactivated successfully.
Oct 02 19:13:24 compute-0 podman[226960]: 2025-10-02 19:13:24.978537363 +0000 UTC m=+0.374881700 container remove 43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:13:24 compute-0 systemd[1]: libpod-conmon-43ffe20e58c6ccc4dd78279f8270a1c206ade5f84e66554e259555aa56ced15f.scope: Deactivated successfully.
Oct 02 19:13:25 compute-0 podman[226999]: 2025-10-02 19:13:25.261814499 +0000 UTC m=+0.072903663 container create 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:13:25 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Oct 02 19:13:25 compute-0 systemd[1]: Started libpod-conmon-0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157.scope.
Oct 02 19:13:25 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Oct 02 19:13:25 compute-0 podman[226999]: 2025-10-02 19:13:25.239202284 +0000 UTC m=+0.050291418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:13:25 compute-0 ceph-mon[191910]: pgmap v225: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:25 compute-0 ceph-mon[191910]: 5.19 scrub starts
Oct 02 19:13:25 compute-0 ceph-mon[191910]: 5.19 scrub ok
Oct 02 19:13:25 compute-0 ceph-mon[191910]: 5.14 deep-scrub starts
Oct 02 19:13:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd2158c25eef9796464d9f8adc43a8f8eb045149c096465368a075f476c9dab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd2158c25eef9796464d9f8adc43a8f8eb045149c096465368a075f476c9dab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd2158c25eef9796464d9f8adc43a8f8eb045149c096465368a075f476c9dab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd2158c25eef9796464d9f8adc43a8f8eb045149c096465368a075f476c9dab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:25 compute-0 podman[226999]: 2025-10-02 19:13:25.405936709 +0000 UTC m=+0.217025813 container init 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:13:25 compute-0 podman[226999]: 2025-10-02 19:13:25.433483567 +0000 UTC m=+0.244572661 container start 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:13:25 compute-0 podman[226999]: 2025-10-02 19:13:25.438787089 +0000 UTC m=+0.249876253 container attach 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:13:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 19:13:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 02 19:13:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 19:13:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 02 19:13:26 compute-0 ceph-mon[191910]: 5.14 deep-scrub ok
Oct 02 19:13:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 19:13:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 19:13:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 02 19:13:26 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 02 19:13:26 compute-0 amazing_easley[227015]: {
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_id": 1,
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "type": "bluestore"
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     },
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_id": 2,
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "type": "bluestore"
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     },
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_id": 0,
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:13:26 compute-0 amazing_easley[227015]:         "type": "bluestore"
Oct 02 19:13:26 compute-0 amazing_easley[227015]:     }
Oct 02 19:13:26 compute-0 amazing_easley[227015]: }
Oct 02 19:13:26 compute-0 systemd[1]: libpod-0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157.scope: Deactivated successfully.
Oct 02 19:13:26 compute-0 podman[226999]: 2025-10-02 19:13:26.663990019 +0000 UTC m=+1.475079153 container died 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:13:26 compute-0 systemd[1]: libpod-0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157.scope: Consumed 1.225s CPU time.
Oct 02 19:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd2158c25eef9796464d9f8adc43a8f8eb045149c096465368a075f476c9dab-merged.mount: Deactivated successfully.
Oct 02 19:13:26 compute-0 podman[226999]: 2025-10-02 19:13:26.768533418 +0000 UTC m=+1.579622512 container remove 0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:13:26 compute-0 systemd[1]: libpod-conmon-0bf3b89a0b3c7aa2ee59cf605ed0707d445ded8a52a95986b357350f1c6d6157.scope: Deactivated successfully.
Oct 02 19:13:26 compute-0 sudo[226883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:13:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:13:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3bfa9202-03db-4533-a44f-cd4598f0dedf does not exist
Oct 02 19:13:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d6217397-5676-4400-b611-0adfda278d4d does not exist
Oct 02 19:13:26 compute-0 sudo[227061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:13:26 compute-0 sudo[227061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:26 compute-0 sudo[227061]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:27 compute-0 sudo[227086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:13:27 compute-0 sudo[227086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:13:27 compute-0 sudo[227086]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct 02 19:13:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct 02 19:13:27 compute-0 ceph-mon[191910]: pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 19:13:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 19:13:27 compute-0 ceph-mon[191910]: osdmap e105: 3 total, 3 up, 3 in
Oct 02 19:13:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:13:27 compute-0 ceph-mon[191910]: 2.11 deep-scrub starts
Oct 02 19:13:27 compute-0 ceph-mon[191910]: 2.11 deep-scrub ok
Oct 02 19:13:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 02 19:13:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 19:13:28 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 02 19:13:28 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 02 19:13:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 02 19:13:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 19:13:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 19:13:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 02 19:13:28 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 02 19:13:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 02 19:13:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 02 19:13:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 02 19:13:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 02 19:13:29 compute-0 ceph-mon[191910]: pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:29 compute-0 ceph-mon[191910]: 5.18 deep-scrub starts
Oct 02 19:13:29 compute-0 ceph-mon[191910]: 5.18 deep-scrub ok
Oct 02 19:13:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 19:13:29 compute-0 ceph-mon[191910]: osdmap e106: 3 total, 3 up, 3 in
Oct 02 19:13:29 compute-0 ceph-mon[191910]: 5.7 scrub starts
Oct 02 19:13:29 compute-0 ceph-mon[191910]: 5.7 scrub ok
Oct 02 19:13:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 02 19:13:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 19:13:29 compute-0 podman[157186]: time="2025-10-02T19:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:13:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:13:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6814 "" "Go-http-client/1.1"
Oct 02 19:13:30 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Oct 02 19:13:30 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Oct 02 19:13:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 02 19:13:30 compute-0 ceph-mon[191910]: 4.14 scrub starts
Oct 02 19:13:30 compute-0 ceph-mon[191910]: 4.14 scrub ok
Oct 02 19:13:30 compute-0 ceph-mon[191910]: pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 19:13:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 19:13:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 02 19:13:30 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 02 19:13:30 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 107 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=107 pruub=11.879725456s) [2] r=-1 lpr=107 pi=[60,107)/1 crt=49'389 mlcod 0'0 active pruub 205.082214355s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:30 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 107 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=107 pruub=11.879671097s) [2] r=-1 lpr=107 pi=[60,107)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 205.082214355s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:30 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=107) [2] r=0 lpr=107 pi=[60,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:31 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 02 19:13:31 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 02 19:13:31 compute-0 openstack_network_exporter[159337]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:31 compute-0 openstack_network_exporter[159337]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:31 compute-0 openstack_network_exporter[159337]: ERROR   19:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:13:31 compute-0 openstack_network_exporter[159337]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:13:31 compute-0 openstack_network_exporter[159337]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:13:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 02 19:13:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 02 19:13:31 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 02 19:13:31 compute-0 ceph-mon[191910]: 6.17 scrub starts
Oct 02 19:13:31 compute-0 ceph-mon[191910]: 6.17 scrub ok
Oct 02 19:13:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 19:13:31 compute-0 ceph-mon[191910]: osdmap e107: 3 total, 3 up, 3 in
Oct 02 19:13:31 compute-0 ceph-mon[191910]: 5.2 scrub starts
Oct 02 19:13:31 compute-0 ceph-mon[191910]: 5.2 scrub ok
Oct 02 19:13:31 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[60,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:31 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[60,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:31 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 108 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=108) [2]/[0] r=0 lpr=108 pi=[60,108)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:31 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 108 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=60/61 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=108) [2]/[0] r=0 lpr=108 pi=[60,108)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 02 19:13:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 19:13:31 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 02 19:13:31 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 02 19:13:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 02 19:13:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 02 19:13:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 02 19:13:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 02 19:13:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 02 19:13:32 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 19:13:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 02 19:13:32 compute-0 ceph-mon[191910]: osdmap e108: 3 total, 3 up, 3 in
Oct 02 19:13:32 compute-0 ceph-mon[191910]: pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 19:13:32 compute-0 ceph-mon[191910]: 7.f scrub starts
Oct 02 19:13:32 compute-0 ceph-mon[191910]: 7.f scrub ok
Oct 02 19:13:32 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 02 19:13:32 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 02 19:13:32 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 02 19:13:32 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 109 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=108/109 n=5 ec=53/43 lis/c=60/60 les/c/f=61/61/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[60,108)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:33 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 02 19:13:33 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 02 19:13:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 4.18 scrub starts
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 4.18 scrub ok
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 4.12 scrub starts
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 4.12 scrub ok
Oct 02 19:13:33 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 19:13:33 compute-0 ceph-mon[191910]: osdmap e109: 3 total, 3 up, 3 in
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 3.a scrub starts
Oct 02 19:13:33 compute-0 ceph-mon[191910]: 3.a scrub ok
Oct 02 19:13:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:33 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 02 19:13:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 02 19:13:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 19:13:33 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 110 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=108/109 n=5 ec=53/43 lis/c=108/60 les/c/f=109/61/0 sis=110 pruub=15.404472351s) [2] async=[2] r=-1 lpr=110 pi=[60,110)/1 crt=49'389 mlcod 49'389 active pruub 211.687637329s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:33 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 110 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=108/109 n=5 ec=53/43 lis/c=108/60 les/c/f=109/61/0 sis=110 pruub=15.404338837s) [2] r=-1 lpr=110 pi=[60,110)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 211.687637329s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 110 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=108/60 les/c/f=109/61/0 sis=110) [2] r=0 lpr=110 pi=[60,110)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:33 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 110 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=108/60 les/c/f=109/61/0 sis=110) [2] r=0 lpr=110 pi=[60,110)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:13:33 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 02 19:13:33 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 02 19:13:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 02 19:13:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 02 19:13:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:34 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 02 19:13:34 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 02 19:13:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 02 19:13:34 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 19:13:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 02 19:13:34 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
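[annotation] Each pgp_num_actual bump follows the same cycle visible above: handle_command, audit dispatch, a new osdmap epoch, audit finished. The mgr walks the value up one step per epoch (28, 29, ..., 32 in this section), which throttles data movement while pgp_num catches up to pg_num. A minimal sketch of issuing the same monitor command directly, assuming python3-rados and admin credentials at the default paths:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'osd pool set',
                          'pool': 'default.rgw.log',
                          'var': 'pgp_num_actual',
                          'val': '28'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)   # ret == 0 -> mon logs dispatch/finished as above
    finally:
        cluster.shutdown()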
Oct 02 19:13:34 compute-0 ceph-mon[191910]: 4.13 scrub starts
Oct 02 19:13:34 compute-0 ceph-mon[191910]: 4.13 scrub ok
Oct 02 19:13:34 compute-0 ceph-mon[191910]: pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:34 compute-0 ceph-mon[191910]: osdmap e110: 3 total, 3 up, 3 in
Oct 02 19:13:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 19:13:34 compute-0 ceph-mon[191910]: 3.9 scrub starts
Oct 02 19:13:34 compute-0 ceph-mon[191910]: 3.9 scrub ok
Oct 02 19:13:34 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 111 pg[9.19( v 49'389 (0'0,49'389] local-lis/les=110/111 n=5 ec=53/43 lis/c=108/60 les/c/f=109/61/0 sis=110) [2] r=0 lpr=110 pi=[60,110)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
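[annotation] The pgmap lines carry a running state mix for the 321 PGs (later in this section a brief "1 remapped+peering, 320 active+clean" appears during the pgp_num change). A sketch that reduces them to structured values for trending:

    import re

    PGMAP = re.compile(r'pgmap v(\d+): (\d+) pgs: (.+?);')

    def parse_pgmap(line):
        m = PGMAP.search(line)
        if not m:
            return None
        version, total, states = m.groups()
        mix = {}
        for part in states.split(','):
            n, state = part.strip().split(' ', 1)
            mix[state] = int(n)
        return int(version), int(total), mix

    # parse_pgmap('... pgmap v238: 321 pgs: 321 active+clean; 456 KiB data ...')
    # -> (238, 321, {'active+clean': 321})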
Oct 02 19:13:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 02 19:13:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 19:13:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 02 19:13:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 19:13:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 02 19:13:35 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 02 19:13:36 compute-0 ceph-mon[191910]: 6.14 scrub starts
Oct 02 19:13:36 compute-0 ceph-mon[191910]: 6.14 scrub ok
Oct 02 19:13:36 compute-0 ceph-mon[191910]: 4.10 scrub starts
Oct 02 19:13:36 compute-0 ceph-mon[191910]: 4.10 scrub ok
Oct 02 19:13:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 19:13:36 compute-0 ceph-mon[191910]: osdmap e111: 3 total, 3 up, 3 in
Oct 02 19:13:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 19:13:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 02 19:13:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 02 19:13:36 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 02 19:13:36 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 02 19:13:36 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 02 19:13:36 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 02 19:13:37 compute-0 ceph-mon[191910]: pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 19:13:37 compute-0 ceph-mon[191910]: osdmap e112: 3 total, 3 up, 3 in
Oct 02 19:13:37 compute-0 ceph-mon[191910]: 3.17 scrub starts
Oct 02 19:13:37 compute-0 ceph-mon[191910]: 3.17 scrub ok
Oct 02 19:13:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 112 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=5 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=10.390202522s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=49'389 mlcod 0'0 active pruub 197.031204224s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:37 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 112 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=5 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=10.388632774s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 197.031204224s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:37 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Oct 02 19:13:37 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:37 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Oct 02 19:13:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Oct 02 19:13:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 02 19:13:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 19:13:37 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Oct 02 19:13:37 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Oct 02 19:13:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 02 19:13:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 19:13:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 6.11 scrub starts
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 6.11 scrub ok
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 4.f scrub starts
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 4.f scrub ok
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 3.15 deep-scrub starts
Oct 02 19:13:38 compute-0 ceph-mon[191910]: 3.15 deep-scrub ok
Oct 02 19:13:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 19:13:38 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 02 19:13:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 113 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=5 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:38 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 113 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=83/84 n=5 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:38 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 02 19:13:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 02 19:13:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 02 19:13:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 02 19:13:39 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 02 19:13:39 compute-0 ceph-mon[191910]: pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Oct 02 19:13:39 compute-0 ceph-mon[191910]: 4.e deep-scrub starts
Oct 02 19:13:39 compute-0 ceph-mon[191910]: 4.e deep-scrub ok
Oct 02 19:13:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 19:13:39 compute-0 ceph-mon[191910]: osdmap e113: 3 total, 3 up, 3 in
Oct 02 19:13:39 compute-0 ceph-mon[191910]: 3.12 scrub starts
Oct 02 19:13:39 compute-0 ceph-mon[191910]: 3.12 scrub ok
Oct 02 19:13:39 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 114 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=113/114 n=5 ec=53/43 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Oct 02 19:13:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 02 19:13:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 19:13:39 compute-0 podman[227141]: 2025-10-02 19:13:39.707089612 +0000 UTC m=+0.109481643 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:13:39 compute-0 podman[227140]: 2025-10-02 19:13:39.709189798 +0000 UTC m=+0.117430445 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
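[annotation] The podman[...] lines above are periodic healthcheck events for the edpm_ansible-managed containers; health_status=healthy with health_failing_streak=0 is the steady state. The same state can be polled directly (sketch, assuming the podman CLI and the container names shown above):

    import json
    import subprocess

    def health(container):
        out = subprocess.check_output(
            ['podman', 'inspect', '--format', '{{json .State.Health}}', container])
        return json.loads(out) or {}   # {} for containers without a healthcheck

    for name in ('podman_exporter', 'ceilometer_agent_compute'):
        h = health(name)
        print(name, h.get('Status'), 'failing_streak:', h.get('FailingStreak'))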
Oct 02 19:13:40 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 02 19:13:40 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 02 19:13:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 02 19:13:40 compute-0 ceph-mon[191910]: osdmap e114: 3 total, 3 up, 3 in
Oct 02 19:13:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 19:13:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 19:13:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 02 19:13:40 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 02 19:13:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 115 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=113/114 n=5 ec=53/43 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.958153725s) [0] async=[0] r=-1 lpr=115 pi=[83,115)/1 crt=49'389 mlcod 49'389 active pruub 204.539520264s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 115 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=115 pruub=9.096595764s) [0] r=-1 lpr=115 pi=[70,115)/1 crt=49'389 mlcod 0'0 active pruub 198.678283691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 115 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=113/114 n=5 ec=53/43 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.957871437s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 204.539520264s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:40 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 115 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=115 pruub=9.096514702s) [0] r=-1 lpr=115 pi=[70,115)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 198.678283691s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=115) [0] r=0 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 115 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:40 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 115 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:40 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 02 19:13:40 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 02 19:13:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 02 19:13:41 compute-0 ceph-mon[191910]: pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Oct 02 19:13:41 compute-0 ceph-mon[191910]: 6.c scrub starts
Oct 02 19:13:41 compute-0 ceph-mon[191910]: 6.c scrub ok
Oct 02 19:13:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 19:13:41 compute-0 ceph-mon[191910]: osdmap e115: 3 total, 3 up, 3 in
Oct 02 19:13:41 compute-0 ceph-mon[191910]: 7.1b scrub starts
Oct 02 19:13:41 compute-0 ceph-mon[191910]: 7.1b scrub ok
Oct 02 19:13:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 02 19:13:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 02 19:13:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:13:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 02 19:13:41 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 02 19:13:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[70,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[70,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 116 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=116) [0]/[2] r=0 lpr=116 pi=[70,116)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:41 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 116 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=70/71 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=116) [0]/[2] r=0 lpr=116 pi=[70,116)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:41 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 116 pg[9.1c( v 49'389 (0'0,49'389] local-lis/les=115/116 n=5 ec=53/43 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:41 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 02 19:13:41 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 02 19:13:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 19:13:42 compute-0 ceph-mon[191910]: osdmap e116: 3 total, 3 up, 3 in
Oct 02 19:13:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 02 19:13:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:13:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 02 19:13:42 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 02 19:13:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 117 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=117 pruub=11.815251350s) [1] r=-1 lpr=117 pi=[75,117)/1 crt=49'389 mlcod 0'0 active pruub 203.768066406s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 117 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=117 pruub=11.815056801s) [1] r=-1 lpr=117 pi=[75,117)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 203.768066406s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:42 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=117) [1] r=0 lpr=117 pi=[75,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:42 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 117 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=116/117 n=5 ec=53/43 lis/c=70/70 les/c/f=71/71/0 sis=116) [0]/[2] async=[0] r=0 lpr=116 pi=[70,116)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:43 compute-0 ceph-mon[191910]: pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct 02 19:13:43 compute-0 ceph-mon[191910]: 6.f scrub starts
Oct 02 19:13:43 compute-0 ceph-mon[191910]: 6.f scrub ok
Oct 02 19:13:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 19:13:43 compute-0 ceph-mon[191910]: osdmap e117: 3 total, 3 up, 3 in
Oct 02 19:13:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 02 19:13:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 02 19:13:43 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 118 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=116/70 les/c/f=117/71/0 sis=118) [0] r=0 lpr=118 pi=[70,118)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:43 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 118 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=116/70 les/c/f=117/71/0 sis=118) [0] r=0 lpr=118 pi=[70,118)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:43 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 02 19:13:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 118 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=116/117 n=5 ec=53/43 lis/c=116/70 les/c/f=117/71/0 sis=118 pruub=14.993392944s) [0] async=[0] r=-1 lpr=118 pi=[70,118)/1 crt=49'389 mlcod 49'389 active pruub 207.970657349s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[75,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:43 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[75,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 118 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=116/117 n=5 ec=53/43 lis/c=116/70 les/c/f=117/71/0 sis=118 pruub=14.989396095s) [0] r=-1 lpr=118 pi=[70,118)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 207.970657349s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 118 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=118) [1]/[2] r=0 lpr=118 pi=[75,118)/1 crt=49'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:43 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 118 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=75/76 n=5 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=118) [1]/[2] r=0 lpr=118 pi=[75,118)/1 crt=49'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:43 compute-0 podman[227181]: 2025-10-02 19:13:43.75080678 +0000 UTC m=+0.173968920 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:13:43 compute-0 podman[227182]: 2025-10-02 19:13:43.80608583 +0000 UTC m=+0.217832874 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:13:43 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 02 19:13:43 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 02 19:13:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 02 19:13:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 02 19:13:44 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 02 19:13:44 compute-0 ceph-mon[191910]: pgmap v248: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:13:44 compute-0 ceph-mon[191910]: osdmap e118: 3 total, 3 up, 3 in
Oct 02 19:13:44 compute-0 ceph-osd[206053]: osd.0 pg_epoch: 119 pg[9.1e( v 49'389 (0'0,49'389] local-lis/les=118/119 n=5 ec=53/43 lis/c=116/70 les/c/f=117/71/0 sis=118) [0] r=0 lpr=118 pi=[70,118)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:44 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 119 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=118/119 n=5 ec=53/43 lis/c=75/75 les/c/f=76/76/0 sis=118) [1]/[2] async=[1] r=0 lpr=118 pi=[75,118)/1 crt=49'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 02 19:13:45 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 02 19:13:45 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 02 19:13:45 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 02 19:13:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:13:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 02 19:13:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 02 19:13:45 compute-0 ceph-mon[191910]: 4.1a scrub starts
Oct 02 19:13:45 compute-0 ceph-mon[191910]: 4.1a scrub ok
Oct 02 19:13:45 compute-0 ceph-mon[191910]: osdmap e119: 3 total, 3 up, 3 in
Oct 02 19:13:45 compute-0 ceph-mon[191910]: 7.13 scrub starts
Oct 02 19:13:45 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 02 19:13:45 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 120 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=118/119 n=5 ec=53/43 lis/c=118/75 les/c/f=119/76/0 sis=120 pruub=14.983733177s) [1] async=[1] r=-1 lpr=120 pi=[75,120)/1 crt=49'389 mlcod 49'389 active pruub 209.998764038s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:45 compute-0 ceph-osd[208121]: osd.2 pg_epoch: 120 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=118/119 n=5 ec=53/43 lis/c=118/75 les/c/f=119/76/0 sis=120 pruub=14.982284546s) [1] r=-1 lpr=120 pi=[75,120)/1 crt=49'389 mlcod 0'0 unknown NOTIFY pruub 209.998764038s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 19:13:45 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 120 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=118/75 les/c/f=119/76/0 sis=120) [1] r=0 lpr=120 pi=[75,120)/1 luod=0'0 crt=49'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 19:13:45 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 120 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=0/0 n=5 ec=53/43 lis/c=118/75 les/c/f=119/76/0 sis=120) [1] r=0 lpr=120 pi=[75,120)/1 crt=49'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 19:13:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 02 19:13:46 compute-0 ceph-mon[191910]: 6.2 scrub starts
Oct 02 19:13:46 compute-0 ceph-mon[191910]: 6.2 scrub ok
Oct 02 19:13:46 compute-0 ceph-mon[191910]: 7.13 scrub ok
Oct 02 19:13:46 compute-0 ceph-mon[191910]: pgmap v251: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:13:46 compute-0 ceph-mon[191910]: osdmap e120: 3 total, 3 up, 3 in
Oct 02 19:13:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 02 19:13:46 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 02 19:13:46 compute-0 ceph-osd[207106]: osd.1 pg_epoch: 121 pg[9.1f( v 49'389 (0'0,49'389] local-lis/les=120/121 n=5 ec=53/43 lis/c=118/75 les/c/f=119/76/0 sis=120) [1] r=0 lpr=120 pi=[75,120)/1 crt=49'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 19:13:46 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 02 19:13:46 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 02 19:13:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:13:47 compute-0 ceph-mon[191910]: osdmap e121: 3 total, 3 up, 3 in
Oct 02 19:13:48 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 02 19:13:48 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 02 19:13:48 compute-0 ceph-mon[191910]: 4.a scrub starts
Oct 02 19:13:48 compute-0 ceph-mon[191910]: 4.a scrub ok
Oct 02 19:13:48 compute-0 ceph-mon[191910]: pgmap v254: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 02 19:13:48 compute-0 podman[227225]: 2025-10-02 19:13:48.702125773 +0000 UTC m=+0.121095854 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:13:48 compute-0 podman[227224]: 2025-10-02 19:13:48.737365176 +0000 UTC m=+0.162087531 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 19:13:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 02 19:13:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 02 19:13:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Oct 02 19:13:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Oct 02 19:13:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 02 19:13:49 compute-0 ceph-mon[191910]: 3.1f scrub starts
Oct 02 19:13:49 compute-0 ceph-mon[191910]: 3.1f scrub ok
Oct 02 19:13:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 02 19:13:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 02 19:13:50 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 02 19:13:50 compute-0 sudo[225656]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:50 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 02 19:13:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 02 19:13:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 02 19:13:50 compute-0 ceph-mon[191910]: 6.8 scrub starts
Oct 02 19:13:50 compute-0 ceph-mon[191910]: 6.8 scrub ok
Oct 02 19:13:50 compute-0 ceph-mon[191910]: 7.3 deep-scrub starts
Oct 02 19:13:50 compute-0 ceph-mon[191910]: 7.3 deep-scrub ok
Oct 02 19:13:50 compute-0 ceph-mon[191910]: pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 02 19:13:50 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.13 deep-scrub starts
Oct 02 19:13:50 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.13 deep-scrub ok
Oct 02 19:13:50 compute-0 sudo[227429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpoztkbxdwwauzeiffljzrutbljojkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432430.3197892-128-188640519598524/AnsiballZ_command.py'
Oct 02 19:13:50 compute-0 sudo[227429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:50 compute-0 podman[227387]: 2025-10-02 19:13:50.916910483 +0000 UTC m=+0.131466852 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Oct 02 19:13:51 compute-0 python3.9[227434]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
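[annotation] The task above is a package verification pass (rpm -V) over the EDPM baseline packages; rpm -V prints nothing for a clean package and one flag line per altered file. A sketch of the same check with the interesting flags decoded, using a subset of the task's package list:

    import subprocess

    PKGS = ['driverctl', 'lvm2', 'crudini', 'jq', 'nftables']  # subset of the list above

    proc = subprocess.run(['rpm', '-V', *PKGS], capture_output=True, text=True)
    # Output lines look like "S.5....T.  c /etc/some.conf": S=size, M=mode,
    # 5=digest, T=mtime differ from what the rpm database recorded.
    for line in proc.stdout.splitlines():
        flags, path = line[:9], line.split()[-1]
        if any(f in flags for f in 'S5M'):
            print('modified:', path, flags)
    print('verify rc:', proc.returncode)   # 0 means everything verified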
Oct 02 19:13:51 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 02 19:13:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 19:13:51 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 4.1b scrub starts
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 4.1b scrub ok
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 4.d scrub starts
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 4.d scrub ok
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 10.8 scrub starts
Oct 02 19:13:51 compute-0 ceph-mon[191910]: 10.8 scrub ok
Oct 02 19:13:52 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 02 19:13:52 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 02 19:13:52 compute-0 ceph-mon[191910]: 6.13 deep-scrub starts
Oct 02 19:13:52 compute-0 ceph-mon[191910]: 6.13 deep-scrub ok
Oct 02 19:13:52 compute-0 ceph-mon[191910]: 10.4 scrub starts
Oct 02 19:13:52 compute-0 ceph-mon[191910]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 02 19:13:52 compute-0 ceph-mon[191910]: 10.4 scrub ok
Oct 02 19:13:52 compute-0 sudo[227429]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:53 compute-0 ceph-mon[191910]: 6.e scrub starts
Oct 02 19:13:53 compute-0 ceph-mon[191910]: 6.e scrub ok
Oct 02 19:13:53 compute-0 sudo[227719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obxnzrecqlbybpknvfsjyprefevnpymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432433.035057-136-153260165231329/AnsiballZ_selinux.py'
Oct 02 19:13:53 compute-0 sudo[227719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:54 compute-0 python3.9[227721]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 19:13:54 compute-0 sudo[227719]: pam_unix(sudo:session): session closed for user root
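[annotation] The ansible.posix.selinux invocation above (policy=targeted, state=enforcing) asserts both the runtime mode and the persisted /etc/selinux/config. A small sketch that verifies what the task enforces:

    import subprocess

    runtime = subprocess.check_output(['getenforce'], text=True).strip()
    with open('/etc/selinux/config') as f:
        conf = dict(line.strip().split('=', 1)
                    for line in f
                    if '=' in line and not line.lstrip().startswith('#'))
    print('runtime:', runtime,
          'persisted:', conf.get('SELINUX'), conf.get('SELINUXTYPE'))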
Oct 02 19:13:54 compute-0 ceph-mon[191910]: pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 02 19:13:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 02 19:13:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 02 19:13:55 compute-0 sudo[227871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufbordhhltqkvnlulaogockxpeehmjrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432434.712728-147-213692269807053/AnsiballZ_command.py'
Oct 02 19:13:55 compute-0 sudo[227871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:55 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 02 19:13:55 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 02 19:13:55 compute-0 python3.9[227873]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 19:13:55 compute-0 sudo[227871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Oct 02 19:13:55 compute-0 ceph-mon[191910]: 4.1c scrub starts
Oct 02 19:13:55 compute-0 ceph-mon[191910]: 4.1c scrub ok
Oct 02 19:13:56 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct 02 19:13:56 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct 02 19:13:56 compute-0 sudo[228023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyebcngryfxhsiufnndzcsmxwockfbst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432435.764068-155-121574721555963/AnsiballZ_file.py'
Oct 02 19:13:56 compute-0 sudo[228023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:56 compute-0 python3.9[228025]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:56 compute-0 sudo[228023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:56 compute-0 ceph-mon[191910]: 10.d scrub starts
Oct 02 19:13:56 compute-0 ceph-mon[191910]: 10.d scrub ok
Oct 02 19:13:56 compute-0 ceph-mon[191910]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Oct 02 19:13:57 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Oct 02 19:13:57 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Oct 02 19:13:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 02 19:13:57 compute-0 sudo[228175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldmfsrntvjqpuvamsmislantidncxsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432436.973836-163-151431203761233/AnsiballZ_mount.py'
Oct 02 19:13:57 compute-0 sudo[228175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:57 compute-0 ceph-mon[191910]: 6.1 deep-scrub starts
Oct 02 19:13:57 compute-0 ceph-mon[191910]: 6.1 deep-scrub ok
Oct 02 19:13:57 compute-0 python3.9[228177]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 19:13:57 compute-0 sudo[228175]: pam_unix(sudo:session): session closed for user root
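[annotation] Taken together, the last three zuul tasks build a swap file: dd a 1 GiB /swap (skipped when it exists, per creates=/swap), chmod it 0600, and persist a "none swap sw" fstab entry via ansible.posix.mount (state=present only edits fstab; mkswap/swapon are presumably separate tasks not shown on these lines). A sketch of the same sequence, assuming root:

    import os
    import subprocess

    if not os.path.exists('/swap'):                       # creates=/swap
        subprocess.run(['dd', 'if=/dev/zero', 'of=/swap',
                        'count=1024', 'bs=1M'], check=True)
    os.chmod('/swap', 0o600)                              # mode=0600, root:root

    # state=present: append the fstab line only if it is absent.
    with open('/etc/fstab') as f:
        have = any(line.split()[:1] == ['/swap'] for line in f)
    if not have:
        with open('/etc/fstab', 'a') as f:
            f.write('/swap none swap sw 0 0\n')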
Oct 02 19:13:58 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 02 19:13:58 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 02 19:13:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 02 19:13:58 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 02 19:13:58 compute-0 ceph-mon[191910]: 10.9 deep-scrub starts
Oct 02 19:13:58 compute-0 ceph-mon[191910]: 10.9 deep-scrub ok
Oct 02 19:13:58 compute-0 ceph-mon[191910]: pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 02 19:13:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:13:59 compute-0 sudo[228327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswjollbqohhldksizddbmqlkhcweryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432438.7744093-191-87468352120486/AnsiballZ_file.py'
Oct 02 19:13:59 compute-0 sudo[228327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 19:13:59 compute-0 python3.9[228329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:59 compute-0 sudo[228327]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:59 compute-0 podman[157186]: time="2025-10-02T19:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:13:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:13:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Oct 02 19:13:59 compute-0 ceph-mon[191910]: 4.2 scrub starts
Oct 02 19:13:59 compute-0 ceph-mon[191910]: 4.2 scrub ok
Oct 02 19:13:59 compute-0 ceph-mon[191910]: 10.7 scrub starts
Oct 02 19:13:59 compute-0 ceph-mon[191910]: 10.7 scrub ok
Oct 02 19:14:00 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct 02 19:14:00 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct 02 19:14:00 compute-0 sudo[228479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklwajvucfrvmqglxqciklteuumgvyms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432439.9544265-199-102903762304724/AnsiballZ_stat.py'
Oct 02 19:14:00 compute-0 sudo[228479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:00 compute-0 python3.9[228481]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:00 compute-0 sudo[228479]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:00 compute-0 ceph-mon[191910]: pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 19:14:01 compute-0 sudo[228557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfccgcztmratvoykzoycwyfzkpbkfyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432439.9544265-199-102903762304724/AnsiballZ_file.py'
Oct 02 19:14:01 compute-0 sudo[228557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:01 compute-0 python3.9[228559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
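
The stat/file pair around tls-ca-bundle.pem (ansible.legacy.stat at 19:14:00, then ansible.legacy.file at 19:14:01) is how a single copy task looks from the target side: Ansible first checksums the destination, then only enforces ownership and mode because the content already matched. A plausible reconstruction of the controller-side task; the src path and the handler are assumptions:

    - name: Install the CA bundle into the trust anchors directory
      ansible.builtin.copy:
        src: tls-ca-bundle.pem       # assumed; matches _original_basename above
        dest: /etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem
        owner: root
        group: root
        mode: "0644"
      notify: update ca trust        # hypothetical handler running update-ca-trust
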
Oct 02 19:14:01 compute-0 openstack_network_exporter[159337]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:01 compute-0 openstack_network_exporter[159337]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:01 compute-0 openstack_network_exporter[159337]: ERROR   19:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:14:01 compute-0 openstack_network_exporter[159337]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:14:01 compute-0 openstack_network_exporter[159337]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:14:01 compute-0 sudo[228557]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 19:14:01 compute-0 ceph-mon[191910]: 10.17 deep-scrub starts
Oct 02 19:14:01 compute-0 ceph-mon[191910]: 10.17 deep-scrub ok
Oct 02 19:14:01 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 02 19:14:01 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 02 19:14:02 compute-0 sudo[228709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuqyitfglzqeqxrdpuogpehknrptwmtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432442.1542552-223-55620214937201/AnsiballZ_getent.py'
Oct 02 19:14:02 compute-0 sudo[228709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 02 19:14:02 compute-0 ceph-mon[191910]: pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 02 19:14:02 compute-0 ceph-mon[191910]: 6.1f scrub starts
Oct 02 19:14:02 compute-0 ceph-mon[191910]: 6.1f scrub ok
Oct 02 19:14:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 02 19:14:03 compute-0 python3.9[228711]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
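
The getent call above verifies that the qemu account exists before qemu-owned paths are created further down; with fail_key=True the task fails hard if the key is missing. As a task:

    - name: Verify the qemu user exists
      ansible.builtin.getent:
        database: passwd
        key: qemu
        fail_key: true
    # The result is exposed as the getent_passwd fact,
    # e.g. getent_passwd['qemu'] holds the passwd fields.
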
Oct 02 19:14:03 compute-0 sudo[228709]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:14:03
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms']
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:14:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:14:03 compute-0 ceph-mon[191910]: 4.11 scrub starts
Oct 02 19:14:03 compute-0 ceph-mon[191910]: 4.11 scrub ok
Oct 02 19:14:03 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 02 19:14:03 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 02 19:14:03 compute-0 sudo[228862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxzywsrvnfiwpkllympkpetdoyuapig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432443.4125524-233-192836226145996/AnsiballZ_getent.py'
Oct 02 19:14:03 compute-0 sudo[228862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:04 compute-0 python3.9[228864]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 19:14:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 02 19:14:04 compute-0 sudo[228862]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 02 19:14:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:04 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 02 19:14:04 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 02 19:14:04 compute-0 ceph-mon[191910]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:04 compute-0 ceph-mon[191910]: 10.3 scrub starts
Oct 02 19:14:04 compute-0 ceph-mon[191910]: 10.3 scrub ok
Oct 02 19:14:04 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 02 19:14:04 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 02 19:14:05 compute-0 sudo[229015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvyfiladqnkbkkbqysufpdaftekmkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432444.474014-241-265907949716659/AnsiballZ_group.py'
Oct 02 19:14:05 compute-0 sudo[229015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:05 compute-0 python3.9[229017]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
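
Following the hugetlbfs lookup at 19:14:04, the play pins the group to a fixed GID, which keeps the GID identical across compute nodes (relevant when hugepage-backed files are shared with containers). Equivalent task:

    - name: Ensure the hugetlbfs group exists with a fixed GID
      ansible.builtin.group:
        name: hugetlbfs
        gid: 42477
        state: present
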
Oct 02 19:14:05 compute-0 sudo[229015]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 4.4 scrub starts
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 4.4 scrub ok
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 10.e scrub starts
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 10.e scrub ok
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 10.5 scrub starts
Oct 02 19:14:05 compute-0 ceph-mon[191910]: 10.5 scrub ok
Oct 02 19:14:05 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 02 19:14:05 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 02 19:14:06 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 02 19:14:06 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 02 19:14:06 compute-0 sudo[229167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arflvnrvwxtuynvormvlrcxzyspnchjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432445.8411684-250-157071972034949/AnsiballZ_file.py'
Oct 02 19:14:06 compute-0 sudo[229167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:06 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 02 19:14:06 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 02 19:14:06 compute-0 python3.9[229169]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
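
/var/lib/vhost_sockets is created qemu-owned with the virt_cache_t SELinux type, the usual arrangement for exchanging vhost-user sockets between QEMU and a DPDK-enabled Open vSwitch. Task sketch:

    - name: Create the vhost-user socket directory
      ansible.builtin.file:
        path: /var/lib/vhost_sockets
        state: directory
        owner: qemu
        group: qemu
        mode: "0755"
        seuser: system_u
        setype: virt_cache_t
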
Oct 02 19:14:06 compute-0 sudo[229167]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:06 compute-0 ceph-mon[191910]: pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:06 compute-0 ceph-mon[191910]: 10.a scrub starts
Oct 02 19:14:06 compute-0 ceph-mon[191910]: 10.a scrub ok
Oct 02 19:14:06 compute-0 ceph-mon[191910]: 6.4 scrub starts
Oct 02 19:14:06 compute-0 ceph-mon[191910]: 6.4 scrub ok
Oct 02 19:14:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 02 19:14:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 02 19:14:07 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 02 19:14:07 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 02 19:14:07 compute-0 sudo[229319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrcubwljcohqanzlkufyxxqsrwyidaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432446.9678774-261-277744513641063/AnsiballZ_dnf.py'
Oct 02 19:14:07 compute-0 sudo[229319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:07 compute-0 python3.9[229321]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
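
The dnf invocation above removes dracut-config-generic, the package that forces generic (non-host-specific) initramfs builds. Written as a task (it appears in the log as ansible.legacy.dnf, which is what the dnf action resolves to on the target):

    - name: Remove the generic-initramfs dracut configuration
      ansible.builtin.dnf:          # logged as ansible.legacy.dnf on the target
        name: dracut-config-generic
        state: absent
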
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 10.1e scrub starts
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 10.1e scrub ok
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 10.c scrub starts
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 10.c scrub ok
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 6.d scrub starts
Oct 02 19:14:07 compute-0 ceph-mon[191910]: 6.d scrub ok
Oct 02 19:14:07 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 02 19:14:07 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 02 19:14:08 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 02 19:14:08 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 02 19:14:08 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 02 19:14:08 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 02 19:14:08 compute-0 ceph-mon[191910]: pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:08 compute-0 ceph-mon[191910]: 10.18 scrub starts
Oct 02 19:14:08 compute-0 ceph-mon[191910]: 10.18 scrub ok
Oct 02 19:14:08 compute-0 ceph-mon[191910]: 6.b scrub starts
Oct 02 19:14:08 compute-0 ceph-mon[191910]: 6.b scrub ok
Oct 02 19:14:08 compute-0 sudo[229319]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 02 19:14:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 02 19:14:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:09 compute-0 sudo[229472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwsawxktgwxrazggzdmouaczskztcat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432449.2854044-269-250084421891068/AnsiballZ_file.py'
Oct 02 19:14:09 compute-0 sudo[229472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:09 compute-0 podman[229474]: 2025-10-02 19:14:09.945639825 +0000 UTC m=+0.099555867 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Oct 02 19:14:09 compute-0 ceph-mon[191910]: 10.1 scrub starts
Oct 02 19:14:09 compute-0 ceph-mon[191910]: 10.1 scrub ok
Oct 02 19:14:09 compute-0 ceph-mon[191910]: 6.6 scrub starts
Oct 02 19:14:09 compute-0 ceph-mon[191910]: 6.6 scrub ok
Oct 02 19:14:09 compute-0 podman[229475]: 2025-10-02 19:14:09.960247096 +0000 UTC m=+0.098127669 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:14:10 compute-0 python3.9[229480]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:14:10 compute-0 sudo[229472]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:10 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 02 19:14:10 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 02 19:14:10 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct 02 19:14:10 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct 02 19:14:10 compute-0 ceph-mon[191910]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:10 compute-0 ceph-mon[191910]: 4.5 scrub starts
Oct 02 19:14:10 compute-0 ceph-mon[191910]: 4.5 scrub ok
Oct 02 19:14:10 compute-0 sudo[229667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhvnlvpktqccvvynvixnwpssvrljsql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432450.3554096-277-17939485941736/AnsiballZ_stat.py'
Oct 02 19:14:10 compute-0 sudo[229667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Oct 02 19:14:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Oct 02 19:14:11 compute-0 python3.9[229669]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:11 compute-0 sudo[229667]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:11 compute-0 sudo[229745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggzogiruaqhgxokyjcepnfyvjvlqarwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432450.3554096-277-17939485941736/AnsiballZ_file.py'
Oct 02 19:14:11 compute-0 sudo[229745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:11 compute-0 python3.9[229747]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
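
_original_basename=edpm-modprobe.conf.j2 shows that this stat/file pair is the target-side half of a template deployment; the same pattern repeats at 19:14:13 for /etc/sysctl.d/99-edpm.conf (from edpm-sysctl.conf.j2). A hedged reconstruction of the first task:

    - name: Deploy the EDPM modules-load configuration
      ansible.builtin.template:
        src: edpm-modprobe.conf.j2
        dest: /etc/modules-load.d/99-edpm.conf
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
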
Oct 02 19:14:11 compute-0 sudo[229745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:11 compute-0 ceph-mon[191910]: 10.1b scrub starts
Oct 02 19:14:11 compute-0 ceph-mon[191910]: 10.1b scrub ok
Oct 02 19:14:11 compute-0 ceph-mon[191910]: 6.1c scrub starts
Oct 02 19:14:11 compute-0 ceph-mon[191910]: 6.1c scrub ok
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:14:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
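
The pg_autoscaler rows above are internally consistent: each pg target is the pool's usage ratio times its bias times a constant that works out to 300 here, which matches the default mon_target_pg_per_osd=100 across the 3 OSDs on this node. That multiplier is inferred from the numbers, not stated in the log. Checking two rows:

    pg_target('.mgr')                = 7.185749983720779e-06 * 1.0 * 300 = 0.0021557...  -> quantized to 1
    pg_target('cephfs.cephfs.meta')  = 5.087256625643029e-07 * 4.0 * 300 = 0.00061047... -> quantized to 16
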
Oct 02 19:14:12 compute-0 sudo[229897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galejmqvmmlajreizmruzkzpcxjehmsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432452.2382786-290-224202453306117/AnsiballZ_stat.py'
Oct 02 19:14:12 compute-0 sudo[229897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:12 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct 02 19:14:12 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct 02 19:14:12 compute-0 ceph-mon[191910]: pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:13 compute-0 python3.9[229899]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:13 compute-0 sudo[229897]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:13 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts
Oct 02 19:14:13 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok
Oct 02 19:14:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:13 compute-0 sudo[229975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgupimlxesiguzmwwkruuqjuwaykeno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432452.2382786-290-224202453306117/AnsiballZ_file.py'
Oct 02 19:14:13 compute-0 sudo[229975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:13 compute-0 python3.9[229977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:14:13 compute-0 sudo[229975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:13 compute-0 ceph-mon[191910]: 10.1c deep-scrub starts
Oct 02 19:14:13 compute-0 ceph-mon[191910]: 10.1c deep-scrub ok
Oct 02 19:14:13 compute-0 ceph-mon[191910]: 6.1d deep-scrub starts
Oct 02 19:14:13 compute-0 ceph-mon[191910]: 6.1d deep-scrub ok
Oct 02 19:14:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:14 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 02 19:14:14 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 02 19:14:14 compute-0 podman[230054]: 2025-10-02 19:14:14.743342493 +0000 UTC m=+0.161798934 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:14:14 compute-0 podman[230055]: 2025-10-02 19:14:14.789264463 +0000 UTC m=+0.201659972 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 02 19:14:14 compute-0 sudo[230169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwqjqywajknzjowftikkjkoodqyzxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432454.2898476-305-21425288555081/AnsiballZ_dnf.py'
Oct 02 19:14:14 compute-0 sudo[230169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:15 compute-0 ceph-mon[191910]: pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Oct 02 19:14:15 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Oct 02 19:14:15 compute-0 python3.9[230171]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
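
tuned-profiles-cpu-partitioning provides the cpu-partitioning profile used to isolate cores for pinned VMs on compute nodes. The logged invocation corresponds to:

    - name: Install tuned and the cpu-partitioning profile
      ansible.builtin.dnf:
        name:
          - tuned
          - tuned-profiles-cpu-partitioning
        state: present
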
Oct 02 19:14:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:16 compute-0 ceph-mon[191910]: 10.15 scrub starts
Oct 02 19:14:16 compute-0 ceph-mon[191910]: 10.15 scrub ok
Oct 02 19:14:16 compute-0 ceph-mon[191910]: 6.1e scrub starts
Oct 02 19:14:16 compute-0 ceph-mon[191910]: 6.1e scrub ok
Oct 02 19:14:16 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 02 19:14:16 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 02 19:14:16 compute-0 sudo[230169]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:17 compute-0 ceph-mon[191910]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:17 compute-0 python3.9[230322]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:14:18 compute-0 ceph-mon[191910]: 10.16 scrub starts
Oct 02 19:14:18 compute-0 ceph-mon[191910]: 10.16 scrub ok
Oct 02 19:14:18 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Oct 02 19:14:18 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Oct 02 19:14:18 compute-0 python3.9[230474]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
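
slurp returns file content base64-encoded, so reading /etc/tuned/active_profile is a two-step pattern on the controller side. Sketch, with the register variable name assumed:

    - name: Read the active tuned profile
      ansible.builtin.slurp:
        src: /etc/tuned/active_profile
      register: tuned_active          # variable name assumed

    - name: Decode the profile name
      ansible.builtin.set_fact:
        tuned_profile: "{{ tuned_active.content | b64decode | trim }}"
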
Oct 02 19:14:19 compute-0 ceph-mon[191910]: pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:19 compute-0 ceph-mon[191910]: 4.8 deep-scrub starts
Oct 02 19:14:19 compute-0 ceph-mon[191910]: 4.8 deep-scrub ok
Oct 02 19:14:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:19 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 02 19:14:19 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 02 19:14:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:19 compute-0 podman[230569]: 2025-10-02 19:14:19.733775333 +0000 UTC m=+0.154954520 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:14:19 compute-0 podman[230573]: 2025-10-02 19:14:19.739930918 +0000 UTC m=+0.155010822 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 02 19:14:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 02 19:14:20 compute-0 python3.9[230664]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:14:20 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 02 19:14:20 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 02 19:14:21 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 02 19:14:21 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 8.10 scrub starts
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 8.10 scrub ok
Oct 02 19:14:21 compute-0 ceph-mon[191910]: pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 10.1d scrub starts
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 10.1d scrub ok
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 4.7 scrub starts
Oct 02 19:14:21 compute-0 ceph-mon[191910]: 4.7 scrub ok
Oct 02 19:14:21 compute-0 sudo[230827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijleibmlupblvybneyayxygxoaapugxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432460.5047045-346-5938911093308/AnsiballZ_systemd.py'
Oct 02 19:14:21 compute-0 sudo[230827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:21 compute-0 podman[230788]: 2025-10-02 19:14:21.524843766 +0000 UTC m=+0.140579576 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Oct 02 19:14:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:21 compute-0 python3.9[230837]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
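
The systemd call above produces the stop/start cycle visible in the following lines; state=restarted always performs a full stop and start rather than a reload. Equivalent task:

    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted
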
Oct 02 19:14:21 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 19:14:22 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 02 19:14:22 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 02 19:14:22 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 19:14:22 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 19:14:22 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 19:14:22 compute-0 ceph-mon[191910]: 10.1f scrub starts
Oct 02 19:14:22 compute-0 ceph-mon[191910]: 10.1f scrub ok
Oct 02 19:14:22 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 19:14:22 compute-0 sudo[230827]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 02 19:14:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 02 19:14:23 compute-0 ceph-mon[191910]: pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:23 compute-0 ceph-mon[191910]: 8.15 scrub starts
Oct 02 19:14:23 compute-0 ceph-mon[191910]: 8.15 scrub ok
Oct 02 19:14:23 compute-0 python3.9[230998]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 19:14:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:24 compute-0 ceph-mon[191910]: 11.2 scrub starts
Oct 02 19:14:24 compute-0 ceph-mon[191910]: 11.2 scrub ok
Oct 02 19:14:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.436 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.437 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.438 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.440 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:14:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:14:25 compute-0 ceph-mon[191910]: pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:25 compute-0 sudo[231149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljrxzouxhrnzbgtsxldsvzrebwedcsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432465.1688595-403-64644206460955/AnsiballZ_systemd.py'
Oct 02 19:14:25 compute-0 sudo[231149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:26 compute-0 python3.9[231151]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:14:26 compute-0 sudo[231149]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:26 compute-0 systemd[193692]: Created slice User Background Tasks Slice.
Oct 02 19:14:26 compute-0 systemd[193692]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 19:14:26 compute-0 systemd[193692]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 19:14:27 compute-0 sudo[231304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qraoiupflcakixfettifuuqwexdpcvty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432466.4699297-403-107301505127284/AnsiballZ_systemd.py'
Oct 02 19:14:27 compute-0 sudo[231304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:27 compute-0 ceph-mon[191910]: pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:27 compute-0 sudo[231307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:27 compute-0 sudo[231307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:27 compute-0 sudo[231307]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 python3.9[231306]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:14:27 compute-0 sudo[231304]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 sudo[231333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:14:27 compute-0 sudo[231333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:27 compute-0 sudo[231333]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 02 19:14:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 02 19:14:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:27 compute-0 sudo[231364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:27 compute-0 sudo[231364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:27 compute-0 sudo[231364]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 sudo[231408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:14:27 compute-0 sudo[231408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:27 compute-0 sshd-session[223824]: Connection closed by 192.168.122.30 port 33108
Oct 02 19:14:27 compute-0 sshd-session[223821]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:14:27 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 02 19:14:27 compute-0 systemd[1]: session-42.scope: Consumed 1min 21.989s CPU time.
Oct 02 19:14:27 compute-0 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Oct 02 19:14:27 compute-0 systemd-logind[793]: Removed session 42.
Oct 02 19:14:28 compute-0 sudo[231408]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev afc83cc2-c206-463a-b7dd-54a56892473d does not exist
Oct 02 19:14:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e09bb674-30a8-477b-bd37-791d0799fcfd does not exist
Oct 02 19:14:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b371191f-7e79-4b16-8ede-5c550ee93f69 does not exist
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:14:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:14:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:14:28 compute-0 sudo[231463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:28 compute-0 sudo[231463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:28 compute-0 sudo[231463]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:28 compute-0 sudo[231488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:14:28 compute-0 sudo[231488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:28 compute-0 sudo[231488]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:28 compute-0 sudo[231513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:28 compute-0 sudo[231513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:28 compute-0 sudo[231513]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:28 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 02 19:14:29 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 02 19:14:29 compute-0 sudo[231538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:14:29 compute-0 sudo[231538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 02 19:14:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 02 19:14:29 compute-0 ceph-mon[191910]: 8.b scrub starts
Oct 02 19:14:29 compute-0 ceph-mon[191910]: 8.b scrub ok
Oct 02 19:14:29 compute-0 ceph-mon[191910]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:14:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:14:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 02 19:14:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 02 19:14:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.550409048 +0000 UTC m=+0.070089664 container create d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:14:29 compute-0 systemd[1]: Started libpod-conmon-d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a.scope.
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.526355945 +0000 UTC m=+0.046036641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.6802664 +0000 UTC m=+0.199947036 container init d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.688706332 +0000 UTC m=+0.208386938 container start d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.692721998 +0000 UTC m=+0.212402624 container attach d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:14:29 compute-0 happy_haibt[231616]: 167 167
Oct 02 19:14:29 compute-0 systemd[1]: libpod-d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a.scope: Deactivated successfully.
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.697295268 +0000 UTC m=+0.216975874 container died d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-44f740ea6a637224f73bb22dc9ece6bbd1ab4840c31c11cc6b7c0abcf6d052e1-merged.mount: Deactivated successfully.
Oct 02 19:14:29 compute-0 podman[157186]: time="2025-10-02T19:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:14:29 compute-0 podman[231600]: 2025-10-02 19:14:29.777736062 +0000 UTC m=+0.297416698 container remove d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:14:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34183 "" "Go-http-client/1.1"
Oct 02 19:14:29 compute-0 systemd[1]: libpod-conmon-d23d6a00e3ca4f676c0f8ab654c810d15808ff275d14e4df1ef17f64173aa37a.scope: Deactivated successfully.
Oct 02 19:14:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6812 "" "Go-http-client/1.1"
Oct 02 19:14:29 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Oct 02 19:14:29 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Oct 02 19:14:30 compute-0 podman[231639]: 2025-10-02 19:14:30.032448167 +0000 UTC m=+0.081597146 container create 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:14:30 compute-0 podman[231639]: 2025-10-02 19:14:29.995948867 +0000 UTC m=+0.045097906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:30 compute-0 systemd[1]: Started libpod-conmon-3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84.scope.
Oct 02 19:14:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:30 compute-0 ceph-mon[191910]: 8.2 scrub starts
Oct 02 19:14:30 compute-0 ceph-mon[191910]: 8.2 scrub ok
Oct 02 19:14:30 compute-0 ceph-mon[191910]: 4.9 scrub starts
Oct 02 19:14:30 compute-0 ceph-mon[191910]: 4.9 scrub ok
Oct 02 19:14:30 compute-0 podman[231639]: 2025-10-02 19:14:30.186227568 +0000 UTC m=+0.235376587 container init 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:14:30 compute-0 podman[231639]: 2025-10-02 19:14:30.197879515 +0000 UTC m=+0.247028474 container start 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:14:30 compute-0 podman[231639]: 2025-10-02 19:14:30.206620084 +0000 UTC m=+0.255769073 container attach 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:14:30 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 02 19:14:30 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 02 19:14:31 compute-0 ceph-mon[191910]: 11.14 scrub starts
Oct 02 19:14:31 compute-0 ceph-mon[191910]: 11.14 scrub ok
Oct 02 19:14:31 compute-0 ceph-mon[191910]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:31 compute-0 ceph-mon[191910]: 11.9 deep-scrub starts
Oct 02 19:14:31 compute-0 ceph-mon[191910]: 11.9 deep-scrub ok
Oct 02 19:14:31 compute-0 jolly_euclid[231655]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:14:31 compute-0 jolly_euclid[231655]: --> relative data size: 1.0
Oct 02 19:14:31 compute-0 jolly_euclid[231655]: --> All data devices are unavailable
Oct 02 19:14:31 compute-0 systemd[1]: libpod-3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84.scope: Deactivated successfully.
Oct 02 19:14:31 compute-0 systemd[1]: libpod-3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84.scope: Consumed 1.141s CPU time.
Oct 02 19:14:31 compute-0 podman[231639]: 2025-10-02 19:14:31.387145702 +0000 UTC m=+1.436294681 container died 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:14:31 compute-0 openstack_network_exporter[159337]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:31 compute-0 openstack_network_exporter[159337]: ERROR   19:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:14:31 compute-0 openstack_network_exporter[159337]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:31 compute-0 openstack_network_exporter[159337]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:14:31 compute-0 openstack_network_exporter[159337]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd673f2b80cd7b3575db723f0e4aaf377f7e30fb35abf1fef8fcb8139539648-merged.mount: Deactivated successfully.
Oct 02 19:14:31 compute-0 podman[231639]: 2025-10-02 19:14:31.488184707 +0000 UTC m=+1.537333686 container remove 3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:14:31 compute-0 systemd[1]: libpod-conmon-3e812cfbb1a4bb59028e59a91c714ea27807cc07ed89e9002b0f015d57441e84.scope: Deactivated successfully.
Oct 02 19:14:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:31 compute-0 sudo[231538]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:31 compute-0 sudo[231695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:31 compute-0 sudo[231695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:31 compute-0 sudo[231695]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:31 compute-0 sudo[231720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:14:31 compute-0 sudo[231720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:31 compute-0 sudo[231720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:31 compute-0 sudo[231745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:31 compute-0 sudo[231745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:31 compute-0 sudo[231745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:32 compute-0 sudo[231770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:14:32 compute-0 sudo[231770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 02 19:14:32 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 02 19:14:32 compute-0 ceph-mon[191910]: 8.d scrub starts
Oct 02 19:14:32 compute-0 ceph-mon[191910]: 8.d scrub ok
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.568749686 +0000 UTC m=+0.082922650 container create da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.533834359 +0000 UTC m=+0.048007363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:32 compute-0 systemd[1]: Started libpod-conmon-da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc.scope.
Oct 02 19:14:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.693188077 +0000 UTC m=+0.207361091 container init da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.709798223 +0000 UTC m=+0.223971157 container start da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.715451692 +0000 UTC m=+0.229624656 container attach da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:14:32 compute-0 silly_goodall[231848]: 167 167
Oct 02 19:14:32 compute-0 systemd[1]: libpod-da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc.scope: Deactivated successfully.
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.71843717 +0000 UTC m=+0.232610094 container died da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-51eb5be9c459ca5fab5cc5b089b5a9b3f4a336c2608e0cd438386500a10725c6-merged.mount: Deactivated successfully.
Oct 02 19:14:32 compute-0 podman[231832]: 2025-10-02 19:14:32.788169453 +0000 UTC m=+0.302342407 container remove da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:14:32 compute-0 systemd[1]: libpod-conmon-da85814054fa95ec0f9b869eed9e89a2862063164e0726659cf49c4e81ee88dc.scope: Deactivated successfully.
Oct 02 19:14:33 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Oct 02 19:14:33 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Oct 02 19:14:33 compute-0 podman[231870]: 2025-10-02 19:14:33.087627164 +0000 UTC m=+0.089875993 container create 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:14:33 compute-0 systemd[1]: Started libpod-conmon-5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20.scope.
Oct 02 19:14:33 compute-0 podman[231870]: 2025-10-02 19:14:33.054918164 +0000 UTC m=+0.057167043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946f68b4a4730288bed46d4bb43f2d3ac4fd184fe51ce0157b350f3dbe9da200/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946f68b4a4730288bed46d4bb43f2d3ac4fd184fe51ce0157b350f3dbe9da200/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946f68b4a4730288bed46d4bb43f2d3ac4fd184fe51ce0157b350f3dbe9da200/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946f68b4a4730288bed46d4bb43f2d3ac4fd184fe51ce0157b350f3dbe9da200/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:33 compute-0 podman[231870]: 2025-10-02 19:14:33.224972553 +0000 UTC m=+0.227221362 container init 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:14:33 compute-0 ceph-mon[191910]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:33 compute-0 ceph-mon[191910]: 8.1 scrub starts
Oct 02 19:14:33 compute-0 ceph-mon[191910]: 8.1 scrub ok
Oct 02 19:14:33 compute-0 podman[231870]: 2025-10-02 19:14:33.259306806 +0000 UTC m=+0.261555615 container start 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:14:33 compute-0 podman[231870]: 2025-10-02 19:14:33.264270846 +0000 UTC m=+0.266519655 container attach 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:14:33 compute-0 sshd-session[231890]: Accepted publickey for zuul from 192.168.122.30 port 53126 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:14:33 compute-0 systemd-logind[793]: New session 43 of user zuul.
Oct 02 19:14:33 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 02 19:14:33 compute-0 sshd-session[231890]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:14:33 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 02 19:14:33 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 02 19:14:34 compute-0 brave_napier[231886]: {
Oct 02 19:14:34 compute-0 brave_napier[231886]:     "0": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:         {
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "devices": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "/dev/loop3"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             ],
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_name": "ceph_lv0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_size": "21470642176",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "name": "ceph_lv0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "tags": {
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_name": "ceph",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.crush_device_class": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.encrypted": "0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_id": "0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.vdo": "0"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             },
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "vg_name": "ceph_vg0"
Oct 02 19:14:34 compute-0 brave_napier[231886]:         }
Oct 02 19:14:34 compute-0 brave_napier[231886]:     ],
Oct 02 19:14:34 compute-0 brave_napier[231886]:     "1": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:         {
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "devices": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "/dev/loop4"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             ],
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_name": "ceph_lv1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_size": "21470642176",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "name": "ceph_lv1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "tags": {
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_name": "ceph",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.crush_device_class": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.encrypted": "0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_id": "1",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.vdo": "0"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             },
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "vg_name": "ceph_vg1"
Oct 02 19:14:34 compute-0 brave_napier[231886]:         }
Oct 02 19:14:34 compute-0 brave_napier[231886]:     ],
Oct 02 19:14:34 compute-0 brave_napier[231886]:     "2": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:         {
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "devices": [
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "/dev/loop5"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             ],
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_name": "ceph_lv2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_size": "21470642176",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "name": "ceph_lv2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "tags": {
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.cluster_name": "ceph",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.crush_device_class": "",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.encrypted": "0",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osd_id": "2",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:                 "ceph.vdo": "0"
Oct 02 19:14:34 compute-0 brave_napier[231886]:             },
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "type": "block",
Oct 02 19:14:34 compute-0 brave_napier[231886]:             "vg_name": "ceph_vg2"
Oct 02 19:14:34 compute-0 brave_napier[231886]:         }
Oct 02 19:14:34 compute-0 brave_napier[231886]:     ]
Oct 02 19:14:34 compute-0 brave_napier[231886]: }
Oct 02 19:14:34 compute-0 systemd[1]: libpod-5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20.scope: Deactivated successfully.
Oct 02 19:14:34 compute-0 podman[231870]: 2025-10-02 19:14:34.195114921 +0000 UTC m=+1.197363720 container died 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:14:34 compute-0 rsyslogd[187702]: imjournal from <compute-0:brave_napier>: begin to drop messages due to rate-limiting
Oct 02 19:14:34 compute-0 ceph-mon[191910]: 8.3 deep-scrub starts
Oct 02 19:14:34 compute-0 ceph-mon[191910]: 8.3 deep-scrub ok
Oct 02 19:14:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-946f68b4a4730288bed46d4bb43f2d3ac4fd184fe51ce0157b350f3dbe9da200-merged.mount: Deactivated successfully.
Oct 02 19:14:34 compute-0 podman[231870]: 2025-10-02 19:14:34.315266739 +0000 UTC m=+1.317515538 container remove 5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:14:34 compute-0 systemd[1]: libpod-conmon-5aed350e54fcbbfc6cf8261a4ac86410ff8bc8d6748121dc1f067f62a2b59e20.scope: Deactivated successfully.
Oct 02 19:14:34 compute-0 sudo[231770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:34 compute-0 sudo[232028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:34 compute-0 sudo[232028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:34 compute-0 sudo[232028]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:34 compute-0 sudo[232084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:14:34 compute-0 sudo[232084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:34 compute-0 sudo[232084]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:34 compute-0 sudo[232109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:34 compute-0 sudo[232109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:34 compute-0 sudo[232109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:34 compute-0 python3.9[232083]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:14:34 compute-0 sudo[232134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:14:34 compute-0 sudo[232134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:34 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 02 19:14:34 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.23720507 +0000 UTC m=+0.057341888 container create 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:14:35 compute-0 ceph-mon[191910]: pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:35 compute-0 ceph-mon[191910]: 11.d scrub starts
Oct 02 19:14:35 compute-0 ceph-mon[191910]: 11.d scrub ok
Oct 02 19:14:35 compute-0 systemd[1]: Started libpod-conmon-4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109.scope.
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.21892254 +0000 UTC m=+0.039059378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.37035746 +0000 UTC m=+0.190494308 container init 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.388870925 +0000 UTC m=+0.209007773 container start 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.395835478 +0000 UTC m=+0.215972366 container attach 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:14:35 compute-0 jolly_shtern[232242]: 167 167
Oct 02 19:14:35 compute-0 systemd[1]: libpod-4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109.scope: Deactivated successfully.
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.402978996 +0000 UTC m=+0.223115854 container died 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a88af79c4e02f99402b72d81d099b96e00b3d1c231ca1d662755e994651e3b1-merged.mount: Deactivated successfully.
Oct 02 19:14:35 compute-0 podman[232228]: 2025-10-02 19:14:35.488831572 +0000 UTC m=+0.308968390 container remove 4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:14:35 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 02 19:14:35 compute-0 systemd[1]: libpod-conmon-4a7374a034e352aff542952ae9170dbfd9ad67f538e8d0069b0935ba0e5f0109.scope: Deactivated successfully.
Oct 02 19:14:35 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 02 19:14:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:35 compute-0 podman[232319]: 2025-10-02 19:14:35.703478614 +0000 UTC m=+0.046893813 container create dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:14:35 compute-0 systemd[1]: Started libpod-conmon-dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e.scope.
Oct 02 19:14:35 compute-0 podman[232319]: 2025-10-02 19:14:35.687215206 +0000 UTC m=+0.030630425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:14:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fd345b38d79e2469ca321d0cab453c131f4d5e5385f8fe729dc1baa338e10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fd345b38d79e2469ca321d0cab453c131f4d5e5385f8fe729dc1baa338e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fd345b38d79e2469ca321d0cab453c131f4d5e5385f8fe729dc1baa338e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fd345b38d79e2469ca321d0cab453c131f4d5e5385f8fe729dc1baa338e10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:14:35 compute-0 podman[232319]: 2025-10-02 19:14:35.840516046 +0000 UTC m=+0.183931335 container init dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:14:35 compute-0 podman[232319]: 2025-10-02 19:14:35.863944611 +0000 UTC m=+0.207359820 container start dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:14:35 compute-0 podman[232319]: 2025-10-02 19:14:35.871663774 +0000 UTC m=+0.215079023 container attach dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:14:36 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 02 19:14:36 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 02 19:14:36 compute-0 sudo[232414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznrhcmkskxbmmuzarytguvoyvrgommz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432475.4852028-36-32382264627529/AnsiballZ_getent.py'
Oct 02 19:14:36 compute-0 sudo[232414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:36 compute-0 ceph-mon[191910]: 11.3 scrub starts
Oct 02 19:14:36 compute-0 ceph-mon[191910]: 11.3 scrub ok
Oct 02 19:14:36 compute-0 python3.9[232416]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 19:14:36 compute-0 sudo[232414]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 02 19:14:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]: {
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_id": 1,
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "type": "bluestore"
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     },
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_id": 2,
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "type": "bluestore"
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     },
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_id": 0,
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:         "type": "bluestore"
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]:     }
Oct 02 19:14:37 compute-0 eloquent_heisenberg[232336]: }
Oct 02 19:14:37 compute-0 systemd[1]: libpod-dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e.scope: Deactivated successfully.
Oct 02 19:14:37 compute-0 systemd[1]: libpod-dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e.scope: Consumed 1.186s CPU time.
Oct 02 19:14:37 compute-0 podman[232319]: 2025-10-02 19:14:37.052650414 +0000 UTC m=+1.396065703 container died dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-789fd345b38d79e2469ca321d0cab453c131f4d5e5385f8fe729dc1baa338e10-merged.mount: Deactivated successfully.
Oct 02 19:14:37 compute-0 podman[232319]: 2025-10-02 19:14:37.135057 +0000 UTC m=+1.478472199 container remove dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:14:37 compute-0 systemd[1]: libpod-conmon-dc02c6758dd9ca08707407acbf8d8fc62e9378cc1a83cb70c13675ba049f5c0e.scope: Deactivated successfully.
Oct 02 19:14:37 compute-0 sudo[232134]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:14:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:14:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 387cd8b9-99cf-42e6-b991-7e0c29860e46 does not exist
Oct 02 19:14:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ce40a441-1aa0-433c-b16c-c458c03d2ddf does not exist
Oct 02 19:14:37 compute-0 ceph-mon[191910]: 8.6 scrub starts
Oct 02 19:14:37 compute-0 ceph-mon[191910]: 8.6 scrub ok
Oct 02 19:14:37 compute-0 ceph-mon[191910]: pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:37 compute-0 ceph-mon[191910]: 8.1b scrub starts
Oct 02 19:14:37 compute-0 ceph-mon[191910]: 8.1b scrub ok
Oct 02 19:14:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:14:37 compute-0 sudo[232559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:14:37 compute-0 sudo[232559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:37 compute-0 sudo[232559]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:37 compute-0 sudo[232611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:14:37 compute-0 sudo[232656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsdfzbxlbomtdtojdzqijahmsppxhzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432476.9229608-48-280463508044148/AnsiballZ_setup.py'
Oct 02 19:14:37 compute-0 sudo[232656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:37 compute-0 sudo[232611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:14:37 compute-0 sudo[232611]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:37 compute-0 python3.9[232659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:14:38 compute-0 sudo[232656]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:38 compute-0 ceph-mon[191910]: 11.10 scrub starts
Oct 02 19:14:38 compute-0 ceph-mon[191910]: 11.10 scrub ok
Oct 02 19:14:38 compute-0 sudo[232742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpsjogmphxepvwivniwtfbfvsxbynvfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432476.9229608-48-280463508044148/AnsiballZ_dnf.py'
Oct 02 19:14:38 compute-0 sudo[232742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:38 compute-0 python3.9[232744]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 19:14:39 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 02 19:14:39 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 02 19:14:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:39 compute-0 ceph-mon[191910]: pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:40 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 02 19:14:40 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 02 19:14:40 compute-0 sudo[232742]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:40 compute-0 ceph-mon[191910]: 8.4 scrub starts
Oct 02 19:14:40 compute-0 ceph-mon[191910]: 8.4 scrub ok
Oct 02 19:14:40 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct 02 19:14:40 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct 02 19:14:40 compute-0 podman[232814]: 2025-10-02 19:14:40.728357291 +0000 UTC m=+0.139565279 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:14:40 compute-0 podman[232806]: 2025-10-02 19:14:40.74203982 +0000 UTC m=+0.156853523 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct 02 19:14:41 compute-0 sudo[232936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlqfbptvlyrtyvxoleefbhygmoelgkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432480.4748094-62-17715101523651/AnsiballZ_dnf.py'
Oct 02 19:14:41 compute-0 sudo[232936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:41 compute-0 ceph-mon[191910]: pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:41 compute-0 ceph-mon[191910]: 8.5 scrub starts
Oct 02 19:14:41 compute-0 ceph-mon[191910]: 8.5 scrub ok
Oct 02 19:14:41 compute-0 python3.9[232938]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:14:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:42 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 02 19:14:42 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 02 19:14:42 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 02 19:14:42 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 02 19:14:42 compute-0 ceph-mon[191910]: 8.f deep-scrub starts
Oct 02 19:14:42 compute-0 ceph-mon[191910]: 8.f deep-scrub ok
Oct 02 19:14:42 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 02 19:14:42 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 02 19:14:42 compute-0 sudo[232936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:43 compute-0 ceph-mon[191910]: pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:43 compute-0 ceph-mon[191910]: 11.1a scrub starts
Oct 02 19:14:43 compute-0 ceph-mon[191910]: 11.1a scrub ok
Oct 02 19:14:43 compute-0 ceph-mon[191910]: 8.7 scrub starts
Oct 02 19:14:43 compute-0 ceph-mon[191910]: 8.7 scrub ok
Oct 02 19:14:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:43 compute-0 sudo[233089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyzjfpdjmxzntpefyyikjlsqpjagflfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432482.8347797-70-171537942778873/AnsiballZ_systemd.py'
Oct 02 19:14:43 compute-0 sudo[233089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:44 compute-0 python3.9[233091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:14:44 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 02 19:14:44 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 02 19:14:44 compute-0 sudo[233089]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:44 compute-0 ceph-mon[191910]: 8.e scrub starts
Oct 02 19:14:44 compute-0 ceph-mon[191910]: 8.e scrub ok
Oct 02 19:14:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 02 19:14:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 02 19:14:45 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 02 19:14:45 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 02 19:14:45 compute-0 podman[233218]: 2025-10-02 19:14:45.220368832 +0000 UTC m=+0.114239384 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:45 compute-0 podman[233219]: 2025-10-02 19:14:45.260081465 +0000 UTC m=+0.151476412 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 02 19:14:45 compute-0 ceph-mon[191910]: pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:45 compute-0 ceph-mon[191910]: 8.8 scrub starts
Oct 02 19:14:45 compute-0 ceph-mon[191910]: 8.8 scrub ok
Oct 02 19:14:45 compute-0 ceph-mon[191910]: 11.6 scrub starts
Oct 02 19:14:45 compute-0 ceph-mon[191910]: 11.6 scrub ok
Oct 02 19:14:45 compute-0 python3.9[233274]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:14:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:46 compute-0 ceph-mon[191910]: 11.b scrub starts
Oct 02 19:14:46 compute-0 ceph-mon[191910]: 11.b scrub ok
Oct 02 19:14:46 compute-0 sudo[233437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plijnqorhzryouxyaylatcbkkaloxijs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432485.8197246-88-109306435262681/AnsiballZ_sefcontext.py'
Oct 02 19:14:46 compute-0 sudo[233437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:46 compute-0 python3.9[233439]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 19:14:47 compute-0 sudo[233437]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:47 compute-0 ceph-mon[191910]: pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:47 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 02 19:14:47 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 02 19:14:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 02 19:14:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 02 19:14:48 compute-0 python3.9[233589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:14:48 compute-0 ceph-mon[191910]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 02 19:14:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 02 19:14:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 02 19:14:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 02 19:14:49 compute-0 ceph-mon[191910]: 11.8 scrub starts
Oct 02 19:14:49 compute-0 ceph-mon[191910]: 11.8 scrub ok
Oct 02 19:14:49 compute-0 ceph-mon[191910]: 8.a scrub starts
Oct 02 19:14:49 compute-0 ceph-mon[191910]: 8.a scrub ok
Oct 02 19:14:49 compute-0 sudo[233745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylutrevauntxvxzvezrpmuzkbhnugnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432488.9312284-106-198477603121227/AnsiballZ_dnf.py'
Oct 02 19:14:49 compute-0 sudo[233745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:49 compute-0 python3.9[233748]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:14:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 02 19:14:49 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 02 19:14:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 02 19:14:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 02 19:14:50 compute-0 ceph-mon[191910]: 11.1f scrub starts
Oct 02 19:14:50 compute-0 ceph-mon[191910]: 11.1f scrub ok
Oct 02 19:14:50 compute-0 ceph-mon[191910]: 11.4 scrub starts
Oct 02 19:14:50 compute-0 ceph-mon[191910]: 11.4 scrub ok
Oct 02 19:14:50 compute-0 ceph-mon[191910]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:50 compute-0 podman[233750]: 2025-10-02 19:14:50.683878065 +0000 UTC m=+0.110832924 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:14:50 compute-0 podman[233751]: 2025-10-02 19:14:50.687795458 +0000 UTC m=+0.111925923 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:14:51 compute-0 sudo[233745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:51 compute-0 ceph-mon[191910]: 11.1b scrub starts
Oct 02 19:14:51 compute-0 ceph-mon[191910]: 11.1b scrub ok
Oct 02 19:14:51 compute-0 ceph-mon[191910]: 8.9 scrub starts
Oct 02 19:14:51 compute-0 ceph-mon[191910]: 8.9 scrub ok
Oct 02 19:14:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:52 compute-0 sudo[233956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvxjlxzjovhcpdhtabkahgxkbpewhgch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432491.4055023-114-46347822372363/AnsiballZ_command.py'
Oct 02 19:14:52 compute-0 sudo[233956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:52 compute-0 podman[233915]: 2025-10-02 19:14:52.203295909 +0000 UTC m=+0.131063675 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:14:52 compute-0 python3.9[233961]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:14:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 02 19:14:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 02 19:14:52 compute-0 ceph-mon[191910]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:53 compute-0 ceph-mon[191910]: 11.1 scrub starts
Oct 02 19:14:53 compute-0 ceph-mon[191910]: 11.1 scrub ok
Oct 02 19:14:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:53 compute-0 sudo[233956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:54 compute-0 sudo[234246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncmzfpdlxjriclxppxubpytjvymxvzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432493.8668268-122-174363284435200/AnsiballZ_file.py'
Oct 02 19:14:54 compute-0 sudo[234246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:54 compute-0 ceph-mon[191910]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:54 compute-0 python3.9[234248]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:14:54 compute-0 sudo[234246]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 02 19:14:54 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 02 19:14:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:55 compute-0 python3.9[234398]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:14:56 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 02 19:14:56 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 02 19:14:56 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 02 19:14:56 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 02 19:14:56 compute-0 ceph-mon[191910]: 11.11 scrub starts
Oct 02 19:14:56 compute-0 ceph-mon[191910]: 11.11 scrub ok
Oct 02 19:14:56 compute-0 ceph-mon[191910]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:56 compute-0 sudo[234550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcvihimquzissxfobnazjmdeisqjugt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432496.3535025-138-137823247330648/AnsiballZ_dnf.py'
Oct 02 19:14:56 compute-0 sudo[234550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:57 compute-0 python3.9[234552]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:14:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:57 compute-0 ceph-mon[191910]: 8.13 scrub starts
Oct 02 19:14:57 compute-0 ceph-mon[191910]: 8.13 scrub ok
Oct 02 19:14:57 compute-0 ceph-mon[191910]: 11.f scrub starts
Oct 02 19:14:57 compute-0 ceph-mon[191910]: 11.f scrub ok
Oct 02 19:14:58 compute-0 ceph-mon[191910]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:58 compute-0 sudo[234550]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:14:59 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 02 19:14:59 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 02 19:14:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:14:59 compute-0 sudo[234703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvlsxcgmuuhhgoyvzkjcbabznwihcneb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432499.0502026-147-151689679817407/AnsiballZ_dnf.py'
Oct 02 19:14:59 compute-0 sudo[234703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:59 compute-0 podman[157186]: time="2025-10-02T19:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:14:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:14:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6817 "" "Go-http-client/1.1"
Oct 02 19:14:59 compute-0 python3.9[234705]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:15:00 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 02 19:15:00 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 02 19:15:00 compute-0 ceph-mon[191910]: 11.19 scrub starts
Oct 02 19:15:00 compute-0 ceph-mon[191910]: 11.19 scrub ok
Oct 02 19:15:00 compute-0 ceph-mon[191910]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:01 compute-0 sudo[234703]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:01 compute-0 openstack_network_exporter[159337]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:01 compute-0 openstack_network_exporter[159337]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:01 compute-0 openstack_network_exporter[159337]: ERROR   19:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:15:01 compute-0 openstack_network_exporter[159337]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:15:01 compute-0 openstack_network_exporter[159337]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:15:01 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 02 19:15:01 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 02 19:15:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:01 compute-0 ceph-mon[191910]: 8.16 scrub starts
Oct 02 19:15:01 compute-0 ceph-mon[191910]: 8.16 scrub ok
Oct 02 19:15:02 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 02 19:15:02 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 02 19:15:02 compute-0 sudo[234856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znsozfkicnimpufoluyswvrxrxzprrha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432501.65205-159-227558861027122/AnsiballZ_stat.py'
Oct 02 19:15:02 compute-0 sudo[234856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:02 compute-0 python3.9[234858]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:15:02 compute-0 sudo[234856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:02 compute-0 ceph-mon[191910]: 11.17 scrub starts
Oct 02 19:15:02 compute-0 ceph-mon[191910]: 11.17 scrub ok
Oct 02 19:15:02 compute-0 ceph-mon[191910]: pgmap v291: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:03 compute-0 sudo[235010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihispgtmandfwxuteeulmonrszrfnjhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432502.697962-167-142191502643565/AnsiballZ_slurp.py'
Oct 02 19:15:03 compute-0 sudo[235010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:15:03
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log']
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:15:03 compute-0 python3.9[235012]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 02 19:15:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 02 19:15:03 compute-0 sudo[235010]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:03 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:15:03 compute-0 ceph-mon[191910]: 8.17 scrub starts
Oct 02 19:15:03 compute-0 ceph-mon[191910]: 8.17 scrub ok
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:15:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:15:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 02 19:15:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 02 19:15:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:04 compute-0 sshd-session[231894]: Connection closed by 192.168.122.30 port 53126
Oct 02 19:15:04 compute-0 sshd-session[231890]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:15:04 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 02 19:15:04 compute-0 systemd[1]: session-43.scope: Consumed 25.720s CPU time.
Oct 02 19:15:04 compute-0 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Oct 02 19:15:04 compute-0 systemd-logind[793]: Removed session 43.
Oct 02 19:15:04 compute-0 ceph-mon[191910]: 8.14 scrub starts
Oct 02 19:15:04 compute-0 ceph-mon[191910]: 8.14 scrub ok
Oct 02 19:15:04 compute-0 ceph-mon[191910]: pgmap v292: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:05 compute-0 ceph-mon[191910]: 8.19 scrub starts
Oct 02 19:15:05 compute-0 ceph-mon[191910]: 8.19 scrub ok
Oct 02 19:15:06 compute-0 ceph-mon[191910]: pgmap v293: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:08 compute-0 ceph-mon[191910]: pgmap v294: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 02 19:15:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 02 19:15:09 compute-0 sshd-session[235037]: Accepted publickey for zuul from 192.168.122.30 port 53180 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:15:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:09 compute-0 systemd-logind[793]: New session 44 of user zuul.
Oct 02 19:15:09 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 02 19:15:09 compute-0 sshd-session[235037]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:15:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:10 compute-0 python3.9[235190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:15:10 compute-0 ceph-mon[191910]: 8.1e scrub starts
Oct 02 19:15:10 compute-0 ceph-mon[191910]: 8.1e scrub ok
Oct 02 19:15:10 compute-0 ceph-mon[191910]: pgmap v295: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 02 19:15:11 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 02 19:15:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 02 19:15:11 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 02 19:15:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:11 compute-0 podman[235291]: 2025-10-02 19:15:11.704479795 +0000 UTC m=+0.135134943 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct 02 19:15:11 compute-0 podman[235295]: 2025-10-02 19:15:11.710187815 +0000 UTC m=+0.130219714 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:15:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:15:12 compute-0 python3.9[235388]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:15:12 compute-0 ceph-mon[191910]: 9.2 scrub starts
Oct 02 19:15:12 compute-0 ceph-mon[191910]: 9.2 scrub ok
Oct 02 19:15:12 compute-0 ceph-mon[191910]: 8.1f scrub starts
Oct 02 19:15:12 compute-0 ceph-mon[191910]: 8.1f scrub ok
Oct 02 19:15:12 compute-0 ceph-mon[191910]: pgmap v296: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:13 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 02 19:15:13 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 02 19:15:14 compute-0 python3.9[235589]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:15:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:14 compute-0 sshd-session[235040]: Connection closed by 192.168.122.30 port 53180
Oct 02 19:15:14 compute-0 sshd-session[235037]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:15:14 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 02 19:15:14 compute-0 systemd[1]: session-44.scope: Consumed 4.285s CPU time.
Oct 02 19:15:14 compute-0 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Oct 02 19:15:14 compute-0 systemd-logind[793]: Removed session 44.
Oct 02 19:15:14 compute-0 ceph-mon[191910]: pgmap v297: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:15 compute-0 podman[235615]: 2025-10-02 19:15:15.739184546 +0000 UTC m=+0.153181667 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350)
Oct 02 19:15:15 compute-0 ceph-mon[191910]: 11.18 scrub starts
Oct 02 19:15:15 compute-0 ceph-mon[191910]: 11.18 scrub ok
Oct 02 19:15:15 compute-0 podman[235616]: 2025-10-02 19:15:15.789285613 +0000 UTC m=+0.200267195 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:15:16 compute-0 ceph-mon[191910]: pgmap v298: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 02 19:15:17 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 02 19:15:17 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 02 19:15:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:17 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.791506) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517791647, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7178, "num_deletes": 251, "total_data_size": 8739460, "memory_usage": 9024128, "flush_reason": "Manual Compaction"}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517847244, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7112948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 7315, "table_properties": {"data_size": 7086663, "index_size": 17146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74674, "raw_average_key_size": 23, "raw_value_size": 7024680, "raw_average_value_size": 2182, "num_data_blocks": 753, "num_entries": 3218, "num_filter_entries": 3218, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432094, "oldest_key_time": 1759432094, "file_creation_time": 1759432517, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55912 microseconds, and 28836 cpu microseconds.
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.847362) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7112948 bytes OK
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.847451) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.851007) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.851055) EVENT_LOG_v1 {"time_micros": 1759432517851043, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.851112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8708307, prev total WAL file size 8708307, number of live WAL files 2.
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.855311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6946KB) 13(52KB) 8(1944B)]
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517855653, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7169045, "oldest_snapshot_seqno": -1}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3033 keys, 7124919 bytes, temperature: kUnknown
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517924835, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7124919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7099065, "index_size": 17168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72719, "raw_average_key_size": 23, "raw_value_size": 7038704, "raw_average_value_size": 2320, "num_data_blocks": 756, "num_entries": 3033, "num_filter_entries": 3033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759432517, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.925185) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7124919 bytes
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.927883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.5 rd, 102.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.8, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3323, records dropped: 290 output_compression: NoCompression
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.927920) EVENT_LOG_v1 {"time_micros": 1759432517927903, "job": 4, "event": "compaction_finished", "compaction_time_micros": 69263, "compaction_time_cpu_micros": 38711, "output_level": 6, "num_output_files": 1, "total_output_size": 7124919, "num_input_records": 3323, "num_output_records": 3033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517930490, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517930590, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432517930642, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 02 19:15:17 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:15:17.854989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:15:18 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 02 19:15:18 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 02 19:15:18 compute-0 ceph-mon[191910]: 9.4 scrub starts
Oct 02 19:15:18 compute-0 ceph-mon[191910]: 9.4 scrub ok
Oct 02 19:15:18 compute-0 ceph-mon[191910]: 8.1d scrub starts
Oct 02 19:15:18 compute-0 ceph-mon[191910]: pgmap v299: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:18 compute-0 ceph-mon[191910]: 8.1d scrub ok
Oct 02 19:15:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:19 compute-0 ceph-mon[191910]: 9.a scrub starts
Oct 02 19:15:19 compute-0 ceph-mon[191910]: 9.a scrub ok
Oct 02 19:15:19 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 02 19:15:19 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 02 19:15:20 compute-0 sshd-session[235662]: Accepted publickey for zuul from 192.168.122.30 port 57488 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:15:20 compute-0 systemd-logind[793]: New session 45 of user zuul.
Oct 02 19:15:20 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 02 19:15:20 compute-0 sshd-session[235662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:15:20 compute-0 ceph-mon[191910]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:20 compute-0 ceph-mon[191910]: 8.1c scrub starts
Oct 02 19:15:20 compute-0 ceph-mon[191910]: 8.1c scrub ok
Oct 02 19:15:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 02 19:15:20 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 02 19:15:21 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 02 19:15:21 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 02 19:15:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 02 19:15:21 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 02 19:15:21 compute-0 podman[235773]: 2025-10-02 19:15:21.69880427 +0000 UTC m=+0.126663180 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:15:21 compute-0 podman[235779]: 2025-10-02 19:15:21.702050356 +0000 UTC m=+0.119371439 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:15:21 compute-0 ceph-mon[191910]: 8.12 scrub starts
Oct 02 19:15:21 compute-0 ceph-mon[191910]: 8.12 scrub ok
Oct 02 19:15:22 compute-0 python3.9[235855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:15:22 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.e deep-scrub starts
Oct 02 19:15:22 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 11.e deep-scrub ok
Oct 02 19:15:22 compute-0 podman[235884]: 2025-10-02 19:15:22.717275247 +0000 UTC m=+0.143588705 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, container_name=kepler)
Oct 02 19:15:22 compute-0 ceph-mon[191910]: 9.10 scrub starts
Oct 02 19:15:22 compute-0 ceph-mon[191910]: 9.10 scrub ok
Oct 02 19:15:22 compute-0 ceph-mon[191910]: pgmap v301: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:22 compute-0 ceph-mon[191910]: 8.1a scrub starts
Oct 02 19:15:22 compute-0 ceph-mon[191910]: 8.1a scrub ok
Oct 02 19:15:23 compute-0 python3.9[236028]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:15:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:23 compute-0 ceph-mon[191910]: 11.e deep-scrub starts
Oct 02 19:15:23 compute-0 ceph-mon[191910]: 11.e deep-scrub ok
Oct 02 19:15:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 02 19:15:23 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 02 19:15:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:24 compute-0 sudo[236182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnwvcladaxdscbiozsanwmcniisxjub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432524.083725-40-191705544741090/AnsiballZ_setup.py'
Oct 02 19:15:24 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 02 19:15:24 compute-0 sudo[236182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:24 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 02 19:15:24 compute-0 ceph-mon[191910]: pgmap v302: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:24 compute-0 ceph-mon[191910]: 11.1c scrub starts
Oct 02 19:15:24 compute-0 ceph-mon[191910]: 11.1c scrub ok
Oct 02 19:15:24 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 02 19:15:24 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 02 19:15:24 compute-0 python3.9[236184]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:15:25 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 02 19:15:25 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 02 19:15:25 compute-0 sudo[236182]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:25 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 02 19:15:25 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 02 19:15:25 compute-0 ceph-mon[191910]: 8.c scrub starts
Oct 02 19:15:25 compute-0 ceph-mon[191910]: 8.c scrub ok
Oct 02 19:15:25 compute-0 ceph-mon[191910]: 8.11 scrub starts
Oct 02 19:15:25 compute-0 ceph-mon[191910]: 8.11 scrub ok
Oct 02 19:15:25 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 02 19:15:25 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 02 19:15:26 compute-0 sudo[236266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmyhkkaahnerwukxxeenlxucrbqfvht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432524.083725-40-191705544741090/AnsiballZ_dnf.py'
Oct 02 19:15:26 compute-0 sudo[236266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:26 compute-0 python3.9[236268]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:15:26 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 02 19:15:26 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 9.12 scrub starts
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 9.12 scrub ok
Oct 02 19:15:26 compute-0 ceph-mon[191910]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 8.18 scrub starts
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 8.18 scrub ok
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 11.1e scrub starts
Oct 02 19:15:26 compute-0 ceph-mon[191910]: 11.1e scrub ok
Oct 02 19:15:26 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 02 19:15:26 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 02 19:15:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 02 19:15:27 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 02 19:15:27 compute-0 sudo[236266]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 02 19:15:27 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 02 19:15:27 compute-0 ceph-mon[191910]: 9.d scrub starts
Oct 02 19:15:27 compute-0 ceph-mon[191910]: 9.d scrub ok
Oct 02 19:15:27 compute-0 ceph-mon[191910]: 11.15 scrub starts
Oct 02 19:15:27 compute-0 ceph-mon[191910]: 11.15 scrub ok
Oct 02 19:15:28 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Oct 02 19:15:28 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Oct 02 19:15:28 compute-0 sudo[236419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dofcofcavgbekkclesgvtvrzncnistbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432527.8494008-52-37856396083356/AnsiballZ_setup.py'
Oct 02 19:15:28 compute-0 sudo[236419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 02 19:15:28 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 02 19:15:28 compute-0 python3.9[236421]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:15:28 compute-0 ceph-mon[191910]: 9.14 scrub starts
Oct 02 19:15:28 compute-0 ceph-mon[191910]: 9.14 scrub ok
Oct 02 19:15:28 compute-0 ceph-mon[191910]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:28 compute-0 ceph-mon[191910]: 9.1 scrub starts
Oct 02 19:15:28 compute-0 ceph-mon[191910]: 9.1 scrub ok
Oct 02 19:15:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 02 19:15:29 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 02 19:15:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:29 compute-0 sudo[236419]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 02 19:15:29 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 02 19:15:29 compute-0 podman[157186]: time="2025-10-02T19:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:15:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:15:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6825 "" "Go-http-client/1.1"
Oct 02 19:15:29 compute-0 ceph-mon[191910]: 9.1a deep-scrub starts
Oct 02 19:15:29 compute-0 ceph-mon[191910]: 9.1a deep-scrub ok
Oct 02 19:15:29 compute-0 ceph-mon[191910]: 9.5 scrub starts
Oct 02 19:15:29 compute-0 ceph-mon[191910]: 9.5 scrub ok
Oct 02 19:15:30 compute-0 sudo[236622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szkhzrlhbfkhsqrlxxonepunuzlyjfdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432529.6205256-63-65921635282070/AnsiballZ_file.py'
Oct 02 19:15:30 compute-0 sudo[236622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:30 compute-0 python3.9[236624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:30 compute-0 sudo[236622]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:30 compute-0 ceph-mon[191910]: 11.5 scrub starts
Oct 02 19:15:30 compute-0 ceph-mon[191910]: 11.5 scrub ok
Oct 02 19:15:30 compute-0 ceph-mon[191910]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:30 compute-0 ceph-mon[191910]: 9.1d scrub starts
Oct 02 19:15:30 compute-0 ceph-mon[191910]: 9.1d scrub ok
Oct 02 19:15:31 compute-0 openstack_network_exporter[159337]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:31 compute-0 openstack_network_exporter[159337]: ERROR   19:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:15:31 compute-0 openstack_network_exporter[159337]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:31 compute-0 openstack_network_exporter[159337]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:15:31 compute-0 openstack_network_exporter[159337]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:15:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:31 compute-0 sudo[236774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwequszgnlmspeictzzkkpaywcioctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432530.9453979-71-242923815585895/AnsiballZ_command.py'
Oct 02 19:15:31 compute-0 sudo[236774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:31 compute-0 python3.9[236776]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:15:32 compute-0 sudo[236774]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 02 19:15:32 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 02 19:15:32 compute-0 ceph-mon[191910]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:33 compute-0 sudo[236939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvqqzsimnlsotaygimhhqqbhtrwvvjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432532.3635669-79-86065907497050/AnsiballZ_stat.py'
Oct 02 19:15:33 compute-0 sudo[236939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:33 compute-0 python3.9[236941]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:15:33 compute-0 sudo[236939]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:15:33 compute-0 sudo[237017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqokwybkkweuxmuauttkqyxtoljprxrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432532.3635669-79-86065907497050/AnsiballZ_file.py'
Oct 02 19:15:33 compute-0 sudo[237017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:33 compute-0 ceph-mon[191910]: 9.9 scrub starts
Oct 02 19:15:33 compute-0 ceph-mon[191910]: 9.9 scrub ok
Oct 02 19:15:34 compute-0 python3.9[237019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:34 compute-0 sudo[237017]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 02 19:15:34 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 02 19:15:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:34 compute-0 ceph-mon[191910]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:34 compute-0 ceph-mon[191910]: 11.7 scrub starts
Oct 02 19:15:34 compute-0 ceph-mon[191910]: 11.7 scrub ok
Oct 02 19:15:34 compute-0 sudo[237169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjpolskjxivcugnwsrdilurqmuqsebhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432534.405424-91-80022181599365/AnsiballZ_stat.py'
Oct 02 19:15:34 compute-0 sudo[237169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:35 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 02 19:15:35 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 02 19:15:35 compute-0 python3.9[237171]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:15:35 compute-0 sudo[237169]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:35 compute-0 sudo[237247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hregydxcqpgnwzlubbbgqrctvsdoqmzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432534.405424-91-80022181599365/AnsiballZ_file.py'
Oct 02 19:15:35 compute-0 sudo[237247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:35 compute-0 python3.9[237249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:15:35 compute-0 sudo[237247]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:35 compute-0 ceph-mon[191910]: 11.a scrub starts
Oct 02 19:15:35 compute-0 ceph-mon[191910]: 11.a scrub ok
Oct 02 19:15:36 compute-0 sudo[237399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lblzrgytdbrgrsldfpqnasabjkrqlqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432536.116885-104-242789864214369/AnsiballZ_ini_file.py'
Oct 02 19:15:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 02 19:15:36 compute-0 sudo[237399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:36 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 02 19:15:36 compute-0 python3.9[237401]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:15:36 compute-0 ceph-mon[191910]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:37 compute-0 sudo[237399]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:37 compute-0 sudo[237493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:37 compute-0 sudo[237493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:37 compute-0 sudo[237493]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:37 compute-0 sudo[237543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:15:37 compute-0 sudo[237543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:37 compute-0 sudo[237543]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:37 compute-0 sudo[237607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvpgpsrtcmygbmklelwfbheafeemdmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432537.2389994-104-118297964004627/AnsiballZ_ini_file.py'
Oct 02 19:15:37 compute-0 sudo[237607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:37 compute-0 sudo[237598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:37 compute-0 sudo[237598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:37 compute-0 sudo[237598]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:37 compute-0 sudo[237629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:15:37 compute-0 sudo[237629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:37 compute-0 python3.9[237621]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:15:37 compute-0 sudo[237607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 ceph-mon[191910]: 9.11 scrub starts
Oct 02 19:15:38 compute-0 ceph-mon[191910]: 9.11 scrub ok
Oct 02 19:15:38 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 02 19:15:38 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 02 19:15:38 compute-0 sudo[237629]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1bd94c60-5f07-4538-aae3-58f1494baf8b does not exist
Oct 02 19:15:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b7d3b51a-c327-4fda-b2ca-62e8d0a6cc41 does not exist
Oct 02 19:15:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a9437c0a-ab49-4e3c-8a33-577056ff2317 does not exist
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:15:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:15:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:15:38 compute-0 sudo[237807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:38 compute-0 sudo[237807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:38 compute-0 sudo[237807]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 sudo[237862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvqfmqjyurnqzbeelpcxubbmkzwiqgup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432538.1866019-104-176081968298452/AnsiballZ_ini_file.py'
Oct 02 19:15:38 compute-0 sudo[237862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:38 compute-0 sudo[237859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:15:38 compute-0 sudo[237859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:38 compute-0 sudo[237859]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 02 19:15:38 compute-0 sudo[237887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:38 compute-0 sudo[237887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:38 compute-0 sudo[237887]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 python3.9[237874]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:15:38 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 02 19:15:38 compute-0 sudo[237862]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:38 compute-0 sudo[237912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:15:38 compute-0 sudo[237912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:39 compute-0 ceph-mon[191910]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:39 compute-0 ceph-mon[191910]: 11.c scrub starts
Oct 02 19:15:39 compute-0 ceph-mon[191910]: 11.c scrub ok
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:15:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:15:39 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Oct 02 19:15:39 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Oct 02 19:15:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.417123229 +0000 UTC m=+0.059431573 container create a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.380365653 +0000 UTC m=+0.022674007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:39 compute-0 systemd[1]: Started libpod-conmon-a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77.scope.
Oct 02 19:15:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.556618525 +0000 UTC m=+0.198926919 container init a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.574074824 +0000 UTC m=+0.216383148 container start a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.580461822 +0000 UTC m=+0.222770236 container attach a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:15:39 compute-0 laughing_ptolemy[238096]: 167 167
Oct 02 19:15:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:39 compute-0 systemd[1]: libpod-a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77.scope: Deactivated successfully.
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.588579215 +0000 UTC m=+0.230887569 container died a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1f0ab48bbe77980dd7053840deeaa4ef41bcf98a8071a19ff0990b809fc01ea-merged.mount: Deactivated successfully.
Oct 02 19:15:39 compute-0 podman[238068]: 2025-10-02 19:15:39.667563671 +0000 UTC m=+0.309872005 container remove a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:15:39 compute-0 systemd[1]: libpod-conmon-a7c8e3c336e925a7fb373a59e1ac54ab1beb2da075d7fe9b6c4130f2d8a60c77.scope: Deactivated successfully.
Oct 02 19:15:39 compute-0 sudo[238160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxeqhoiprlgrfajzhmmnkytnbcabgylk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432539.1368594-104-114812401255629/AnsiballZ_ini_file.py'
Oct 02 19:15:39 compute-0 sudo[238160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:39 compute-0 podman[238168]: 2025-10-02 19:15:39.8775923 +0000 UTC m=+0.073496321 container create b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:15:39 compute-0 python3.9[238162]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
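[annotation] The ini_file task above writes network_backend under the [network] section of /etc/containers/containers.conf (create=True, mode 0644, no_extra_spaces=False). A minimal Python sketch of the same edit follows; it is illustrative only, not how edpm_ansible applies it, and whether the quotes around netavark land in the file depends on how the play quoted the value (containers.conf is TOML-flavored, so quoted strings are the usual form).

```python
# Hypothetical re-implementation of the logged ini_file task.
# Path, section, option and value are copied from the journal line above.
import configparser

CONF = "/etc/containers/containers.conf"

cfg = configparser.ConfigParser()
cfg.read(CONF)  # a missing file is tolerated, mirroring create=True
if not cfg.has_section("network"):
    cfg.add_section("network")
# TOML string values are quoted; assumption based on value="netavark" in the log.
cfg.set("network", "network_backend", '"netavark"')

with open(CONF, "w") as fh:
    cfg.write(fh)  # emits: network_backend = "netavark"
```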
Oct 02 19:15:39 compute-0 systemd[1]: Started libpod-conmon-b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e.scope.
Oct 02 19:15:39 compute-0 podman[238168]: 2025-10-02 19:15:39.853911968 +0000 UTC m=+0.049816029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:39 compute-0 sudo[238160]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:40 compute-0 ceph-mon[191910]: 9.3 scrub starts
Oct 02 19:15:40 compute-0 ceph-mon[191910]: 9.3 scrub ok
Oct 02 19:15:40 compute-0 ceph-mon[191910]: 11.13 deep-scrub starts
Oct 02 19:15:40 compute-0 ceph-mon[191910]: 11.13 deep-scrub ok
Oct 02 19:15:40 compute-0 podman[238168]: 2025-10-02 19:15:40.042275518 +0000 UTC m=+0.238179639 container init b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 19:15:40 compute-0 podman[238168]: 2025-10-02 19:15:40.0636533 +0000 UTC m=+0.259557311 container start b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:15:40 compute-0 podman[238168]: 2025-10-02 19:15:40.06859521 +0000 UTC m=+0.264499231 container attach b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:15:40 compute-0 sudo[238345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhxjlumusoeyxpncmcfhmqvirdzkiyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432540.382183-135-110472595436625/AnsiballZ_dnf.py'
Oct 02 19:15:40 compute-0 sudo[238345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:41 compute-0 ceph-mon[191910]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:41 compute-0 python3.9[238348]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:15:41 compute-0 hungry_curie[238184]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:15:41 compute-0 hungry_curie[238184]: --> relative data size: 1.0
Oct 02 19:15:41 compute-0 hungry_curie[238184]: --> All data devices are unavailable
Oct 02 19:15:41 compute-0 systemd[1]: libpod-b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e.scope: Deactivated successfully.
Oct 02 19:15:41 compute-0 systemd[1]: libpod-b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e.scope: Consumed 1.250s CPU time.
Oct 02 19:15:41 compute-0 podman[238168]: 2025-10-02 19:15:41.388979093 +0000 UTC m=+1.584883104 container died b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4206b263af028b3c99d0af730e29f6eeea6cc80f726d2c0e39c8ef4ef35e800f-merged.mount: Deactivated successfully.
Oct 02 19:15:41 compute-0 podman[238168]: 2025-10-02 19:15:41.469697065 +0000 UTC m=+1.665601086 container remove b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:15:41 compute-0 systemd[1]: libpod-conmon-b765cabdd1dae52e0e4a85bb3e471ac21a607df32244081b72f89c8769b5941e.scope: Deactivated successfully.
Oct 02 19:15:41 compute-0 sudo[237912]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:41 compute-0 sudo[238380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:41 compute-0 sudo[238380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:41 compute-0 sudo[238380]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:41 compute-0 sudo[238405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:15:41 compute-0 sudo[238405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:41 compute-0 sudo[238405]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:41 compute-0 sudo[238442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:41 compute-0 sudo[238442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:41 compute-0 sudo[238442]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:41 compute-0 podman[238430]: 2025-10-02 19:15:41.984049903 +0000 UTC m=+0.153433313 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:15:41 compute-0 podman[238429]: 2025-10-02 19:15:41.994914119 +0000 UTC m=+0.163711134 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
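[annotation] The two health_status events above embed each container's edpm definition as a Python-style dict literal in the config_data= field. A hedged helper for pulling that field out of a captured journal line is sketched below; the balanced-brace scan is an assumption about how podman renders the label, not a podman API, and it relies on the literal containing no braces inside strings (true for the lines above).

```python
# Hypothetical parser for the config_data= field seen in the health_status lines.
import ast

def parse_config_data(journal_line: str) -> dict:
    start = journal_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(journal_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The literal uses Python syntax (single quotes, True),
                # so ast.literal_eval can evaluate it safely.
                return ast.literal_eval(journal_line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# e.g. parse_config_data(line)["healthcheck"]["test"] on the first line above
# yields '/openstack/healthcheck podman_exporter'.
```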
Oct 02 19:15:42 compute-0 sudo[238494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:15:42 compute-0 sudo[238494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:42 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 02 19:15:42 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 02 19:15:42 compute-0 sudo[238345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.606250286 +0000 UTC m=+0.079762337 container create f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.573098245 +0000 UTC m=+0.046610356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:42 compute-0 systemd[1]: Started libpod-conmon-f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf.scope.
Oct 02 19:15:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.759739001 +0000 UTC m=+0.233251102 container init f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.780293231 +0000 UTC m=+0.253805282 container start f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.786489134 +0000 UTC m=+0.260001185 container attach f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:15:42 compute-0 friendly_merkle[238597]: 167 167
Oct 02 19:15:42 compute-0 systemd[1]: libpod-f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf.scope: Deactivated successfully.
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.794791572 +0000 UTC m=+0.268303633 container died f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b637ac31018a7b8ad667a8a4693034c0e1448c5358e37ec5a0f869bb38fc33c-merged.mount: Deactivated successfully.
Oct 02 19:15:42 compute-0 podman[238557]: 2025-10-02 19:15:42.873276625 +0000 UTC m=+0.346788656 container remove f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:15:42 compute-0 systemd[1]: libpod-conmon-f019173c64cdb844936c3e17744fc36b5ae61248720b8747fd38943fb1faeecf.scope: Deactivated successfully.
Oct 02 19:15:43 compute-0 ceph-mon[191910]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:43 compute-0 ceph-mon[191910]: 11.16 scrub starts
Oct 02 19:15:43 compute-0 ceph-mon[191910]: 11.16 scrub ok
Oct 02 19:15:43 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 02 19:15:43 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 02 19:15:43 compute-0 podman[238642]: 2025-10-02 19:15:43.128460621 +0000 UTC m=+0.084852801 container create 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:15:43 compute-0 podman[238642]: 2025-10-02 19:15:43.091986833 +0000 UTC m=+0.048379093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:43 compute-0 systemd[1]: Started libpod-conmon-6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc.scope.
Oct 02 19:15:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64b6b9443d44ada5f7e7b96a13b1525f47b2259259fbaa15cabf6a6f690d536/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64b6b9443d44ada5f7e7b96a13b1525f47b2259259fbaa15cabf6a6f690d536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64b6b9443d44ada5f7e7b96a13b1525f47b2259259fbaa15cabf6a6f690d536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64b6b9443d44ada5f7e7b96a13b1525f47b2259259fbaa15cabf6a6f690d536/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:43 compute-0 podman[238642]: 2025-10-02 19:15:43.28858565 +0000 UTC m=+0.244977840 container init 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:15:43 compute-0 podman[238642]: 2025-10-02 19:15:43.313155176 +0000 UTC m=+0.269547346 container start 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:15:43 compute-0 podman[238642]: 2025-10-02 19:15:43.318933838 +0000 UTC m=+0.275326018 container attach 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:15:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:43 compute-0 sudo[238765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqockjlwcncjcypnowunsclkztrsmpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432543.029415-146-117052989034838/AnsiballZ_setup.py'
Oct 02 19:15:43 compute-0 sudo[238765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:43 compute-0 python3.9[238767]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:15:43 compute-0 sudo[238765]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:44 compute-0 ceph-mon[191910]: 11.1d scrub starts
Oct 02 19:15:44 compute-0 ceph-mon[191910]: 11.1d scrub ok
Oct 02 19:15:44 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 02 19:15:44 compute-0 sleepy_tu[238687]: {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     "0": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "devices": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "/dev/loop3"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             ],
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_name": "ceph_lv0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_size": "21470642176",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "name": "ceph_lv0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "tags": {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_name": "ceph",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.crush_device_class": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.encrypted": "0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_id": "0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.vdo": "0"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             },
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "vg_name": "ceph_vg0"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         }
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     ],
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     "1": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "devices": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "/dev/loop4"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             ],
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_name": "ceph_lv1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_size": "21470642176",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "name": "ceph_lv1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "tags": {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_name": "ceph",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.crush_device_class": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.encrypted": "0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_id": "1",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.vdo": "0"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             },
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "vg_name": "ceph_vg1"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         }
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     ],
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     "2": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "devices": [
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "/dev/loop5"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             ],
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_name": "ceph_lv2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_size": "21470642176",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "name": "ceph_lv2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "tags": {
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.cluster_name": "ceph",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.crush_device_class": "",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.encrypted": "0",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osd_id": "2",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:                 "ceph.vdo": "0"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             },
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "type": "block",
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:             "vg_name": "ceph_vg2"
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:         }
Oct 02 19:15:44 compute-0 sleepy_tu[238687]:     ]
Oct 02 19:15:44 compute-0 sleepy_tu[238687]: }
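[annotation] The JSON block just printed by the sleepy_tu container is the output of `ceph-volume ... lvm list --format json` (see the sudo COMMAND logged at 19:15:42): a map of OSD id to the logical volumes backing it, with the ceph.* LV tags cephadm uses to reassemble OSD metadata. A minimal sketch of consuming it follows, assuming the output has been captured to a file named lvm_list.json (the file name and summary format are illustrative).

```python
# Summarize each OSD's backing LV from the captured ceph-volume output above.
import json

with open("lvm_list.json") as fh:
    osds = json.load(fh)  # {"0": [ {lv metadata...} ], "1": [...], "2": [...]}

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"encrypted={tags['ceph.encrypted']})")

# For the log above this prints, e.g.:
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=dbf9fafa-..., encrypted=0)
```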
Oct 02 19:15:44 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 02 19:15:44 compute-0 systemd[1]: libpod-6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc.scope: Deactivated successfully.
Oct 02 19:15:44 compute-0 podman[238642]: 2025-10-02 19:15:44.137000947 +0000 UTC m=+1.093393107 container died 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b64b6b9443d44ada5f7e7b96a13b1525f47b2259259fbaa15cabf6a6f690d536-merged.mount: Deactivated successfully.
Oct 02 19:15:44 compute-0 podman[238642]: 2025-10-02 19:15:44.245299394 +0000 UTC m=+1.201691564 container remove 6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:15:44 compute-0 systemd[1]: libpod-conmon-6563fc649db5bad1cb029242068b2f330406578a848eff02ab54da7f6a4927dc.scope: Deactivated successfully.
Oct 02 19:15:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:44 compute-0 sudo[238494]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:44 compute-0 sudo[238834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:44 compute-0 sudo[238834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:44 compute-0 sudo[238834]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:44 compute-0 sudo[238890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:15:44 compute-0 sudo[238890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:44 compute-0 sudo[238890]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:44 compute-0 sudo[238936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:44 compute-0 sudo[238936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:44 compute-0 sudo[238936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:44 compute-0 sudo[239028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqawkdksdrwzfxipqnplypjzssbwvns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432544.2883203-154-164424784651746/AnsiballZ_stat.py'
Oct 02 19:15:44 compute-0 sudo[239028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:44 compute-0 sudo[238993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:15:44 compute-0 sudo[238993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
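[annotation] The sudo COMMAND lines at 19:15:42 and 19:15:44 show the inventory pattern cephadm drives on this host: the copied cephadm binary wraps ceph-volume in a short-lived container (the laughing_ptolemy/hungry_curie/sleepy_tu/... containers above) and is invoked once per source, first `lvm list` and then `raw list`, each with --format json. A hedged sketch of that call pattern follows; the paths, fsid, and image digest are copied verbatim from the log, but running this by hand is illustrative only since the orchestrator normally drives it.

```python
# Sketch of the logged cephadm ceph-volume invocations, assuming passwordless sudo.
import json
import subprocess

FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def ceph_volume(*args: str) -> dict:
    """Run one wrapped ceph-volume subcommand and decode its JSON output."""
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", *args, "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    return json.loads(out)

lvm_devices = ceph_volume("lvm", "list")  # per-OSD LV metadata, as dumped above
raw_devices = ceph_volume("raw", "list")  # raw bluestore device view
```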
Oct 02 19:15:44 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 02 19:15:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 02 19:15:44 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 02 19:15:44 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 02 19:15:45 compute-0 python3.9[239035]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:15:45 compute-0 ceph-mon[191910]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:45 compute-0 ceph-mon[191910]: 10.11 scrub starts
Oct 02 19:15:45 compute-0 ceph-mon[191910]: 10.11 scrub ok
Oct 02 19:15:45 compute-0 sudo[239028]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.340702484 +0000 UTC m=+0.077353484 container create 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.314366922 +0000 UTC m=+0.051017952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:45 compute-0 systemd[1]: Started libpod-conmon-0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d.scope.
Oct 02 19:15:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.474593443 +0000 UTC m=+0.211244483 container init 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.485265273 +0000 UTC m=+0.221916303 container start 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:15:45 compute-0 vigilant_mcnulty[239139]: 167 167
Oct 02 19:15:45 compute-0 systemd[1]: libpod-0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d.scope: Deactivated successfully.
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.494532907 +0000 UTC m=+0.231183927 container attach 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:15:45 compute-0 conmon[239139]: conmon 0b0f6bcc0604b09be3e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d.scope/container/memory.events
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.495898773 +0000 UTC m=+0.232549793 container died 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0605605f0ff7f484e8878bb3ad9a625b037d6b9853ce4d6c6a0df6ef761f1984-merged.mount: Deactivated successfully.
Oct 02 19:15:45 compute-0 podman[239101]: 2025-10-02 19:15:45.585711533 +0000 UTC m=+0.322362533 container remove 0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:15:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:45 compute-0 systemd[1]: libpod-conmon-0b0f6bcc0604b09be3e16f6e07f298852395329cf0cf02660cabf6c6484c3a9d.scope: Deactivated successfully.
Oct 02 19:15:45 compute-0 podman[239214]: 2025-10-02 19:15:45.841826525 +0000 UTC m=+0.066658483 container create 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:15:45 compute-0 podman[239214]: 2025-10-02 19:15:45.81615624 +0000 UTC m=+0.040988188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:15:45 compute-0 systemd[1]: Started libpod-conmon-5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af.scope.
Oct 02 19:15:45 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct 02 19:15:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:15:45 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct 02 19:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bf1766ae584059d64e0b4f5716789246c1054836da6dc8c5ce0701ab5dcb3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bf1766ae584059d64e0b4f5716789246c1054836da6dc8c5ce0701ab5dcb3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bf1766ae584059d64e0b4f5716789246c1054836da6dc8c5ce0701ab5dcb3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43bf1766ae584059d64e0b4f5716789246c1054836da6dc8c5ce0701ab5dcb3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:15:45 compute-0 podman[239214]: 2025-10-02 19:15:45.976052373 +0000 UTC m=+0.200884311 container init 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:15:45 compute-0 podman[239214]: 2025-10-02 19:15:45.99575226 +0000 UTC m=+0.220584188 container start 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:15:45 compute-0 sudo[239315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdkgwkwxiutxkxpncecfnljwwjgdrkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432545.4177768-163-192473750548760/AnsiballZ_stat.py'
Oct 02 19:15:46 compute-0 podman[239214]: 2025-10-02 19:15:46.002669172 +0000 UTC m=+0.227501100 container attach 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:15:46 compute-0 sudo[239315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:46 compute-0 podman[239251]: 2025-10-02 19:15:46.033614316 +0000 UTC m=+0.147641282 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 02 19:15:46 compute-0 podman[239252]: 2025-10-02 19:15:46.055630444 +0000 UTC m=+0.165838810 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:15:46 compute-0 ceph-mon[191910]: 11.12 scrub starts
Oct 02 19:15:46 compute-0 ceph-mon[191910]: 9.1b scrub starts
Oct 02 19:15:46 compute-0 ceph-mon[191910]: 11.12 scrub ok
Oct 02 19:15:46 compute-0 ceph-mon[191910]: 9.1b scrub ok
Oct 02 19:15:46 compute-0 python3.9[239326]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:15:46 compute-0 sudo[239315]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:46 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 02 19:15:46 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 02 19:15:47 compute-0 loving_newton[239279]: {
Oct 02 19:15:47 compute-0 loving_newton[239279]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_id": 1,
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "type": "bluestore"
Oct 02 19:15:47 compute-0 loving_newton[239279]:     },
Oct 02 19:15:47 compute-0 loving_newton[239279]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_id": 2,
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "type": "bluestore"
Oct 02 19:15:47 compute-0 loving_newton[239279]:     },
Oct 02 19:15:47 compute-0 loving_newton[239279]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_id": 0,
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:15:47 compute-0 loving_newton[239279]:         "type": "bluestore"
Oct 02 19:15:47 compute-0 loving_newton[239279]:     }
Oct 02 19:15:47 compute-0 loving_newton[239279]: }
Oct 02 19:15:47 compute-0 systemd[1]: libpod-5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af.scope: Deactivated successfully.
Oct 02 19:15:47 compute-0 systemd[1]: libpod-5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af.scope: Consumed 1.075s CPU time.
Oct 02 19:15:47 compute-0 podman[239214]: 2025-10-02 19:15:47.066104101 +0000 UTC m=+1.290936049 container died 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:15:47 compute-0 ceph-mon[191910]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:47 compute-0 ceph-mon[191910]: 9.e deep-scrub starts
Oct 02 19:15:47 compute-0 ceph-mon[191910]: 9.e deep-scrub ok
Oct 02 19:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-43bf1766ae584059d64e0b4f5716789246c1054836da6dc8c5ce0701ab5dcb3c-merged.mount: Deactivated successfully.
Oct 02 19:15:47 compute-0 podman[239214]: 2025-10-02 19:15:47.175008433 +0000 UTC m=+1.399840361 container remove 5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:15:47 compute-0 systemd[1]: libpod-conmon-5d225a90733808b4f5653e9e2bc63d85d4bad3b944ce095bc5c34d61758963af.scope: Deactivated successfully.
Oct 02 19:15:47 compute-0 sudo[238993]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:15:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:15:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:47 compute-0 sudo[239523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzddzxtyklxzjjjgxlkaxoudpgpqncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432546.6027346-173-257646907615956/AnsiballZ_service_facts.py'
Oct 02 19:15:47 compute-0 sudo[239523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:47 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f42b8469-ed09-42f9-9877-4c659bc6a28f does not exist
Oct 02 19:15:47 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8500c91a-4050-4b27-bbad-5ce69a9962fd does not exist
Oct 02 19:15:47 compute-0 sudo[239526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:15:47 compute-0 sudo[239526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:47 compute-0 sudo[239526]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:47 compute-0 python3.9[239525]: ansible-service_facts Invoked
Oct 02 19:15:47 compute-0 sudo[239551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:15:47 compute-0 sudo[239551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:15:47 compute-0 sudo[239551]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:47 compute-0 network[239592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:15:47 compute-0 network[239593]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:15:47 compute-0 network[239594]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:15:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Oct 02 19:15:48 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Oct 02 19:15:48 compute-0 ceph-mon[191910]: 9.6 scrub starts
Oct 02 19:15:48 compute-0 ceph-mon[191910]: 9.6 scrub ok
Oct 02 19:15:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:15:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 02 19:15:48 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 02 19:15:48 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 02 19:15:49 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 02 19:15:49 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 02 19:15:49 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 02 19:15:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:49 compute-0 ceph-mon[191910]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:49 compute-0 ceph-mon[191910]: 10.1a deep-scrub starts
Oct 02 19:15:49 compute-0 ceph-mon[191910]: 10.1a deep-scrub ok
Oct 02 19:15:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:50 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 02 19:15:50 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 9.f scrub starts
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 9.b scrub starts
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 9.f scrub ok
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 9.b scrub ok
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 10.10 scrub starts
Oct 02 19:15:50 compute-0 ceph-mon[191910]: 10.10 scrub ok
Oct 02 19:15:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 02 19:15:50 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 02 19:15:51 compute-0 ceph-mon[191910]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:51 compute-0 ceph-mon[191910]: 10.6 scrub starts
Oct 02 19:15:51 compute-0 ceph-mon[191910]: 10.6 scrub ok
Oct 02 19:15:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:51 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 02 19:15:51 compute-0 podman[239688]: 2025-10-02 19:15:51.909151988 +0000 UTC m=+0.126446075 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:15:51 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 02 19:15:51 compute-0 podman[239687]: 2025-10-02 19:15:51.93434473 +0000 UTC m=+0.151010630 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct 02 19:15:52 compute-0 ceph-mon[191910]: 9.16 scrub starts
Oct 02 19:15:52 compute-0 ceph-mon[191910]: 9.16 scrub ok
Oct 02 19:15:52 compute-0 sudo[239523]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 02 19:15:52 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 02 19:15:53 compute-0 ceph-mon[191910]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:53 compute-0 ceph-mon[191910]: 9.7 scrub starts
Oct 02 19:15:53 compute-0 ceph-mon[191910]: 9.7 scrub ok
Oct 02 19:15:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:53 compute-0 podman[239812]: 2025-10-02 19:15:53.724135651 +0000 UTC m=+0.145038093 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Oct 02 19:15:53 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 02 19:15:53 compute-0 ceph-osd[206053]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 02 19:15:54 compute-0 sudo[239952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsumphkyhwrgycmkzmyepvsrjegvcfp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759432553.5212977-186-219132046856555/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759432553.5212977-186-219132046856555/args'
Oct 02 19:15:54 compute-0 sudo[239952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:54 compute-0 sudo[239952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:54 compute-0 ceph-mon[191910]: 9.1c scrub starts
Oct 02 19:15:54 compute-0 ceph-mon[191910]: 9.1c scrub ok
Oct 02 19:15:55 compute-0 ceph-mon[191910]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:55 compute-0 ceph-mon[191910]: 9.1e scrub starts
Oct 02 19:15:55 compute-0 ceph-mon[191910]: 9.1e scrub ok
Oct 02 19:15:55 compute-0 sudo[240119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdtubipwknxpohzhatzawdgxgwtoygm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432554.7881722-197-96434501656760/AnsiballZ_dnf.py'
Oct 02 19:15:55 compute-0 sudo[240119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:55 compute-0 python3.9[240121]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:15:56 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 02 19:15:56 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 02 19:15:56 compute-0 sudo[240119]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:57 compute-0 ceph-mon[191910]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:58 compute-0 sudo[240272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhfosldykfsylqafddixaxqpxoxkzue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432557.4278305-210-239060526893608/AnsiballZ_package_facts.py'
Oct 02 19:15:58 compute-0 sudo[240272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:58 compute-0 ceph-mon[191910]: 9.17 scrub starts
Oct 02 19:15:58 compute-0 ceph-mon[191910]: 9.17 scrub ok
Oct 02 19:15:58 compute-0 python3.9[240274]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 19:15:58 compute-0 sudo[240272]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:15:59 compute-0 ceph-mon[191910]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:15:59 compute-0 podman[157186]: time="2025-10-02T19:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:15:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:15:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6825 "" "Go-http-client/1.1"
Oct 02 19:15:59 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 02 19:15:59 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 02 19:16:00 compute-0 sudo[240424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnlxktkkfmkbjnxdcvuenolbswlmrxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432559.5580614-220-130296651099299/AnsiballZ_stat.py'
Oct 02 19:16:00 compute-0 sudo[240424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:00 compute-0 python3.9[240426]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:00 compute-0 sudo[240424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:00 compute-0 sudo[240502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awioyazitlzclmtppeoebzzdxaktuzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432559.5580614-220-130296651099299/AnsiballZ_file.py'
Oct 02 19:16:00 compute-0 sudo[240502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:01 compute-0 python3.9[240504]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:01 compute-0 sudo[240502]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:01 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 02 19:16:01 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 02 19:16:01 compute-0 openstack_network_exporter[159337]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:01 compute-0 openstack_network_exporter[159337]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:01 compute-0 openstack_network_exporter[159337]: ERROR   19:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:16:01 compute-0 openstack_network_exporter[159337]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:16:01 compute-0 openstack_network_exporter[159337]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:16:01 compute-0 ceph-mon[191910]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:01 compute-0 ceph-mon[191910]: 9.8 scrub starts
Oct 02 19:16:01 compute-0 ceph-mon[191910]: 9.8 scrub ok
Oct 02 19:16:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:02 compute-0 sudo[240654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahgatpvwzezjkmmycbqflbsfcrhowqib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432561.4583912-232-134050940911507/AnsiballZ_stat.py'
Oct 02 19:16:02 compute-0 sudo[240654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:02 compute-0 python3.9[240656]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:02 compute-0 sudo[240654]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:02 compute-0 ceph-mon[191910]: 10.19 scrub starts
Oct 02 19:16:02 compute-0 ceph-mon[191910]: 10.19 scrub ok
Oct 02 19:16:02 compute-0 sudo[240732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvfiymytlhpjzwhvkpyvghzszrpclbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432561.4583912-232-134050940911507/AnsiballZ_file.py'
Oct 02 19:16:02 compute-0 sudo[240732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 02 19:16:02 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 02 19:16:02 compute-0 python3.9[240734]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:02 compute-0 sudo[240732]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:03 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 02 19:16:03 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:16:03
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta']
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:16:03 compute-0 ceph-mon[191910]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:16:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:16:03 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 02 19:16:03 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 02 19:16:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Oct 02 19:16:04 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Oct 02 19:16:04 compute-0 sudo[240884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxhiatfedaotcfsrkmhvcxozngateyqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432563.517733-250-89735462442526/AnsiballZ_lineinfile.py'
Oct 02 19:16:04 compute-0 sudo[240884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:04 compute-0 python3.9[240886]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:04 compute-0 sudo[240884]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:04 compute-0 ceph-mon[191910]: 9.18 scrub starts
Oct 02 19:16:04 compute-0 ceph-mon[191910]: 9.18 scrub ok
Oct 02 19:16:04 compute-0 ceph-mon[191910]: 10.2 scrub starts
Oct 02 19:16:04 compute-0 ceph-mon[191910]: 10.2 scrub ok
Oct 02 19:16:05 compute-0 ceph-mon[191910]: pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:05 compute-0 ceph-mon[191910]: 9.c scrub starts
Oct 02 19:16:05 compute-0 ceph-mon[191910]: 9.c scrub ok
Oct 02 19:16:05 compute-0 ceph-mon[191910]: 10.f deep-scrub starts
Oct 02 19:16:05 compute-0 ceph-mon[191910]: 10.f deep-scrub ok
Oct 02 19:16:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:05 compute-0 sudo[241036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjctyfaucwrdkaycgzbzzfpuwkxbbihv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432565.1643224-265-121491075858485/AnsiballZ_setup.py'
Oct 02 19:16:05 compute-0 sudo[241036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:06 compute-0 python3.9[241038]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:16:06 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 02 19:16:06 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 02 19:16:06 compute-0 sudo[241036]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:06 compute-0 ceph-mon[191910]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Oct 02 19:16:06 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Oct 02 19:16:07 compute-0 sudo[241120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sunftxikybklselzerxsnhecfoxolije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432565.1643224-265-121491075858485/AnsiballZ_systemd.py'
Oct 02 19:16:07 compute-0 sudo[241120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:07 compute-0 ceph-mon[191910]: 10.12 scrub starts
Oct 02 19:16:07 compute-0 ceph-mon[191910]: 10.12 scrub ok
Oct 02 19:16:07 compute-0 python3.9[241122]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:16:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:07 compute-0 sudo[241120]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:07 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 02 19:16:07 compute-0 ceph-osd[208121]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 02 19:16:08 compute-0 sshd-session[235665]: Connection closed by 192.168.122.30 port 57488
Oct 02 19:16:08 compute-0 sshd-session[235662]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:16:08 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 02 19:16:08 compute-0 systemd[1]: session-45.scope: Consumed 37.425s CPU time.
Oct 02 19:16:08 compute-0 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Oct 02 19:16:08 compute-0 systemd-logind[793]: Removed session 45.
Oct 02 19:16:08 compute-0 ceph-mon[191910]: 9.13 deep-scrub starts
Oct 02 19:16:08 compute-0 ceph-mon[191910]: 9.13 deep-scrub ok
Oct 02 19:16:08 compute-0 ceph-mon[191910]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 02 19:16:09 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 02 19:16:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:09 compute-0 ceph-mon[191910]: 9.19 scrub starts
Oct 02 19:16:09 compute-0 ceph-mon[191910]: 9.19 scrub ok
Oct 02 19:16:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:10 compute-0 ceph-mon[191910]: 10.14 scrub starts
Oct 02 19:16:10 compute-0 ceph-mon[191910]: 10.14 scrub ok
Oct 02 19:16:10 compute-0 ceph-mon[191910]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:16:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:16:12 compute-0 ceph-mon[191910]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:12 compute-0 podman[241150]: 2025-10-02 19:16:12.703772201 +0000 UTC m=+0.123115827 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:16:12 compute-0 podman[241149]: 2025-10-02 19:16:12.735805533 +0000 UTC m=+0.156925695 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 19:16:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:14 compute-0 sshd-session[241186]: Accepted publickey for zuul from 192.168.122.30 port 56526 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:16:14 compute-0 systemd-logind[793]: New session 46 of user zuul.
Oct 02 19:16:14 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 02 19:16:14 compute-0 sshd-session[241186]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:16:14 compute-0 ceph-mon[191910]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:15 compute-0 sudo[241339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-besdafbadkvmjjkhvngrdgacngxqkbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432574.8219838-22-86117317642054/AnsiballZ_file.py'
Oct 02 19:16:15 compute-0 sudo[241339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:15 compute-0 python3.9[241341]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:15 compute-0 sudo[241339]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:16 compute-0 podman[241446]: 2025-10-02 19:16:16.692240347 +0000 UTC m=+0.115374053 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:16:16 compute-0 ceph-mon[191910]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:16 compute-0 podman[241454]: 2025-10-02 19:16:16.715882918 +0000 UTC m=+0.124134063 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:16:16 compute-0 sudo[241533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyevmolxizkvjccpujuqplopvkpishlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432576.0162919-34-280231714611028/AnsiballZ_stat.py'
Oct 02 19:16:16 compute-0 sudo[241533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:16 compute-0 python3.9[241535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:16 compute-0 sudo[241533]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:17 compute-0 sudo[241611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokszzliwgktratawanlzbqxuosdejqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432576.0162919-34-280231714611028/AnsiballZ_file.py'
Oct 02 19:16:17 compute-0 sudo[241611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:17 compute-0 python3.9[241613]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:17 compute-0 sudo[241611]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:17 compute-0 sshd-session[241189]: Connection closed by 192.168.122.30 port 56526
Oct 02 19:16:17 compute-0 sshd-session[241186]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:16:17 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 02 19:16:17 compute-0 systemd[1]: session-46.scope: Consumed 2.405s CPU time.
Oct 02 19:16:17 compute-0 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Oct 02 19:16:17 compute-0 systemd-logind[793]: Removed session 46.
Oct 02 19:16:18 compute-0 ceph-mon[191910]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:20 compute-0 ceph-mon[191910]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:20 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 02 19:16:20 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 02 19:16:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 02 19:16:22 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 02 19:16:22 compute-0 podman[241639]: 2025-10-02 19:16:22.682888256 +0000 UTC m=+0.103109711 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:16:22 compute-0 podman[241640]: 2025-10-02 19:16:22.700148649 +0000 UTC m=+0.121573096 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:16:22 compute-0 ceph-mon[191910]: 10.b scrub starts
Oct 02 19:16:22 compute-0 ceph-mon[191910]: 10.b scrub ok
Oct 02 19:16:22 compute-0 ceph-mon[191910]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:23 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Oct 02 19:16:23 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Oct 02 19:16:23 compute-0 sshd-session[241679]: Accepted publickey for zuul from 192.168.122.30 port 48952 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:16:23 compute-0 systemd-logind[793]: New session 47 of user zuul.
Oct 02 19:16:23 compute-0 systemd[1]: Started Session 47 of User zuul.
Oct 02 19:16:23 compute-0 sshd-session[241679]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:16:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:23 compute-0 ceph-mon[191910]: 10.13 scrub starts
Oct 02 19:16:23 compute-0 ceph-mon[191910]: 10.13 scrub ok
Oct 02 19:16:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.436 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.437 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.437 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.440 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.447 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.448 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.449 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:16:24.450 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:16:24 compute-0 podman[241807]: 2025-10-02 19:16:24.691116515 +0000 UTC m=+0.151401220 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:16:24 compute-0 ceph-mon[191910]: 9.15 deep-scrub starts
Oct 02 19:16:24 compute-0 ceph-mon[191910]: 9.15 deep-scrub ok
Oct 02 19:16:24 compute-0 ceph-mon[191910]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:24 compute-0 python3.9[241845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:16:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:25 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 02 19:16:26 compute-0 ceph-osd[207106]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 02 19:16:26 compute-0 sudo[242006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qecxkzpvjiejydvfovhtbcqaspvzblax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432585.5756383-33-4641719451599/AnsiballZ_file.py'
Oct 02 19:16:26 compute-0 sudo[242006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:26 compute-0 python3.9[242008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:26 compute-0 sudo[242006]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:26 compute-0 ceph-mon[191910]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:27 compute-0 sudo[242181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwumrrpntzkhkwtfrmguymdqylfkebwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432586.6971552-41-189663339757161/AnsiballZ_stat.py'
Oct 02 19:16:27 compute-0 sudo[242181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:27 compute-0 python3.9[242183]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:27 compute-0 sudo[242181]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:27 compute-0 ceph-mon[191910]: 9.1f scrub starts
Oct 02 19:16:27 compute-0 ceph-mon[191910]: 9.1f scrub ok
Oct 02 19:16:28 compute-0 sudo[242259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadsjtbxjqixhajhbykhkshdwrsklvtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432586.6971552-41-189663339757161/AnsiballZ_file.py'
Oct 02 19:16:28 compute-0 sudo[242259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:28 compute-0 python3.9[242261]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.sip4ccw0 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:28 compute-0 sudo[242259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:28 compute-0 ceph-mon[191910]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:29 compute-0 sudo[242411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfuxrjlgoieagbotiktpccjmxkgdzqas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432589.0144863-61-101133214116604/AnsiballZ_stat.py'
Oct 02 19:16:29 compute-0 sudo[242411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:29 compute-0 python3.9[242413]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:29 compute-0 podman[157186]: time="2025-10-02T19:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:16:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:16:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6835 "" "Go-http-client/1.1"
Oct 02 19:16:29 compute-0 sudo[242411]: pam_unix(sudo:session): session closed for user root
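The two GETs logged by podman[157186] above are libpod REST API calls served over the podman service's unix socket (the same /run/podman/podman.sock later mounted into podman_exporter in this log). A stdlib-only sketch of the containers/json query, assuming that socket path:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to the libpod REST API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # path assumed from the exporter config
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])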
Oct 02 19:16:30 compute-0 sudo[242489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wghmwqrjzgshblavyggsyhbueavyyylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432589.0144863-61-101133214116604/AnsiballZ_file.py'
Oct 02 19:16:30 compute-0 sudo[242489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:30 compute-0 python3.9[242491]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.lvpu_kvx recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:30 compute-0 sudo[242489]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:30 compute-0 ceph-mon[191910]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:31 compute-0 sudo[242641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdcwmuexaeswtkyumpcbwzwgwejmkrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432590.7607143-74-38974379174898/AnsiballZ_file.py'
Oct 02 19:16:31 compute-0 sudo[242641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:31 compute-0 openstack_network_exporter[159337]: ERROR   19:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:16:31 compute-0 openstack_network_exporter[159337]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:31 compute-0 openstack_network_exporter[159337]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:31 compute-0 openstack_network_exporter[159337]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:16:31 compute-0 openstack_network_exporter[159337]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
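All of the openstack_network_exporter errors above trace to missing daemon control sockets: a compute node runs no ovn-northd, and the ovsdb-server/dpif queries need the *.ctl files the daemons create under their rundirs. A quick check against the stock OVS/OVN rundir locations (paths are the defaults; adjust if the deployment relocates them):

    import glob

    # "no control socket files found" usually means these globs come back empty:
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")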
Oct 02 19:16:31 compute-0 python3.9[242643]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:16:31 compute-0 sudo[242641]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:32 compute-0 sudo[242793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgfrgbtbsfhueqeldyzrswwzmvyzpnif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432591.7189858-82-203250122562856/AnsiballZ_stat.py'
Oct 02 19:16:32 compute-0 sudo[242793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:32 compute-0 python3.9[242795]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:32 compute-0 sudo[242793]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:32 compute-0 sudo[242871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtcvsaxhhyjdlcfhlwcjnrxdgbrxnyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432591.7189858-82-203250122562856/AnsiballZ_file.py'
Oct 02 19:16:32 compute-0 sudo[242871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:32 compute-0 ceph-mon[191910]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:32 compute-0 python3.9[242873]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:16:33 compute-0 sudo[242871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:16:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:33 compute-0 sudo[243023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvzzoiiykslbgfxxzqomfkhnmtauefb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432593.2272267-82-206466521284617/AnsiballZ_stat.py'
Oct 02 19:16:33 compute-0 sudo[243023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:33 compute-0 python3.9[243025]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:34 compute-0 sudo[243023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:34 compute-0 sudo[243101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqfeqslgpsxgkyggbbnhdfyhzrilgdbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432593.2272267-82-206466521284617/AnsiballZ_file.py'
Oct 02 19:16:34 compute-0 sudo[243101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:34 compute-0 python3.9[243103]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:16:34 compute-0 sudo[243101]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:34 compute-0 ceph-mon[191910]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:35 compute-0 sudo[243253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hufzdsitnbovodyfjpbmkifruwktkygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432594.9687595-105-153742929611979/AnsiballZ_file.py'
Oct 02 19:16:35 compute-0 sudo[243253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:35 compute-0 python3.9[243255]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:35 compute-0 sudo[243253]: pam_unix(sudo:session): session closed for user root
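The mode=420 in the file task above is not a mistake: a playbook mode written as unquoted 0644 is parsed by YAML as octal, arriving at the module as the decimal integer 420, and 420 decimal is exactly 0o644 (rw-r--r--), so the directory still gets the intended permissions. One line verifies the equivalence:

    # Decimal 420 == octal 644; this is why Ansible logs mode=420 for an unquoted mode: 0644.
    assert oct(420) == "0o644"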
Oct 02 19:16:36 compute-0 sudo[243405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bucrazgdqdfiybsyynovdioyxlnmtrba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432595.9431965-113-203401998668433/AnsiballZ_stat.py'
Oct 02 19:16:36 compute-0 sudo[243405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:36 compute-0 python3.9[243407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:36 compute-0 sudo[243405]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:36 compute-0 ceph-mon[191910]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:37 compute-0 sudo[243483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdgtcekxkzywlrwgdqqwqibapxvouye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432595.9431965-113-203401998668433/AnsiballZ_file.py'
Oct 02 19:16:37 compute-0 sudo[243483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:37 compute-0 python3.9[243485]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:37 compute-0 sudo[243483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:38 compute-0 sudo[243635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxmyctlapohqsfmebxtodmhhyrlicgyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432597.6256993-125-31484205219135/AnsiballZ_stat.py'
Oct 02 19:16:38 compute-0 sudo[243635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:38 compute-0 python3.9[243637]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:38 compute-0 sudo[243635]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:38 compute-0 sudo[243713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eibwkcnzpjaeqmcxumblntprpedvjnmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432597.6256993-125-31484205219135/AnsiballZ_file.py'
Oct 02 19:16:38 compute-0 sudo[243713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:38 compute-0 ceph-mon[191910]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:38 compute-0 python3.9[243715]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:39 compute-0 sudo[243713]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:40 compute-0 sudo[243865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuplczxggkhpihgopclgqirwrlucgkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432599.2621236-137-174021560640117/AnsiballZ_systemd.py'
Oct 02 19:16:40 compute-0 sudo[243865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:40 compute-0 python3.9[243867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:16:40 compute-0 systemd[1]: Reloading.
Oct 02 19:16:40 compute-0 systemd-rc-local-generator[243890]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:16:40 compute-0 systemd-sysv-generator[243895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:16:40 compute-0 sudo[243865]: pam_unix(sudo:session): session closed for user root
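The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is what triggers the systemd[1]: Reloading. line and the generator chatter that follows. In shell terms it is roughly the three systemctl steps below, shown as a sketch rather than the module's implementation:

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],                       # daemon_reload=True
                ["systemctl", "enable", "edpm-container-shutdown"],   # enabled=True
                ["systemctl", "start", "edpm-container-shutdown"]):   # state=started
        subprocess.run(cmd, check=True)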
Oct 02 19:16:40 compute-0 ceph-mon[191910]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:41 compute-0 sudo[244055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwufojwsshttgrgnevqxgdcszldscei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432601.2208774-145-197173666596116/AnsiballZ_stat.py'
Oct 02 19:16:41 compute-0 sudo[244055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:41 compute-0 python3.9[244057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:41 compute-0 sudo[244055]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:42 compute-0 sudo[244133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkiypyiouivbquxrlybsalnjphnxhfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432601.2208774-145-197173666596116/AnsiballZ_file.py'
Oct 02 19:16:42 compute-0 sudo[244133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:42 compute-0 python3.9[244135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:42 compute-0 sudo[244133]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:43 compute-0 ceph-mon[191910]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:43 compute-0 sudo[244311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmedwasozbjrhqvqrfgznbypobfrxwfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432602.9407814-157-254675747018730/AnsiballZ_stat.py'
Oct 02 19:16:43 compute-0 sudo[244311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:43 compute-0 podman[244259]: 2025-10-02 19:16:43.528193133 +0000 UTC m=+0.117528713 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct 02 19:16:43 compute-0 podman[244260]: 2025-10-02 19:16:43.533511914 +0000 UTC m=+0.123632154 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:16:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:43 compute-0 python3.9[244327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:43 compute-0 sudo[244311]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:44 compute-0 sudo[244406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atopxmcelpdlwjnqyhorpgwjzzthnctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432602.9407814-157-254675747018730/AnsiballZ_file.py'
Oct 02 19:16:44 compute-0 sudo[244406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:44 compute-0 python3.9[244408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:44 compute-0 sudo[244406]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:45 compute-0 ceph-mon[191910]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:45 compute-0 sudo[244558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnlvcmletyvlxuenmfdnjzcajkhjyvxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432604.7635584-169-167542472524121/AnsiballZ_systemd.py'
Oct 02 19:16:45 compute-0 sudo[244558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:45 compute-0 python3.9[244560]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:16:45 compute-0 systemd[1]: Reloading.
Oct 02 19:16:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:45 compute-0 systemd-sysv-generator[244590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:16:45 compute-0 systemd-rc-local-generator[244584]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:16:46 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:16:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:16:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:16:46 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:16:46 compute-0 sudo[244558]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:47 compute-0 ceph-mon[191910]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:47 compute-0 podman[244725]: 2025-10-02 19:16:47.384974405 +0000 UTC m=+0.084001774 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:16:47 compute-0 podman[244726]: 2025-10-02 19:16:47.42103103 +0000 UTC m=+0.116068734 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:16:47 compute-0 rsyslogd[187702]: imjournal: 1620 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 19:16:47 compute-0 python3.9[244789]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:16:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:47 compute-0 network[244836]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:16:47 compute-0 sudo[244797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:47 compute-0 network[244838]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:16:47 compute-0 sudo[244797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:47 compute-0 network[244839]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:16:47 compute-0 sudo[244797]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:48 compute-0 sudo[244844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:16:48 compute-0 sudo[244844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:48 compute-0 sudo[244844]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:48 compute-0 sudo[244872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:48 compute-0 sudo[244872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:48 compute-0 sudo[244872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:48 compute-0 sudo[244902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:16:48 compute-0 sudo[244902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:49 compute-0 ceph-mon[191910]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:49 compute-0 sudo[244902]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev efd2df6b-af0e-4391-b880-084148cdb0f9 does not exist
Oct 02 19:16:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d4018d6f-ffb4-4ef9-b812-cab72f81666f does not exist
Oct 02 19:16:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 795057fd-60af-479d-8e27-750a2f554785 does not exist
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:16:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:16:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
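The handle_command entries above are cephadm's mgr module interrogating the mon before provisioning OSDs: a minimal conf, the admin and bootstrap-osd keyrings, and any destroyed OSDs left in the tree. The osd tree query is reproducible from the CLI given client.admin credentials; a sketch, assuming the usual JSON layout keyed under "nodes":

    import json, subprocess

    # Same query as the mon_command above: osd tree filtered to destroyed OSDs.
    out = subprocess.run(["ceph", "osd", "tree", "destroyed", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    tree = json.loads(out)
    print([n["name"] for n in tree.get("nodes", [])])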
Oct 02 19:16:49 compute-0 sudo[244988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:49 compute-0 sudo[244988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:49 compute-0 sudo[244988]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:49 compute-0 sudo[245017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:16:49 compute-0 sudo[245017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:50 compute-0 sudo[245017]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:50 compute-0 sudo[245046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:50 compute-0 sudo[245046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:50 compute-0 sudo[245046]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:50 compute-0 sudo[245075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:16:50 compute-0 sudo[245075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:16:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.764636705 +0000 UTC m=+0.062928767 container create 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.739241983 +0000 UTC m=+0.037534075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:50 compute-0 systemd[1]: Started libpod-conmon-5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a.scope.
Oct 02 19:16:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.896803574 +0000 UTC m=+0.195095706 container init 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.908079043 +0000 UTC m=+0.206371095 container start 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:16:50 compute-0 dreamy_perlman[245169]: 167 167
Oct 02 19:16:50 compute-0 systemd[1]: libpod-5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a.scope: Deactivated successfully.
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.917671667 +0000 UTC m=+0.215963759 container attach 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:16:50 compute-0 podman[245149]: 2025-10-02 19:16:50.919620109 +0000 UTC m=+0.217912201 container died 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-de3e27ea50183f167a8350b3628c6345bdcda6ae5a6ed2f65c84364a7c30dc56-merged.mount: Deactivated successfully.
Oct 02 19:16:51 compute-0 podman[245149]: 2025-10-02 19:16:51.048014098 +0000 UTC m=+0.346306160 container remove 5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:16:51 compute-0 systemd[1]: libpod-conmon-5d72bf7e2485cdc17e22ca13216a0137ad43074db18c8788f93333582909a98a.scope: Deactivated successfully.
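The create → init → start → attach → died → remove sequence for dreamy_perlman above is a single throwaway container run, and the "167 167" it prints is the ceph UID/GID that cephadm probes from the image before invoking ceph-volume. A likely-equivalent reproduction (the stat target is an assumption based on cephadm's uid/gid probe, not taken from this log):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Throwaway run like the dreamy_perlman container above; prints "167 167".
    out = subprocess.run(["podman", "run", "--rm", image,
                          "stat", "-c", "%u %g", "/var/lib/ceph"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())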
Oct 02 19:16:51 compute-0 podman[245203]: 2025-10-02 19:16:51.282783504 +0000 UTC m=+0.073333303 container create 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 19:16:51 compute-0 ceph-mon[191910]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:51 compute-0 podman[245203]: 2025-10-02 19:16:51.249568834 +0000 UTC m=+0.040118723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:51 compute-0 systemd[1]: Started libpod-conmon-492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698.scope.
Oct 02 19:16:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:51 compute-0 podman[245203]: 2025-10-02 19:16:51.458578287 +0000 UTC m=+0.249128186 container init 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:16:51 compute-0 podman[245203]: 2025-10-02 19:16:51.480667642 +0000 UTC m=+0.271217461 container start 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:16:51 compute-0 podman[245203]: 2025-10-02 19:16:51.48774926 +0000 UTC m=+0.278299089 container attach 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:16:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:52 compute-0 fervent_elgamal[245224]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:16:52 compute-0 fervent_elgamal[245224]: --> relative data size: 1.0
Oct 02 19:16:52 compute-0 fervent_elgamal[245224]: --> All data devices are unavailable
Oct 02 19:16:52 compute-0 systemd[1]: libpod-492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698.scope: Deactivated successfully.
Oct 02 19:16:52 compute-0 systemd[1]: libpod-492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698.scope: Consumed 1.182s CPU time.
Oct 02 19:16:52 compute-0 podman[245203]: 2025-10-02 19:16:52.72886629 +0000 UTC m=+1.519416179 container died 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c37d977ab90598e67be35e28fccf266521f6470b5118302461ab38070762efe1-merged.mount: Deactivated successfully.
Oct 02 19:16:52 compute-0 podman[245203]: 2025-10-02 19:16:52.837071714 +0000 UTC m=+1.627621513 container remove 492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elgamal, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:16:52 compute-0 systemd[1]: libpod-conmon-492038b808f0738be0f43a232bd278d2d264b1bacef604a952c51afee939b698.scope: Deactivated successfully.
Oct 02 19:16:52 compute-0 sudo[245075]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:52 compute-0 podman[245310]: 2025-10-02 19:16:52.893292423 +0000 UTC m=+0.116698931 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:16:52 compute-0 podman[245308]: 2025-10-02 19:16:52.923366479 +0000 UTC m=+0.144814445 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:16:52 compute-0 sudo[245360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:52 compute-0 sudo[245360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:53 compute-0 sudo[245360]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:53 compute-0 sudo[245385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:16:53 compute-0 sudo[245385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:53 compute-0 sudo[245385]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:53 compute-0 sudo[245433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:53 compute-0 sudo[245433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:53 compute-0 sudo[245433]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:53 compute-0 ceph-mon[191910]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:53 compute-0 sudo[245487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:16:53 compute-0 sudo[245487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:53 compute-0 sudo[245607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntimutofgdwlatfcsozdkoqrvkizbycj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432613.1460884-195-180544029726211/AnsiballZ_stat.py'
Oct 02 19:16:53 compute-0 sudo[245607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:53 compute-0 python3.9[245611]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:53 compute-0 podman[245624]: 2025-10-02 19:16:53.912826326 +0000 UTC m=+0.078629002 container create a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:16:53 compute-0 sudo[245607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:53 compute-0 podman[245624]: 2025-10-02 19:16:53.889446277 +0000 UTC m=+0.055248993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:53 compute-0 systemd[1]: Started libpod-conmon-a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459.scope.
Oct 02 19:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:54 compute-0 podman[245624]: 2025-10-02 19:16:54.047935384 +0000 UTC m=+0.213738050 container init a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:16:54 compute-0 podman[245624]: 2025-10-02 19:16:54.066740901 +0000 UTC m=+0.232543597 container start a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:16:54 compute-0 podman[245624]: 2025-10-02 19:16:54.073457119 +0000 UTC m=+0.239259785 container attach a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:16:54 compute-0 boring_dhawan[245654]: 167 167
Oct 02 19:16:54 compute-0 systemd[1]: libpod-a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459.scope: Deactivated successfully.
Oct 02 19:16:54 compute-0 podman[245624]: 2025-10-02 19:16:54.079127649 +0000 UTC m=+0.244930345 container died a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-40ca2be8dd23e7c2a1a52abf124287754e8ba840082d7d4b28372399957e46af-merged.mount: Deactivated successfully.
Oct 02 19:16:54 compute-0 podman[245624]: 2025-10-02 19:16:54.170653253 +0000 UTC m=+0.336455939 container remove a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_dhawan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:16:54 compute-0 systemd[1]: libpod-conmon-a0f7137983b3706e51af82a7178acff47277c8b2863ed32f94dcf7f4d8011459.scope: Deactivated successfully.
Oct 02 19:16:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:54 compute-0 sudo[245732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hozftmopasirmhgkexpypsonkidsdkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432613.1460884-195-180544029726211/AnsiballZ_file.py'
Oct 02 19:16:54 compute-0 sudo[245732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:54 compute-0 podman[245740]: 2025-10-02 19:16:54.443464696 +0000 UTC m=+0.092991833 container create b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:16:54 compute-0 podman[245740]: 2025-10-02 19:16:54.417839477 +0000 UTC m=+0.067366644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:54 compute-0 python3.9[245737]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:54 compute-0 systemd[1]: Started libpod-conmon-b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3.scope.
Oct 02 19:16:54 compute-0 sudo[245732]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48c5ac4229823b49b62ef6c333f25f40500a79d26379ad228ce378aab6d6b1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48c5ac4229823b49b62ef6c333f25f40500a79d26379ad228ce378aab6d6b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48c5ac4229823b49b62ef6c333f25f40500a79d26379ad228ce378aab6d6b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48c5ac4229823b49b62ef6c333f25f40500a79d26379ad228ce378aab6d6b1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:54 compute-0 podman[245740]: 2025-10-02 19:16:54.613760685 +0000 UTC m=+0.263287922 container init b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:16:54 compute-0 podman[245740]: 2025-10-02 19:16:54.629682786 +0000 UTC m=+0.279209923 container start b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:16:54 compute-0 podman[245740]: 2025-10-02 19:16:54.634597456 +0000 UTC m=+0.284124593 container attach b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:16:55 compute-0 ceph-mon[191910]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:55 compute-0 sudo[245924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbluitrzbwanywkdsbjdfdxceavqzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432614.8622234-208-234561955632181/AnsiballZ_file.py'
Oct 02 19:16:55 compute-0 sudo[245924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:55 compute-0 sad_ellis[245756]: {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     "0": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "devices": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "/dev/loop3"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             ],
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_name": "ceph_lv0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_size": "21470642176",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "name": "ceph_lv0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "tags": {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_name": "ceph",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.crush_device_class": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.encrypted": "0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_id": "0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.vdo": "0"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             },
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "vg_name": "ceph_vg0"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         }
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     ],
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     "1": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "devices": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "/dev/loop4"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             ],
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_name": "ceph_lv1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_size": "21470642176",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "name": "ceph_lv1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "tags": {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_name": "ceph",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.crush_device_class": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.encrypted": "0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_id": "1",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.vdo": "0"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             },
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "vg_name": "ceph_vg1"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         }
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     ],
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     "2": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "devices": [
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "/dev/loop5"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             ],
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_name": "ceph_lv2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_size": "21470642176",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "name": "ceph_lv2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "tags": {
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.cluster_name": "ceph",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.crush_device_class": "",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.encrypted": "0",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osd_id": "2",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:                 "ceph.vdo": "0"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             },
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "type": "block",
Oct 02 19:16:55 compute-0 sad_ellis[245756]:             "vg_name": "ceph_vg2"
Oct 02 19:16:55 compute-0 sad_ellis[245756]:         }
Oct 02 19:16:55 compute-0 sad_ellis[245756]:     ]
Oct 02 19:16:55 compute-0 sad_ellis[245756]: }
Oct 02 19:16:55 compute-0 podman[245884]: 2025-10-02 19:16:55.436427635 +0000 UTC m=+0.150243669 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:16:55 compute-0 systemd[1]: libpod-b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3.scope: Deactivated successfully.
Oct 02 19:16:55 compute-0 podman[245740]: 2025-10-02 19:16:55.467332393 +0000 UTC m=+1.116859550 container died b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b48c5ac4229823b49b62ef6c333f25f40500a79d26379ad228ce378aab6d6b1a-merged.mount: Deactivated successfully.
Oct 02 19:16:55 compute-0 podman[245740]: 2025-10-02 19:16:55.546472268 +0000 UTC m=+1.195999395 container remove b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:16:55 compute-0 systemd[1]: libpod-conmon-b4f30945f2e2fb9975cfe80b2d48ddf2d78ff688138137803d259bc511faa1a3.scope: Deactivated successfully.
Oct 02 19:16:55 compute-0 sudo[245487]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:55 compute-0 python3.9[245931]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:55 compute-0 sudo[245924]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:55 compute-0 sudo[245945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:55 compute-0 sudo[245945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:55 compute-0 sudo[245945]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:55 compute-0 sudo[245973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:16:55 compute-0 sudo[245973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:55 compute-0 sudo[245973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:55 compute-0 sudo[246019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:55 compute-0 sudo[246019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:55 compute-0 sudo[246019]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:56 compute-0 sudo[246065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:16:56 compute-0 sudo[246065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:56 compute-0 sudo[246240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvztyxzflaaohtlzudobqvlhmrfrfhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432615.9827938-216-146403623245105/AnsiballZ_stat.py'
Oct 02 19:16:56 compute-0 sudo[246240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.546197247 +0000 UTC m=+0.088932715 container create 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:16:56 compute-0 systemd[1]: Started libpod-conmon-161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f.scope.
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.520774534 +0000 UTC m=+0.063510032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.67281355 +0000 UTC m=+0.215549118 container init 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.691962817 +0000 UTC m=+0.234698285 container start 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:16:56 compute-0 peaceful_chebyshev[246249]: 167 167
Oct 02 19:16:56 compute-0 systemd[1]: libpod-161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f.scope: Deactivated successfully.
Oct 02 19:16:56 compute-0 conmon[246249]: conmon 161f44cffb989603cd1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f.scope/container/memory.events
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.70909641 +0000 UTC m=+0.251831908 container attach 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.710271551 +0000 UTC m=+0.253007039 container died 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:16:56 compute-0 python3.9[246246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8c43f9158280874e290bc4868e93ba0057336b28f9b5c8255d37a2cded92b09-merged.mount: Deactivated successfully.
Oct 02 19:16:56 compute-0 sudo[246240]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:56 compute-0 podman[246216]: 2025-10-02 19:16:56.835129297 +0000 UTC m=+0.377864795 container remove 161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:16:56 compute-0 systemd[1]: libpod-conmon-161f44cffb989603cd1ca5fab439f1f58eabd00ba02304fdfa08be575bdf4c9f.scope: Deactivated successfully.
Oct 02 19:16:57 compute-0 podman[246298]: 2025-10-02 19:16:57.063630177 +0000 UTC m=+0.073597680 container create 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:16:57 compute-0 systemd[1]: Started libpod-conmon-599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b.scope.
Oct 02 19:16:57 compute-0 podman[246298]: 2025-10-02 19:16:57.036745325 +0000 UTC m=+0.046712848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:16:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc65b1710237db345b98946956fe1d396fd5f791c5e6cb91ce85593a7a2b14aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc65b1710237db345b98946956fe1d396fd5f791c5e6cb91ce85593a7a2b14aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc65b1710237db345b98946956fe1d396fd5f791c5e6cb91ce85593a7a2b14aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc65b1710237db345b98946956fe1d396fd5f791c5e6cb91ce85593a7a2b14aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:16:57 compute-0 sudo[246368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzghwbriktbehnqlgvbsccwpqqqgjrws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432615.9827938-216-146403623245105/AnsiballZ_file.py'
Oct 02 19:16:57 compute-0 sudo[246368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:57 compute-0 podman[246298]: 2025-10-02 19:16:57.190828585 +0000 UTC m=+0.200796118 container init 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:16:57 compute-0 podman[246298]: 2025-10-02 19:16:57.206665244 +0000 UTC m=+0.216632747 container start 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:16:57 compute-0 podman[246298]: 2025-10-02 19:16:57.21181939 +0000 UTC m=+0.221786883 container attach 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:16:57 compute-0 ceph-mon[191910]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:57 compute-0 python3.9[246370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:16:57 compute-0 sudo[246368]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]: {
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_id": 1,
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "type": "bluestore"
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     },
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_id": 2,
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "type": "bluestore"
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     },
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_id": 0,
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:         "type": "bluestore"
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]:     }
Oct 02 19:16:58 compute-0 pedantic_galileo[246361]: }
Oct 02 19:16:58 compute-0 systemd[1]: libpod-599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b.scope: Deactivated successfully.
Oct 02 19:16:58 compute-0 podman[246298]: 2025-10-02 19:16:58.391515924 +0000 UTC m=+1.401483447 container died 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:16:58 compute-0 systemd[1]: libpod-599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b.scope: Consumed 1.188s CPU time.
Oct 02 19:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc65b1710237db345b98946956fe1d396fd5f791c5e6cb91ce85593a7a2b14aa-merged.mount: Deactivated successfully.
Oct 02 19:16:58 compute-0 podman[246298]: 2025-10-02 19:16:58.473472134 +0000 UTC m=+1.483439657 container remove 599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:16:58 compute-0 systemd[1]: libpod-conmon-599b423eda169e35771df9e8678fa5531e94405dd0e1af93993f38a5e450519b.scope: Deactivated successfully.
Oct 02 19:16:58 compute-0 sudo[246065]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:16:58 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:16:58 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:58 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3d79e37e-b18e-4c7c-83d4-d8d39fae1dfa does not exist
Oct 02 19:16:58 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 393cb1ab-24b4-48c1-9d00-edb408e97072 does not exist
Oct 02 19:16:58 compute-0 sudo[246563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iduptmlpqkiudociyyujqjjqwzythqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432617.847664-231-252263459502965/AnsiballZ_timezone.py'
Oct 02 19:16:58 compute-0 sudo[246563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:58 compute-0 sudo[246565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:16:58 compute-0 sudo[246565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:58 compute-0 sudo[246565]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:58 compute-0 sudo[246591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:16:58 compute-0 sudo[246591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:16:58 compute-0 sudo[246591]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:58 compute-0 python3.9[246569]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 19:16:58 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 19:16:59 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 19:16:59 compute-0 sudo[246563]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:16:59 compute-0 ceph-mon[191910]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:16:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:16:59 compute-0 podman[157186]: time="2025-10-02T19:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:16:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:16:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6835 "" "Go-http-client/1.1"
Oct 02 19:17:00 compute-0 sudo[246769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csclajfhyocpzfbaxnpvrjutjxrjqidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432619.5351527-240-142755603365340/AnsiballZ_file.py'
Oct 02 19:17:00 compute-0 sudo[246769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:00 compute-0 python3.9[246771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:00 compute-0 sudo[246769]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:01 compute-0 sudo[246921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrzmeszjalpilrklwmorwostuarxmakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432620.6302328-248-279239388932970/AnsiballZ_stat.py'
Oct 02 19:17:01 compute-0 sudo[246921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:01 compute-0 ceph-mon[191910]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: ERROR   19:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: ERROR   19:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: ERROR   19:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:17:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:17:01 compute-0 python3.9[246923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:01 compute-0 sudo[246921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:01 compute-0 sudo[246999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svwxwilzrjfbdhxhhbsnpaliirzbgoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432620.6302328-248-279239388932970/AnsiballZ_file.py'
Oct 02 19:17:01 compute-0 sudo[246999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:02 compute-0 python3.9[247001]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:02 compute-0 sudo[246999]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:03 compute-0 sudo[247151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbownbzybbuakorfjeqebnljvyltqmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432622.5274668-260-188643585984004/AnsiballZ_stat.py'
Oct 02 19:17:03 compute-0 sudo[247151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:03 compute-0 python3.9[247153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:03 compute-0 sudo[247151]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:03 compute-0 ceph-mon[191910]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:17:03
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta']
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:03 compute-0 sudo[247229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekhxnnkjtarivahwznugvqzqgsagogbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432622.5274668-260-188643585984004/AnsiballZ_file.py'
Oct 02 19:17:03 compute-0 sudo[247229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:17:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:17:03 compute-0 python3.9[247231]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r1yke_ni recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:03 compute-0 sudo[247229]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:04 compute-0 sshd-session[187759]: Received disconnect from 38.102.83.68 port 51164:11: disconnected by user
Oct 02 19:17:04 compute-0 sshd-session[187759]: Disconnected from user zuul 38.102.83.68 port 51164
Oct 02 19:17:04 compute-0 sshd-session[187756]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:17:04 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Oct 02 19:17:04 compute-0 systemd[1]: session-25.scope: Consumed 2min 49.771s CPU time.
Oct 02 19:17:04 compute-0 systemd-logind[793]: Session 25 logged out. Waiting for processes to exit.
Oct 02 19:17:04 compute-0 systemd-logind[793]: Removed session 25.
Oct 02 19:17:04 compute-0 sudo[247381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkmtqbrvhpyxpmkozpxjkarqbtokieqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432624.1197495-272-137991133692536/AnsiballZ_stat.py'
Oct 02 19:17:04 compute-0 sudo[247381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:04 compute-0 python3.9[247383]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:04 compute-0 sudo[247381]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:05 compute-0 sudo[247459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efougnhrtbhuxqaufiqhjrjcbwqthfok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432624.1197495-272-137991133692536/AnsiballZ_file.py'
Oct 02 19:17:05 compute-0 sudo[247459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:05 compute-0 ceph-mon[191910]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:05 compute-0 python3.9[247461]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:05 compute-0 sudo[247459]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:06 compute-0 sudo[247611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crtqezqhangnfalqrjibmjffwuuzrapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432625.995087-285-4663588923203/AnsiballZ_command.py'
Oct 02 19:17:06 compute-0 sudo[247611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:07 compute-0 python3.9[247613]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:17:07 compute-0 sudo[247611]: pam_unix(sudo:session): session closed for user root
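[note] The `nft -j list ruleset` task above snapshots the kernel's current nftables state as JSON so the role can compare it with the desired EDPM rules. A minimal sketch of the same capture outside Ansible (assumes the nftables `nft` binary is installed and the caller is root):

    #!/usr/bin/python3.9
    # Capture the live nftables ruleset as JSON, as the logged task does.
    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    ruleset = json.loads(out)                      # top-level key: "nftables"
    tables = [e["table"]["name"] for e in ruleset["nftables"] if "table" in e]
    print("tables:", tables)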
Oct 02 19:17:07 compute-0 ceph-mon[191910]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:07 compute-0 sudo[247764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqigrafbqpagwhlsotjqcjajjuyuxwvj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432627.3159518-293-185383741784610/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:17:07 compute-0 sudo[247764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:08 compute-0 python3[247766]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:17:08 compute-0 sudo[247764]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:09 compute-0 sudo[247916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifivypygvpmpxkjobntdxhpzmjzqlmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432628.577008-301-172719975548736/AnsiballZ_stat.py'
Oct 02 19:17:09 compute-0 sudo[247916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:09 compute-0 python3.9[247918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:09 compute-0 sudo[247916]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:09 compute-0 ceph-mon[191910]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:09 compute-0 sudo[247994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agybbjedrxrzgvapujariihltfvsjtxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432628.577008-301-172719975548736/AnsiballZ_file.py'
Oct 02 19:17:09 compute-0 sudo[247994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:09 compute-0 python3.9[247996]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:10 compute-0 sudo[247994]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:10 compute-0 sudo[248146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfsrhofpehjzyeaywdlveyjeqmsiayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432630.293232-313-6746502038037/AnsiballZ_stat.py'
Oct 02 19:17:10 compute-0 sudo[248146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:11 compute-0 python3.9[248148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:11 compute-0 sudo[248146]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:11 compute-0 ceph-mon[191910]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:11 compute-0 sudo[248224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkoqqjmnpjnjdrnxamtwasjszqqfzvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432630.293232-313-6746502038037/AnsiballZ_file.py'
Oct 02 19:17:11 compute-0 sudo[248224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:11 compute-0 python3.9[248226]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:11 compute-0 sudo[248224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:17:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
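[note] The pg_autoscaler figures above are internally consistent: every logged `pg target` equals usage ratio x bias x 300, where the factor 300 is an inference from the numbers (it matches 3 OSDs at the default mon_target_pg_per_osd of 100, not something the log states), and the result is then quantized to a power of two. A quick check against the logged values:

    # Reproduce the pg_autoscaler targets logged above. The factor 300 is
    # an inference (3 OSDs x mon_target_pg_per_osd=100), not stated in the log.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),   # -> 0.0021557...
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),   # -> 0.0006104...
        ".rgw.root":          (2.5436283128215145e-07, 1.0),  # -> 7.6308...e-05
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),  # -> 0.0001526...
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * 300}")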
Oct 02 19:17:12 compute-0 sudo[248376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxlhbsbpkvovngvuzjqecfoatktammqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432631.9730847-325-195240008201039/AnsiballZ_stat.py'
Oct 02 19:17:12 compute-0 sudo[248376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:12 compute-0 python3.9[248378]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:12 compute-0 sudo[248376]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:13 compute-0 sudo[248454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyesmlpjimxswbraqjlmehxzutgcvtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432631.9730847-325-195240008201039/AnsiballZ_file.py'
Oct 02 19:17:13 compute-0 sudo[248454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:13 compute-0 python3.9[248456]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:13 compute-0 sudo[248454]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:13 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:17:13 compute-0 ceph-mon[191910]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:14 compute-0 podman[248582]: 2025-10-02 19:17:14.117873086 +0000 UTC m=+0.073085576 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
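[note] The health_status=healthy events in these podman lines come from each container's configured healthcheck: podman periodically runs the 'test' command (here the /openstack/healthcheck helper bind-mounted into the container) and records the result. The same probe can be run on demand with the standard podman CLI; a sketch, using the container name from the log:

    # Run a container's configured healthcheck once, as podman's timer does
    # for the health_status events logged here (run as root).
    import subprocess

    res = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"],
                         capture_output=True, text=True)
    ok = res.returncode == 0
    print("healthy" if ok else f"unhealthy: {res.stdout or res.stderr}")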
Oct 02 19:17:14 compute-0 sudo[248637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrvvamxayoxzjuppajaxixuteyqzvmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432633.5982347-337-148571189746126/AnsiballZ_stat.py'
Oct 02 19:17:14 compute-0 sudo[248637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:14 compute-0 podman[248581]: 2025-10-02 19:17:14.137532346 +0000 UTC m=+0.085989127 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:17:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:14 compute-0 python3.9[248649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:14 compute-0 sudo[248637]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:14 compute-0 sudo[248725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsnocrrtoiecbcumohmdtcioshrcmiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432633.5982347-337-148571189746126/AnsiballZ_file.py'
Oct 02 19:17:14 compute-0 sudo[248725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:15 compute-0 python3.9[248727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:15 compute-0 sudo[248725]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:15 compute-0 ceph-mon[191910]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:15 compute-0 sudo[248877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnempsfpxfsflljqxlgnyyrostlpkgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432635.3211627-349-24177536931888/AnsiballZ_stat.py'
Oct 02 19:17:15 compute-0 sudo[248877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:16 compute-0 python3.9[248879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:16 compute-0 sudo[248877]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:16 compute-0 sudo[248955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqfreesszfoqakhegtwpizextfzdtxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432635.3211627-349-24177536931888/AnsiballZ_file.py'
Oct 02 19:17:16 compute-0 sudo[248955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:16 compute-0 python3.9[248957]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:16 compute-0 sudo[248955]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:17 compute-0 ceph-mon[191910]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:17 compute-0 sudo[249138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-artlpokekhjchlrqarjdcuouffftaliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432637.1616528-362-29535619047351/AnsiballZ_command.py'
Oct 02 19:17:17 compute-0 podman[249081]: 2025-10-02 19:17:17.662938005 +0000 UTC m=+0.091470763 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:17:17 compute-0 sudo[249138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:17 compute-0 podman[249082]: 2025-10-02 19:17:17.721166577 +0000 UTC m=+0.138436976 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 19:17:17 compute-0 python3.9[249146]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:17:17 compute-0 sudo[249138]: pam_unix(sudo:session): session closed for user root
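[note] The check above concatenates the five EDPM nft files in dependency order (chains before flushes, rules, and jump definitions) and feeds them to `nft -c -f -`, a dry run that parses everything without touching the kernel, so a rendering error is caught before any rule is applied. A re-run of the same validation, with the paths taken from the logged command:

    # Dry-run-validate the concatenated EDPM nftables files, mirroring the
    # logged "cat ... | nft -c -f -" task. Order matters: chains come first.
    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    payload = "".join(open(f).read() for f in FILES)
    res = subprocess.run(["nft", "-c", "-f", "-"], input=payload,
                         text=True, capture_output=True)
    print("ruleset OK" if res.returncode == 0 else res.stderr)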
Oct 02 19:17:18 compute-0 sudo[249304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytulbazfuyjuniehtysyvcyhljehshao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432638.196764-370-239958026013549/AnsiballZ_blockinfile.py'
Oct 02 19:17:18 compute-0 sudo[249304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:19 compute-0 python3.9[249306]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:19 compute-0 sudo[249304]: pam_unix(sudo:session): session closed for user root
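[note] The blockinfile task above pins the four include lines between '# BEGIN ANSIBLE MANAGED BLOCK' / '# END ANSIBLE MANAGED BLOCK' markers in /etc/sysconfig/nftables.conf, and its validate=nft -c -f %s option means the edited copy is only installed if it still parses. A stripped-down sketch of the marker logic (a hypothetical helper, not the module itself):

    # Replace (or append) a marker-delimited block in a config file,
    # approximating what ansible.builtin.blockinfile does. Hypothetical helper.
    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"

    def set_block(text: str, block: str) -> str:
        lines = text.splitlines()
        try:
            i, j = lines.index(BEGIN), lines.index(END)
            lines[i + 1:j] = block.splitlines()         # rewrite managed block
        except ValueError:
            lines += [BEGIN, *block.splitlines(), END]  # first run: append one
        return "\n".join(lines) + "\n"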
Oct 02 19:17:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:19 compute-0 ceph-mon[191910]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:19 compute-0 sudo[249457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aggtwofjrlldfsnzkenhwdfprlznnijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432639.4035375-379-56534296518204/AnsiballZ_file.py'
Oct 02 19:17:19 compute-0 sudo[249457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:20 compute-0 python3.9[249459]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:20 compute-0 sudo[249457]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:20 compute-0 sudo[249609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucafgrjdgglzvpvwjitgrnvgkyhocnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432640.3843427-379-183910389310207/AnsiballZ_file.py'
Oct 02 19:17:20 compute-0 sudo[249609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:21 compute-0 python3.9[249611]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:21 compute-0 sudo[249609]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:21 compute-0 ceph-mon[191910]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:21 compute-0 sudo[249761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqnlgczfkujdmtwlbfbqliaiivampvuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432641.3192275-394-202196571728342/AnsiballZ_mount.py'
Oct 02 19:17:21 compute-0 sudo[249761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:22 compute-0 python3.9[249763]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 19:17:22 compute-0 sudo[249761]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:22 compute-0 sudo[249913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsjrjdqewpkhbqwiprmnvonxhztnkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432642.3644524-394-208596297775052/AnsiballZ_mount.py'
Oct 02 19:17:22 compute-0 sudo[249913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:23 compute-0 python3.9[249915]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 19:17:23 compute-0 sudo[249913]: pam_unix(sudo:session): session closed for user root
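[note] The two ansible.posix.mount tasks above give the node one hugetlbfs mount point per page size; state=mounted both mounts the filesystem immediately and persists the entry to /etc/fstab. The equivalent manual steps (run as root; the hugepage pools themselves must be reserved separately, e.g. via kernel boot parameters):

    # Mount the per-size hugetlbfs filesystems as the logged tasks do.
    import subprocess

    for path, size in (("/dev/hugepages1G", "1G"), ("/dev/hugepages2M", "2M")):
        subprocess.run(["mount", "-t", "hugetlbfs", "-o", f"pagesize={size}",
                        "none", path], check=True)
    # Persisted form (what state=mounted writes to /etc/fstab):
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0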
Oct 02 19:17:23 compute-0 ceph-mon[191910]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:23 compute-0 sshd-session[241682]: Connection closed by 192.168.122.30 port 48952
Oct 02 19:17:23 compute-0 sshd-session[241679]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:17:23 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Oct 02 19:17:23 compute-0 systemd[1]: session-47.scope: Consumed 48.767s CPU time.
Oct 02 19:17:23 compute-0 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Oct 02 19:17:23 compute-0 systemd-logind[793]: Removed session 47.
Oct 02 19:17:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:23 compute-0 podman[249941]: 2025-10-02 19:17:23.694934998 +0000 UTC m=+0.108319538 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:17:23 compute-0 podman[249940]: 2025-10-02 19:17:23.70744805 +0000 UTC m=+0.130388202 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:17:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:24 compute-0 ceph-mon[191910]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:25 compute-0 podman[249978]: 2025-10-02 19:17:25.716573004 +0000 UTC m=+0.131809921 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, release-0.7.12=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543)
Oct 02 19:17:26 compute-0 ceph-mon[191910]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:28 compute-0 ceph-mon[191910]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 19:17:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:29 compute-0 sshd-session[249999]: Accepted publickey for zuul from 192.168.122.30 port 44736 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:17:29 compute-0 systemd-logind[793]: New session 48 of user zuul.
Oct 02 19:17:29 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 02 19:17:29 compute-0 sshd-session[249999]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:17:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:29 compute-0 podman[157186]: time="2025-10-02T19:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:17:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:17:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6837 "" "Go-http-client/1.1"
Oct 02 19:17:30 compute-0 sudo[250152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvushuwisrvvlwwrelspavkvazeybhfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432649.4956224-16-33901739652385/AnsiballZ_tempfile.py'
Oct 02 19:17:30 compute-0 sudo[250152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:30 compute-0 python3.9[250154]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 19:17:30 compute-0 sudo[250152]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:30 compute-0 ceph-mon[191910]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:31 compute-0 openstack_network_exporter[159337]: ERROR   19:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:31 compute-0 openstack_network_exporter[159337]: ERROR   19:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:17:31 compute-0 openstack_network_exporter[159337]: ERROR   19:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:31 compute-0 openstack_network_exporter[159337]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:17:31 compute-0 openstack_network_exporter[159337]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:17:31 compute-0 sudo[250304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igvhiyfjhxrzzkmcawwrqepqdznicwqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432650.8761575-28-50914097375527/AnsiballZ_stat.py'
Oct 02 19:17:31 compute-0 sudo[250304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:31 compute-0 python3.9[250306]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:17:31 compute-0 sudo[250304]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:32 compute-0 ceph-mon[191910]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:32 compute-0 sudo[250458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqxpdcwfjqdzccnrdnjihvvxhqrdrpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432652.1961071-36-9953455052748/AnsiballZ_slurp.py'
Oct 02 19:17:32 compute-0 sudo[250458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:33 compute-0 python3.9[250460]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 02 19:17:33 compute-0 sudo[250458]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:17:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:34 compute-0 sudo[250610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmvzjtljovaiclguiszzjvhpsxkwqup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432653.511283-44-132990255868238/AnsiballZ_stat.py'
Oct 02 19:17:34 compute-0 sudo[250610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:34 compute-0 python3.9[250612]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hj78g5c1 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:17:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:34 compute-0 sudo[250610]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:34 compute-0 ceph-mon[191910]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:35 compute-0 sudo[250735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhpxmkntaitmeekwzadyszgyhxbsqvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432653.511283-44-132990255868238/AnsiballZ_copy.py'
Oct 02 19:17:35 compute-0 sudo[250735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:35 compute-0 python3.9[250737]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hj78g5c1 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432653.511283-44-132990255868238/.source.hj78g5c1 _original_basename=.98sau3r_ follow=False checksum=8d50fe321827f22ed6aa9a1728e25d66f9d49d74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:35 compute-0 sudo[250735]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:36 compute-0 sudo[250887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsaqgotlluxlhqocxvrdymcdwdqehho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432655.7413435-59-172939152028178/AnsiballZ_setup.py'
Oct 02 19:17:36 compute-0 sudo[250887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:36 compute-0 ceph-mon[191910]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:36 compute-0 python3.9[250889]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:17:36 compute-0 sudo[250887]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:38 compute-0 sudo[251039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmbhdnuyhsmqzjlsbtrphpcdoxdtbpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432657.338344-68-71331932134424/AnsiballZ_blockinfile.py'
Oct 02 19:17:38 compute-0 sudo[251039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:38 compute-0 python3.9[251041]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbvy1nmZlQ1mwB+8mXD1QVEPHj9WDqCT0xaUa0WXwPbTqC63n5C/4mCHmoqqXTwoEhHX7so7AlSpv5zZ7hPkQOsh2gCmla2/HhNjy/xA5JU+H4TM08v9CmvM5ymnbSuLlQxrYXJOAzVSvZV4eKucl4LDsV5CMlRMJjTim4/SvCrGpM09ZfwVaN0pzt0NY21deN4P7w4mt27M+xtoVorj/BupjoBo24TZzqPokPuZXFUigBfiHWqiEENVhU9baZbXWsxcToG6PgefXxjz0KPMd7Nuk7aP8paYmZwXQfEZgVe+m5ihzuwQw5rtVmj0XDfT/OT+kUBWhVInST0A96gtIN5d/7rsiWdiCFPqEu0sJEG3rMPkinVARq5Q4hV/I8dZ45vEYVV6KVipSJYx2eldcJrSpYH2LoC3XLdQoJBlWr5Mz50aFuI35bWbkZAbLcG9UJvIQDKZ8Z+UC/JnYyCHn3m2Zimlf93NaKxuB4cuROvZYifnCiCOr9xV1pyAguC6E=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK8dVLzi3MTJZ5eDxe5XdUxMonA7YKX5W9IYtbfghkzW
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2CDn2xLcobMglTrqlWQW+s0s7KVx/tuT7qoElt54b5qX7SDKjeu7ZNAyB2Kosqdgz51mquHrgoPZYMVp0nqB8=
                                              create=True mode=0644 path=/tmp/ansible.hj78g5c1 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:38 compute-0 sudo[251039]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:38 compute-0 ceph-mon[191910]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:39 compute-0 sudo[251191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymnorhcjmxpzsbtqdatwchiukmghcxer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432658.5857177-76-160395904976721/AnsiballZ_command.py'
Oct 02 19:17:39 compute-0 sudo[251191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:39 compute-0 python3.9[251193]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hj78g5c1' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:17:39 compute-0 sudo[251191]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:40 compute-0 sudo[251345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnhtkshpbvqfvzeofygnubecuhixjqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432659.7901735-84-99171413757176/AnsiballZ_file.py'
Oct 02 19:17:40 compute-0 sudo[251345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:40 compute-0 python3.9[251347]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hj78g5c1 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:40 compute-0 sudo[251345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:40 compute-0 ceph-mon[191910]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:41 compute-0 sshd-session[250002]: Connection closed by 192.168.122.30 port 44736
Oct 02 19:17:41 compute-0 sshd-session[249999]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:17:41 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 02 19:17:41 compute-0 systemd[1]: session-48.scope: Consumed 8.950s CPU time.
Oct 02 19:17:41 compute-0 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Oct 02 19:17:41 compute-0 systemd-logind[793]: Removed session 48.
Oct 02 19:17:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:42 compute-0 ceph-mon[191910]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:44 compute-0 podman[251372]: 2025-10-02 19:17:44.724720364 +0000 UTC m=+0.145610206 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 02 19:17:44 compute-0 podman[251373]: 2025-10-02 19:17:44.728807482 +0000 UTC m=+0.147755813 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:17:44 compute-0 ceph-mon[191910]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:46 compute-0 sshd-session[251413]: Accepted publickey for zuul from 192.168.122.30 port 57206 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:17:46 compute-0 systemd-logind[793]: New session 49 of user zuul.
Oct 02 19:17:46 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 02 19:17:46 compute-0 sshd-session[251413]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:17:46 compute-0 ceph-mon[191910]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:47 compute-0 python3.9[251566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:17:48 compute-0 podman[251647]: 2025-10-02 19:17:48.712708559 +0000 UTC m=+0.132477917 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350)
Oct 02 19:17:48 compute-0 podman[251648]: 2025-10-02 19:17:48.778782218 +0000 UTC m=+0.197057517 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:17:48 compute-0 ceph-mon[191910]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:49 compute-0 sudo[251766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seqgzfkvcwcjsrpdzdjfkgcdsjsyzltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432668.3216364-32-21303146880274/AnsiballZ_systemd.py'
Oct 02 19:17:49 compute-0 sudo[251766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:49 compute-0 python3.9[251768]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 19:17:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:49 compute-0 sudo[251766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:50 compute-0 sudo[251921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxmuhyqguzlynxmmlbnktzsmdsdbkip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432669.958775-40-18562779253690/AnsiballZ_systemd.py'
Oct 02 19:17:50 compute-0 sudo[251921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:50 compute-0 python3.9[251923]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:17:50 compute-0 sudo[251921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:50 compute-0 ceph-mon[191910]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:51 compute-0 sudo[252074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkqetmivglioxuhblwvyyptgeackkcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432671.1983948-49-75518765003486/AnsiballZ_command.py'
Oct 02 19:17:51 compute-0 sudo[252074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:52 compute-0 python3.9[252076]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:17:52 compute-0 sudo[252074]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:52 compute-0 ceph-mon[191910]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:53 compute-0 sudo[252227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmkjetnwbsgihmdbxtsfqwrtansrfcwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432672.3806064-57-265949330875265/AnsiballZ_stat.py'
Oct 02 19:17:53 compute-0 sudo[252227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:53 compute-0 python3.9[252229]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:17:53 compute-0 sudo[252227]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.302211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674302245, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1674, "num_deletes": 251, "total_data_size": 2371597, "memory_usage": 2422520, "flush_reason": "Manual Compaction"}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674314437, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1389659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7316, "largest_seqno": 8989, "table_properties": {"data_size": 1384113, "index_size": 2494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16279, "raw_average_key_size": 20, "raw_value_size": 1370939, "raw_average_value_size": 1757, "num_data_blocks": 117, "num_entries": 780, "num_filter_entries": 780, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432518, "oldest_key_time": 1759432518, "file_creation_time": 1759432674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 12301 microseconds, and 5471 cpu microseconds.
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.314510) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1389659 bytes OK
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.314533) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.317047) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.317063) EVENT_LOG_v1 {"time_micros": 1759432674317059, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.317082) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2364116, prev total WAL file size 2364116, number of live WAL files 2.
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.318447) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1357KB)], [20(6957KB)]
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674318565, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8514578, "oldest_snapshot_seqno": -1}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3371 keys, 6815051 bytes, temperature: kUnknown
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674396016, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6815051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6789214, "index_size": 16320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80651, "raw_average_key_size": 23, "raw_value_size": 6724968, "raw_average_value_size": 1994, "num_data_blocks": 724, "num_entries": 3371, "num_filter_entries": 3371, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759432674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.396358) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6815051 bytes
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.399688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.3 rd, 88.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.0) write-amplify(4.9) OK, records in: 3813, records dropped: 442 output_compression: NoCompression
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.399724) EVENT_LOG_v1 {"time_micros": 1759432674399707, "job": 6, "event": "compaction_finished", "compaction_time_micros": 77217, "compaction_time_cpu_micros": 45562, "output_level": 6, "num_output_files": 1, "total_output_size": 6815051, "num_input_records": 3813, "num_output_records": 3371, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674400358, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432674403071, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.318156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.403303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.403314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.403318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.403322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:17:54.403326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:17:54 compute-0 sudo[252411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfllewufmhjrmberwedljsdciatsgwph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432673.7286785-66-118473073706083/AnsiballZ_file.py'
Oct 02 19:17:54 compute-0 sudo[252411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:17:54 compute-0 podman[252354]: 2025-10-02 19:17:54.484876584 +0000 UTC m=+0.114022530 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:17:54 compute-0 podman[252353]: 2025-10-02 19:17:54.49155855 +0000 UTC m=+0.123228353 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:17:54 compute-0 python3.9[252423]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:17:54 compute-0 sudo[252411]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:55 compute-0 sshd-session[251416]: Connection closed by 192.168.122.30 port 57206
Oct 02 19:17:55 compute-0 sshd-session[251413]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:17:55 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 02 19:17:55 compute-0 systemd[1]: session-49.scope: Consumed 6.726s CPU time.
Oct 02 19:17:55 compute-0 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Oct 02 19:17:55 compute-0 systemd-logind[793]: Removed session 49.
Oct 02 19:17:55 compute-0 ceph-mon[191910]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:56 compute-0 podman[252448]: 2025-10-02 19:17:56.707282504 +0000 UTC m=+0.124944799 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:17:57 compute-0 ceph-mon[191910]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:58 compute-0 sudo[252467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:17:58 compute-0 sudo[252467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:17:58 compute-0 sudo[252467]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:59 compute-0 sudo[252492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:17:59 compute-0 sudo[252492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:17:59 compute-0 sudo[252492]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:59 compute-0 sudo[252517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:17:59 compute-0 sudo[252517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:17:59 compute-0 sudo[252517]: pam_unix(sudo:session): session closed for user root
Oct 02 19:17:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:17:59 compute-0 sudo[252542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:17:59 compute-0 sudo[252542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:17:59 compute-0 ceph-mon[191910]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:17:59 compute-0 podman[157186]: time="2025-10-02T19:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:17:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:17:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
Oct 02 19:17:59 compute-0 sudo[252542]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 42c71126-9035-4f7e-bb30-775a9685e783 does not exist
Oct 02 19:18:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ac6d99d0-6e62-415f-92f9-08f3d7a7c0dd does not exist
Oct 02 19:18:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c4721893-3122-4251-88e9-00788d1d6e22 does not exist
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:18:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:18:00 compute-0 sudo[252598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:00 compute-0 sudo[252598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:00 compute-0 sudo[252598]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:00 compute-0 sudo[252623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:18:00 compute-0 sudo[252623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:00 compute-0 sudo[252623]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:18:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:18:00 compute-0 sudo[252648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:00 compute-0 sudo[252648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:00 compute-0 sudo[252648]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:00 compute-0 sudo[252673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:18:00 compute-0 sudo[252673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:00 compute-0 sshd-session[252698]: Accepted publickey for zuul from 192.168.122.30 port 35464 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:18:00 compute-0 systemd-logind[793]: New session 50 of user zuul.
Oct 02 19:18:00 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 02 19:18:00 compute-0 sshd-session[252698]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.203704102 +0000 UTC m=+0.087658832 container create bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.161912206 +0000 UTC m=+0.045867026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:01 compute-0 systemd[1]: Started libpod-conmon-bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30.scope.
Oct 02 19:18:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.320852614 +0000 UTC m=+0.204807394 container init bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.332157773 +0000 UTC m=+0.216112513 container start bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.337101914 +0000 UTC m=+0.221056674 container attach bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:18:01 compute-0 busy_brattain[252809]: 167 167
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.341067929 +0000 UTC m=+0.225022659 container died bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:18:01 compute-0 systemd[1]: libpod-bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30.scope: Deactivated successfully.
Oct 02 19:18:01 compute-0 ceph-mon[191910]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-38e9f9d915612b3691724760ed72218a1077f3b22c837e0a3f9b25f30aebf526-merged.mount: Deactivated successfully.
Oct 02 19:18:01 compute-0 podman[252793]: 2025-10-02 19:18:01.392948623 +0000 UTC m=+0.276903353 container remove bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:18:01 compute-0 systemd[1]: libpod-conmon-bcc5e965e47adf2b4f1d3e7a8c1f1bd911121a2abf0a97e1db3f3ae966b37e30.scope: Deactivated successfully.
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: ERROR   19:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: ERROR   19:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: ERROR   19:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:18:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:18:01 compute-0 podman[252837]: 2025-10-02 19:18:01.618512915 +0000 UTC m=+0.083596135 container create 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:18:01 compute-0 podman[252837]: 2025-10-02 19:18:01.581837044 +0000 UTC m=+0.046920314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:01 compute-0 systemd[1]: Started libpod-conmon-29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f.scope.
Oct 02 19:18:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:01 compute-0 podman[252837]: 2025-10-02 19:18:01.835353806 +0000 UTC m=+0.300437056 container init 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:18:01 compute-0 podman[252837]: 2025-10-02 19:18:01.857104952 +0000 UTC m=+0.322188172 container start 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:18:01 compute-0 podman[252837]: 2025-10-02 19:18:01.863266275 +0000 UTC m=+0.328349515 container attach 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:18:02 compute-0 python3.9[252949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:18:03 compute-0 angry_carson[252894]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:18:03 compute-0 angry_carson[252894]: --> relative data size: 1.0
Oct 02 19:18:03 compute-0 angry_carson[252894]: --> All data devices are unavailable
Oct 02 19:18:03 compute-0 systemd[1]: libpod-29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f.scope: Deactivated successfully.
Oct 02 19:18:03 compute-0 systemd[1]: libpod-29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f.scope: Consumed 1.212s CPU time.
Oct 02 19:18:03 compute-0 podman[252837]: 2025-10-02 19:18:03.140992633 +0000 UTC m=+1.606075873 container died 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced07fde24b686b9801724ebaf1f59129255606cb04007fa753a6e98d7cd9093-merged.mount: Deactivated successfully.
Oct 02 19:18:03 compute-0 podman[252837]: 2025-10-02 19:18:03.239949493 +0000 UTC m=+1.705032713 container remove 29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:18:03 compute-0 systemd[1]: libpod-conmon-29c3e57cc518644e02c602d0e266a89b0246e7346024c403c22fa761aedd177f.scope: Deactivated successfully.
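The create/init/start/attach/died/remove sequence above is cephadm's one-shot container pattern: each ceph-volume probe runs in a fresh, short-lived podman container of the pinned ceph image. A simplified sketch of the equivalent invocation (not cephadm's actual code, which also sets an explicit entrypoint and the bind mounts visible in the xfs remount lines — /etc/ceph/ceph.conf, /var/log/ceph, /var/lib/ceph/...):

    # Sketch of the one-shot pattern the systemd/podman lines above record:
    # run ceph-volume inside the pinned ceph image, capture its output,
    # and let --rm clean the container up afterwards.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def ceph_volume(*args: str) -> str:
        cmd = ["podman", "run", "--rm", "--privileged", IMAGE,
               "ceph-volume", *args]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    # e.g. the call cephadm issues at 19:18:03 below:
    # print(ceph_volume("lvm", "list", "--format", "json"))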
Oct 02 19:18:03 compute-0 sudo[252673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:03 compute-0 sudo[253091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:03 compute-0 sudo[253091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:03 compute-0 sudo[253091]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:03 compute-0 ceph-mon[191910]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:18:03
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:18:03 compute-0 sudo[253187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibychuhxberzwlbfqfvmckojlcccqpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432682.9458125-34-29974751350133/AnsiballZ_setup.py'
Oct 02 19:18:03 compute-0 sudo[253142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:18:03 compute-0 sudo[253142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:03 compute-0 sudo[253187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:03 compute-0 sudo[253142]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:03 compute-0 sudo[253192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:03 compute-0 sudo[253192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:03 compute-0 sudo[253192]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:03 compute-0 sudo[253217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:18:03 compute-0 sudo[253217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:18:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:18:03 compute-0 python3.9[253191]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:18:04 compute-0 sudo[253187]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.170306126 +0000 UTC m=+0.064322094 container create 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:18:04 compute-0 systemd[1]: Started libpod-conmon-33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d.scope.
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.144950494 +0000 UTC m=+0.038966452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.324788966 +0000 UTC m=+0.218804944 container init 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.341777116 +0000 UTC m=+0.235793064 container start 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:18:04 compute-0 stupefied_williamson[253304]: 167 167
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.348326289 +0000 UTC m=+0.242342237 container attach 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:18:04 compute-0 systemd[1]: libpod-33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d.scope: Deactivated successfully.
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.355230912 +0000 UTC m=+0.249246890 container died 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68f00deb4477c017888b3fd24310b89cc113ba0f4761f160a79521b5504623b-merged.mount: Deactivated successfully.
Oct 02 19:18:04 compute-0 podman[253288]: 2025-10-02 19:18:04.438985589 +0000 UTC m=+0.333001557 container remove 33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williamson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:18:04 compute-0 systemd[1]: libpod-conmon-33dcc31f269f6afdc0cb30cb6abb212e9415dc6b375ee239a07e30985b34ac7d.scope: Deactivated successfully.
Oct 02 19:18:04 compute-0 podman[253375]: 2025-10-02 19:18:04.669518773 +0000 UTC m=+0.076015084 container create b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:18:04 compute-0 sudo[253413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jidkvsvyuqlagvgpfnbwjkrslznvhruy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432682.9458125-34-29974751350133/AnsiballZ_dnf.py'
Oct 02 19:18:04 compute-0 sudo[253413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:04 compute-0 systemd[1]: Started libpod-conmon-b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8.scope.
Oct 02 19:18:04 compute-0 podman[253375]: 2025-10-02 19:18:04.635126512 +0000 UTC m=+0.041622863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c3127b6672c0d3762eca090298da861beb14a4bd1478eb77068f17622bdb8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c3127b6672c0d3762eca090298da861beb14a4bd1478eb77068f17622bdb8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c3127b6672c0d3762eca090298da861beb14a4bd1478eb77068f17622bdb8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c3127b6672c0d3762eca090298da861beb14a4bd1478eb77068f17622bdb8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:04 compute-0 podman[253375]: 2025-10-02 19:18:04.791668647 +0000 UTC m=+0.198164968 container init b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:18:04 compute-0 podman[253375]: 2025-10-02 19:18:04.811339238 +0000 UTC m=+0.217835539 container start b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:18:04 compute-0 podman[253375]: 2025-10-02 19:18:04.815305533 +0000 UTC m=+0.221801854 container attach b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:18:05 compute-0 python3.9[253419]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 19:18:05 compute-0 ceph-mon[191910]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:05 compute-0 blissful_chaum[253420]: {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     "0": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "devices": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "/dev/loop3"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             ],
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_name": "ceph_lv0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_size": "21470642176",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "name": "ceph_lv0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "tags": {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_name": "ceph",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.crush_device_class": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.encrypted": "0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_id": "0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.vdo": "0"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             },
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "vg_name": "ceph_vg0"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         }
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     ],
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     "1": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "devices": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "/dev/loop4"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             ],
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_name": "ceph_lv1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_size": "21470642176",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "name": "ceph_lv1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "tags": {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_name": "ceph",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.crush_device_class": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.encrypted": "0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_id": "1",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.vdo": "0"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             },
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "vg_name": "ceph_vg1"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         }
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     ],
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     "2": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "devices": [
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "/dev/loop5"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             ],
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_name": "ceph_lv2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_size": "21470642176",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "name": "ceph_lv2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "tags": {
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.cluster_name": "ceph",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.crush_device_class": "",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.encrypted": "0",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osd_id": "2",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:                 "ceph.vdo": "0"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             },
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "type": "block",
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:             "vg_name": "ceph_vg2"
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:         }
Oct 02 19:18:05 compute-0 blissful_chaum[253420]:     ]
Oct 02 19:18:05 compute-0 blissful_chaum[253420]: }
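The JSON the blissful_chaum container just printed is the output of the `ceph-volume lvm list --format json` call logged at 19:18:03: a map from OSD id to the LV records backing it. A minimal sketch that reduces it to an osd_id -> device table, using only keys present in the dump above:

    # Minimal sketch: summarize `ceph-volume lvm list --format json`
    # (the JSON printed above) into an osd_id -> device table.
    import json, sys

    def summarize(lvm_list_json: str) -> dict:
        data = json.loads(lvm_list_json)
        table = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                table[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],        # e.g. ["/dev/loop3"]
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "encrypted": tags.get("ceph.encrypted") == "1",
                }
        return table

    if __name__ == "__main__":
        print(json.dumps(summarize(sys.stdin.read()), indent=4))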
Oct 02 19:18:05 compute-0 systemd[1]: libpod-b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8.scope: Deactivated successfully.
Oct 02 19:18:05 compute-0 podman[253375]: 2025-10-02 19:18:05.635987251 +0000 UTC m=+1.042483592 container died b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:18:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5c3127b6672c0d3762eca090298da861beb14a4bd1478eb77068f17622bdb8c-merged.mount: Deactivated successfully.
Oct 02 19:18:05 compute-0 podman[253375]: 2025-10-02 19:18:05.74772612 +0000 UTC m=+1.154222431 container remove b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:18:05 compute-0 systemd[1]: libpod-conmon-b375f5daa9f4683af0707b001bd4d7bfc1eab68fac8a1e0d17960b91cc5e7ab8.scope: Deactivated successfully.
Oct 02 19:18:05 compute-0 sudo[253217]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:05 compute-0 sudo[253441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:05 compute-0 sudo[253441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:05 compute-0 sudo[253441]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:06 compute-0 sudo[253466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:18:06 compute-0 sudo[253466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:06 compute-0 sudo[253466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:06 compute-0 sudo[253491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:06 compute-0 sudo[253491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:06 compute-0 sudo[253491]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:06 compute-0 sudo[253516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:18:06 compute-0 sudo[253516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:06 compute-0 sudo[253413]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.683645708 +0000 UTC m=+0.069296165 container create d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.650236924 +0000 UTC m=+0.035887431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:06 compute-0 systemd[1]: Started libpod-conmon-d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99.scope.
Oct 02 19:18:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.836255929 +0000 UTC m=+0.221906416 container init d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.852503749 +0000 UTC m=+0.238154206 container start d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.859359211 +0000 UTC m=+0.245009678 container attach d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:18:06 compute-0 gifted_mccarthy[253670]: 167 167
Oct 02 19:18:06 compute-0 systemd[1]: libpod-d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99.scope: Deactivated successfully.
Oct 02 19:18:06 compute-0 conmon[253670]: conmon d818d295b9a9fe093ec8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99.scope/container/memory.events
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.868234336 +0000 UTC m=+0.253884803 container died d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a3da578f0d242357f26a44578884243bd7ab0f681253f14bc21b124d3ca2970-merged.mount: Deactivated successfully.
Oct 02 19:18:06 compute-0 podman[253651]: 2025-10-02 19:18:06.947793762 +0000 UTC m=+0.333444219 container remove d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:18:06 compute-0 systemd[1]: libpod-conmon-d818d295b9a9fe093ec8c39299ab6181c3c2e618722d84ecc82f3467a98e5d99.scope: Deactivated successfully.
Oct 02 19:18:07 compute-0 podman[253749]: 2025-10-02 19:18:07.238744295 +0000 UTC m=+0.088968106 container create b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:18:07 compute-0 podman[253749]: 2025-10-02 19:18:07.207312583 +0000 UTC m=+0.057536484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:18:07 compute-0 systemd[1]: Started libpod-conmon-b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e.scope.
Oct 02 19:18:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f9cfb6b85784380efbd512bf3b491d3c03318fbeba8761afbe521b26e23572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f9cfb6b85784380efbd512bf3b491d3c03318fbeba8761afbe521b26e23572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f9cfb6b85784380efbd512bf3b491d3c03318fbeba8761afbe521b26e23572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f9cfb6b85784380efbd512bf3b491d3c03318fbeba8761afbe521b26e23572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:18:07 compute-0 podman[253749]: 2025-10-02 19:18:07.367774972 +0000 UTC m=+0.217998803 container init b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:18:07 compute-0 podman[253749]: 2025-10-02 19:18:07.3885022 +0000 UTC m=+0.238726051 container start b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:18:07 compute-0 podman[253749]: 2025-10-02 19:18:07.393132123 +0000 UTC m=+0.243355934 container attach b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:18:07 compute-0 python3.9[253777]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
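The ansible task above runs `needs-restarting -r` from yum-utils (installed by the dnf task at 19:18:05); the tool reports "reboot required" through its exit status, 0 for no and 1 for yes. A small sketch of the same check:

    # Sketch of the reboot check driven by the ansible task above:
    # `needs-restarting -r` exits 0 when no reboot is needed and 1 when
    # core packages (kernel, glibc, systemd, ...) were updated.
    import subprocess

    def reboot_required() -> bool:
        proc = subprocess.run(["needs-restarting", "-r"],
                              capture_output=True, text=True)
        return proc.returncode == 1

    if __name__ == "__main__":
        print("reboot required:", reboot_required())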
Oct 02 19:18:07 compute-0 ceph-mon[191910]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]: {
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_id": 1,
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "type": "bluestore"
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     },
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_id": 2,
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "type": "bluestore"
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     },
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_id": 0,
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:         "type": "bluestore"
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]:     }
Oct 02 19:18:08 compute-0 romantic_cartwright[253783]: }
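This second dump is `ceph-volume raw list --format json` (the command logged at 19:18:06): the same three bluestore OSDs, now keyed by osd_uuid rather than osd_id. One sanity check a script might do is join it against the earlier lvm list output via the ceph.osd_fsid tag — a sketch, assuming the two JSON documents were saved to hypothetical files lvm_list.json and raw_list.json:

    # Sketch: cross-check `raw list` (keyed by osd_uuid, printed above)
    # against `lvm list` (keyed by osd_id, printed at 19:18:05) -- each
    # LV's ceph.osd_fsid tag should match a raw entry's osd_uuid.
    import json

    def cross_check(lvm_list: dict, raw_list: dict) -> None:
        for osd_id, lvs in lvm_list.items():
            fsid = lvs[0]["tags"]["ceph.osd_fsid"]
            raw = raw_list.get(fsid)
            assert raw is not None, f"osd.{osd_id}: fsid {fsid} missing"
            assert raw["osd_id"] == int(osd_id), f"osd.{osd_id}: id mismatch"
            print(f"osd.{osd_id}: {raw['device']} ({raw['type']}) ok")

    if __name__ == "__main__":
        with open("lvm_list.json") as f1, open("raw_list.json") as f2:
            cross_check(json.load(f1), json.load(f2))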
Oct 02 19:18:08 compute-0 systemd[1]: libpod-b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e.scope: Deactivated successfully.
Oct 02 19:18:08 compute-0 podman[253749]: 2025-10-02 19:18:08.57862106 +0000 UTC m=+1.428844911 container died b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:18:08 compute-0 systemd[1]: libpod-b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e.scope: Consumed 1.187s CPU time.
Oct 02 19:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f9cfb6b85784380efbd512bf3b491d3c03318fbeba8761afbe521b26e23572-merged.mount: Deactivated successfully.
Oct 02 19:18:08 compute-0 podman[253749]: 2025-10-02 19:18:08.658103925 +0000 UTC m=+1.508327756 container remove b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:18:08 compute-0 systemd[1]: libpod-conmon-b2d822b7f67a3bc73fbce6cce253579083b0a86eaaee650bf64002b27ad6cf8e.scope: Deactivated successfully.
Oct 02 19:18:08 compute-0 sudo[253516]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:18:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:18:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:08 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev be4f5a6f-d87e-413b-95fc-43ab853546c6 does not exist
Oct 02 19:18:08 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 69e3f8ac-b709-421d-bb2e-4b7ae786397d does not exist
Oct 02 19:18:08 compute-0 sudo[253950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:18:08 compute-0 sudo[253950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:08 compute-0 sudo[253950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:08 compute-0 sudo[254001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:18:08 compute-0 sudo[254001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:18:08 compute-0 sudo[254001]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:09 compute-0 python3.9[254002]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:18:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:09 compute-0 ceph-mon[191910]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:18:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:10 compute-0 python3.9[254176]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:18:11 compute-0 python3.9[254326]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:18:11 compute-0 ceph-mon[191910]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:11 compute-0 sshd-session[252708]: Connection closed by 192.168.122.30 port 35464
Oct 02 19:18:11 compute-0 sshd-session[252698]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:18:11 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 02 19:18:11 compute-0 systemd[1]: session-50.scope: Consumed 8.634s CPU time.
Oct 02 19:18:11 compute-0 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Oct 02 19:18:11 compute-0 systemd-logind[793]: Removed session 50.
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:18:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
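[editor's note] The pg_autoscaler lines above contain enough to reconstruct the raw target before quantization: for every pool, pg target = capacity_ratio * bias * 300, where the factor 300 is inferred from the logged numbers themselves (pg target divided by usage times bias is 300 for each pool; it plausibly decomposes into mon_target_pg_per_osd = 100 times 3 OSDs, but that split is an assumption). A short sketch checking the arithmetic against three of the pools; the power-of-two quantization and its hysteresis are deliberately omitted:

    # Reconstruction of the raw pg_autoscaler target visible in the log above.
    # root_pg_target=300 is inferred from the logged ratios, not read from config.
    def pg_target(capacity_ratio, bias, root_pg_target=300):
        return capacity_ratio * bias * root_pg_target

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
    ]:
        computed = pg_target(ratio, bias)
        print(f"{pool}: computed={computed:.12g}  logged={logged:.12g}")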
Oct 02 19:18:13 compute-0 ceph-mon[191910]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:18:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Cumulative writes: 2032 writes, 9022 keys, 2032 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                            Cumulative WAL: 2032 writes, 2032 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2032 writes, 9022 keys, 2032 commit groups, 1.0 writes per commit group, ingest: 10.85 MB, 0.02 MB/s
                                            Interval WAL: 2032 writes, 2032 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    112.4      0.07              0.03         3    0.024       0      0       0.0       0.0
                                              L6      1/0    6.50 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    102.1     90.8      0.15              0.08         2    0.073    7136    732       0.0       0.0
                                             Sum      1/0    6.50 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     68.3     97.9      0.22              0.12         5    0.044    7136    732       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     69.7     99.7      0.21              0.12         4    0.054    7136    732       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    102.1     90.8      0.15              0.08         2    0.073    7136    732       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    118.9      0.07              0.03         2    0.034       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.008, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 308.00 MB usage: 631.69 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(36,544.97 KB,0.172791%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
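[editor's note] The derived rates in the DB Stats dump above follow from its own counters: 10.85 MB of interval ingest over the 600.0 s interval is about 0.018 MB/s, which rounds to the printed 0.02 MB/s, and 2032 WAL writes against 2032 syncs gives the reported 1.00 writes per sync. A two-line sanity check of that arithmetic:

    # Cross-check the derived rates in the RocksDB "DB Stats" block above.
    uptime_s   = 600.0    # "Uptime(secs): 600.0 total, 600.0 interval"
    ingest_mb  = 10.85    # "Interval writes: ... ingest: 10.85 MB"
    wal_writes = 2032
    wal_syncs  = 2032
    print(f"ingest rate : {ingest_mb / uptime_s:.2f} MB/s")   # -> 0.02 MB/s
    print(f"writes/sync : {wal_writes / wal_syncs:.2f}")      # -> 1.00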
Oct 02 19:18:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:15 compute-0 ceph-mon[191910]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:15 compute-0 podman[254353]: 2025-10-02 19:18:15.698578178 +0000 UTC m=+0.114497532 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:18:15 compute-0 podman[254354]: 2025-10-02 19:18:15.699852232 +0000 UTC m=+0.112969162 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:18:16 compute-0 sshd-session[254396]: Accepted publickey for zuul from 192.168.122.30 port 51410 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:18:16 compute-0 systemd-logind[793]: New session 51 of user zuul.
Oct 02 19:18:16 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 02 19:18:17 compute-0 sshd-session[254396]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:18:17 compute-0 ceph-mon[191910]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:18 compute-0 python3.9[254549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:18:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:19 compute-0 ceph-mon[191910]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:19 compute-0 podman[254579]: 2025-10-02 19:18:19.706903202 +0000 UTC m=+0.122596076 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:18:19 compute-0 podman[254580]: 2025-10-02 19:18:19.76609779 +0000 UTC m=+0.179817042 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
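[editor's note] The podman health_status records above pack all container metadata into one parenthesized, comma-separated key=value list with a nested Python-literal config_data. A deliberately minimal, fragile sketch that pulls two flat fields out of such a line; the sample string is abbreviated by hand, and a faithful parse of config_data would need a real parser rather than a regex:

    import re

    # Abbreviated stand-in for a podman "container health_status" journal line.
    line = ("container health_status daccf08b (image=quay.io/podified-antelope-centos9/"
            "openstack-ovn-controller:current-podified, name=ovn_controller, "
            "health_status=healthy, health_failing_streak=0)")

    # \bname= avoids matching container_name=; this breaks down on nested
    # config_data values, so treat it as an illustration only.
    name   = re.search(r"\bname=([^,)]+)", line)
    status = re.search(r"\bhealth_status=([^,)]+)", line)
    print(name.group(1), status.group(1))   # -> ovn_controller healthy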
Oct 02 19:18:20 compute-0 sudo[254749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdyysqqngmdyxhlqrbglonhlfwgbxlbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432699.824706-50-126668051095280/AnsiballZ_file.py'
Oct 02 19:18:20 compute-0 sudo[254749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:20 compute-0 python3.9[254751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:20 compute-0 sudo[254749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:21 compute-0 ceph-mon[191910]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:21 compute-0 sudo[254901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzyonkeqijdzsmtuvykuunpbviwjbwyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432701.0757124-50-7626113935789/AnsiballZ_file.py'
Oct 02 19:18:21 compute-0 sudo[254901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:21 compute-0 python3.9[254903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:21 compute-0 sudo[254901]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:22 compute-0 sudo[255053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqjhbxcvuxyjuvozmzsjrljudhmdjfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432702.1229637-65-206202902464116/AnsiballZ_stat.py'
Oct 02 19:18:22 compute-0 sudo[255053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:23 compute-0 python3.9[255055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:23 compute-0 sudo[255053]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:23 compute-0 ceph-mon[191910]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:23 compute-0 sudo[255132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noduxqhqaustpddyxsnmwznfekzdspim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432702.1229637-65-206202902464116/AnsiballZ_file.py'
Oct 02 19:18:23 compute-0 sudo[255132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:23 compute-0 python3.9[255134]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:23 compute-0 sudo[255132]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.437 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.437 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.437 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.438 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:18:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:18:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
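Annotation: the ceilometer debug lines above trace one polling cycle on a compute node that currently hosts no instances. For every compute pollster the agent runs the [local_instances] discovery, finds nothing, logs the "Skip pollster ..., no resources found this cycle" message instead of collecting samples, and then marks the pollster finished. A minimal sketch of that discover-then-poll control flow (class and method names are illustrative, not ceilometer's real API):

```python
import logging

LOG = logging.getLogger("polling.manager")

class AgentManager:
    """Toy stand-in for the discover-then-poll loop logged above."""

    def __init__(self, discoveries):
        # e.g. {"local_instances": <callable returning local libvirt domains>}
        self.discoveries = discoveries

    def discover(self, method):
        return self.discoveries.get(method, lambda: [])()

    def run_pollster(self, pollster, discovery_method):
        LOG.debug("Executing discovery process for pollster [%s] and "
                  "discovery method [%s]", pollster.name, discovery_method)
        resources = self.discover(discovery_method)
        if not resources:
            # Matches "Skip pollster <meter>, no resources found this cycle"
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            samples = []
        else:
            samples = pollster.get_samples(resources)
        LOG.debug("Finished processing pollster [%s].", pollster.name)
        return samples
```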
Oct 02 19:18:24 compute-0 ceph-mon[191910]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
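Annotation: the ceph pgmap lines that recur throughout this log (one per pgmap epoch, reported by both ceph-mon and ceph-mgr) are easy to scrape when you want a quick capacity trend without querying the cluster. A small parser sketch, with the regex tuned to the exact wording above:

```python
import re

# "pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used,
#  60 GiB / 60 GiB avail"
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, "
        "148 MiB used, 60 GiB / 60 GiB avail")
m = PGMAP_RE.search(line)
if m:
    print(m.group("ver"), m.group("pgs"), m.group("used"), m.group("total"))
```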
Oct 02 19:18:24 compute-0 sudo[255287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsooayurognmubzlrkzjdykncsrecrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432704.0676877-65-4130890258337/AnsiballZ_stat.py'
Oct 02 19:18:24 compute-0 sudo[255287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:24 compute-0 podman[255286]: 2025-10-02 19:18:24.665485005 +0000 UTC m=+0.089003177 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:18:24 compute-0 podman[255282]: 2025-10-02 19:18:24.692272095 +0000 UTC m=+0.108320879 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0)
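Annotation: the two podman health_status events above are podman's periodic healthcheck timers firing. Each container's configured test ('/openstack/healthcheck node_exporter' and '/openstack/healthcheck ipmi' in the config_data) is executed inside the container, and the verdict is logged together with the failing streak. The same check can be triggered by hand with podman's `healthcheck run` subcommand; a sketch:

```python
import subprocess

def container_healthy(name: str) -> bool:
    """Run a container's configured healthcheck once.

    `podman healthcheck run <name>` exits 0 when the check passes,
    non-zero when it fails or the container defines no healthcheck.
    """
    result = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True)
    return result.returncode == 0

for name in ("node_exporter", "ceilometer_agent_ipmi"):
    print(name, "healthy" if container_healthy(name) else "unhealthy")
```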
Oct 02 19:18:24 compute-0 python3.9[255304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:24 compute-0 sudo[255287]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:25 compute-0 sudo[255404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cryhrlqayxbgluujqzkxgweypfnlsqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432704.0676877-65-4130890258337/AnsiballZ_file.py'
Oct 02 19:18:25 compute-0 sudo[255404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:25 compute-0 python3.9[255406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:25 compute-0 sudo[255404]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:26 compute-0 sudo[255556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvrddbhnhlowmuyblxxbwkzcghghxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432705.676227-65-116300528978741/AnsiballZ_stat.py'
Oct 02 19:18:26 compute-0 sudo[255556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:26 compute-0 python3.9[255558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:26 compute-0 sudo[255556]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:26 compute-0 ceph-mon[191910]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:27 compute-0 podman[255608]: 2025-10-02 19:18:27.015348422 +0000 UTC m=+0.114534064 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, name=ubi9, release-0.7.12=)
Oct 02 19:18:27 compute-0 sudo[255652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzaimldqwjqebiyeomvvnzvbwikrscs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432705.676227-65-116300528978741/AnsiballZ_file.py'
Oct 02 19:18:27 compute-0 sudo[255652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:27 compute-0 python3.9[255655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:27 compute-0 sudo[255652]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:28 compute-0 sudo[255805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpgbkasgylnlnbebdigngzyblbufejd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432707.5977519-100-254798591341356/AnsiballZ_file.py'
Oct 02 19:18:28 compute-0 sudo[255805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:28 compute-0 python3.9[255807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:28 compute-0 sudo[255805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:28 compute-0 ceph-mon[191910]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:29 compute-0 sudo[255957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqbngpzmkggcqaagiwuewmfzfxaybyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432708.5680468-100-80193934157912/AnsiballZ_file.py'
Oct 02 19:18:29 compute-0 sudo[255957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
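Annotation: the recurring _set_new_cache_sizes line appears to be the monitor's cache-memory autotuning at work, splitting a target of roughly 0.95 GiB between the incremental-osdmap, full-osdmap, and rocksdb key/value caches. The logged figures can be sanity-checked with trivial arithmetic (values copied from the line above):

```python
cache_size = 1020054731   # bytes, ~0.95 GiB target
inc_alloc  = 348127232
full_alloc = 348127232
kv_alloc   = 322961408

GiB = 1024 ** 3
print(f"target: {cache_size / GiB:.2f} GiB")            # 0.95 GiB
print(f"kv share: {kv_alloc / cache_size:.0%}")         # ~32%
# The three allocations fit just under the overall target:
print(inc_alloc + full_alloc + kv_alloc <= cache_size)  # True
```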
Oct 02 19:18:29 compute-0 python3.9[255959]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:29 compute-0 sudo[255957]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:29 compute-0 podman[157186]: time="2025-10-02T19:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:18:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:18:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
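Annotation: the podman[157186] lines above are the libpod REST API service answering a Go HTTP client over the podman socket; judging by the endpoints (containers/json, containers/stats), this is most likely podman_exporter scraping container metrics. The same endpoints can be queried with only the Python standard library; a sketch, assuming the socket path configured for podman_exporter elsewhere in this log (unix:///run/podman/podman.sock):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket -- enough for the libpod API."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # needs access to the socket
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for ctr in json.loads(conn.getresponse().read()):
    print(ctr["Names"][0], ctr["State"])
```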
Oct 02 19:18:30 compute-0 sudo[256109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyefcznknheclvbxvszoyzjskpmfophz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432709.6651044-115-192374155102727/AnsiballZ_stat.py'
Oct 02 19:18:30 compute-0 sudo[256109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:30 compute-0 python3.9[256111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:30 compute-0 sudo[256109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:30 compute-0 ceph-mon[191910]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:30 compute-0 sudo[256187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcpehzwottklnusneozckacazslnieed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432709.6651044-115-192374155102727/AnsiballZ_file.py'
Oct 02 19:18:30 compute-0 sudo[256187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:31 compute-0 python3.9[256189]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:31 compute-0 sudo[256187]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:31 compute-0 openstack_network_exporter[159337]: ERROR   19:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:31 compute-0 openstack_network_exporter[159337]: ERROR   19:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:18:31 compute-0 openstack_network_exporter[159337]: ERROR   19:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:31 compute-0 openstack_network_exporter[159337]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:18:31 compute-0 openstack_network_exporter[159337]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
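Annotation: these exporter errors look expected on this node. openstack_network_exporter drives ovs-appctl-style queries over daemon control sockets, but no ovn-northd or standalone ovsdb-server runs here, and there is no userspace (netdev) datapath to answer the pmd-perf/pmd-rxq queries. The step it fails at is essentially "find the daemon's <name>.<pid>.ctl socket in the rundir"; a sketch of that discovery, assuming the usual OVS/OVN runtime directories:

```python
import glob
import os

RUN_DIRS = ("/var/run/openvswitch", "/run/ovn")  # usual defaults; adjust per host

def find_control_socket(daemon: str):
    """Return the first <daemon>.<pid>.ctl control socket found, or None."""
    for run_dir in RUN_DIRS:
        hits = glob.glob(os.path.join(run_dir, f"{daemon}.*.ctl"))
        if hits:
            return hits[0]
    return None

for daemon in ("ovn-northd", "ovsdb-server", "ovs-vswitchd"):
    sock = find_control_socket(daemon)
    print(daemon, sock or "no control socket files found")
```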
Oct 02 19:18:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:31 compute-0 sudo[256339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeixozxpfutezblgjdkzeezyzdilehxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432711.3720977-115-203917574211368/AnsiballZ_stat.py'
Oct 02 19:18:31 compute-0 sudo[256339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:32 compute-0 python3.9[256341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:32 compute-0 sudo[256339]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:32 compute-0 sudo[256417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhrbrssgiqzriuzzlpiwegpibjcrpuyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432711.3720977-115-203917574211368/AnsiballZ_file.py'
Oct 02 19:18:32 compute-0 sudo[256417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:32 compute-0 python3.9[256419]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:32 compute-0 sudo[256417]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:32 compute-0 ceph-mon[191910]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:33 compute-0 sudo[256569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlorgdlyrzjvytcudzblbsypusxedtue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432712.9614675-115-222217029432967/AnsiballZ_stat.py'
Oct 02 19:18:33 compute-0 sudo[256569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:18:33 compute-0 python3.9[256571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:33 compute-0 sudo[256569]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:34 compute-0 sudo[256647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkpmmhafnvmlgtrhjffqtjhgezbrhanl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432712.9614675-115-222217029432967/AnsiballZ_file.py'
Oct 02 19:18:34 compute-0 sudo[256647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:34 compute-0 python3.9[256649]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:34 compute-0 sudo[256647]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:34 compute-0 ceph-mon[191910]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:35 compute-0 sudo[256799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfrcpeghukdtfsbaluqjssbraqjwdhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432714.835184-150-5302603261355/AnsiballZ_file.py'
Oct 02 19:18:35 compute-0 sudo[256799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:35 compute-0 python3.9[256801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:35 compute-0 sudo[256799]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:36 compute-0 sudo[256951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gagqagagrcuuorlvfrnbytexjhdqdhme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432715.906803-150-23839226208372/AnsiballZ_file.py'
Oct 02 19:18:36 compute-0 sudo[256951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:36 compute-0 python3.9[256953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:36 compute-0 sudo[256951]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:36 compute-0 ceph-mon[191910]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:37 compute-0 sudo[257103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utppdcbhtaqquzgrpzwajxaqmsxbvyck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432716.9781346-165-52003798757984/AnsiballZ_stat.py'
Oct 02 19:18:37 compute-0 sudo[257103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:37 compute-0 python3.9[257105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:37 compute-0 sudo[257103]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:38 compute-0 sudo[257226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysoqeomdajyflrincgmqihnkeygduujh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432716.9781346-165-52003798757984/AnsiballZ_copy.py'
Oct 02 19:18:38 compute-0 sudo[257226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:38 compute-0 python3.9[257228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432716.9781346-165-52003798757984/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=41095a37aabc2859e389656d28d81337337c6ceb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:38 compute-0 sudo[257226]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:38 compute-0 ceph-mon[191910]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:39 compute-0 sudo[257378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mufxozwmwdsadwmepmxwqkpwoiuhykjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432718.9119954-165-207258833090323/AnsiballZ_stat.py'
Oct 02 19:18:39 compute-0 sudo[257378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:39 compute-0 python3.9[257380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:39 compute-0 sudo[257378]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:40 compute-0 sudo[257501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asstgpvutzmyxtslgrvpiolzprsbqxcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432718.9119954-165-207258833090323/AnsiballZ_copy.py'
Oct 02 19:18:40 compute-0 sudo[257501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:40 compute-0 python3.9[257503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432718.9119954-165-207258833090323/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0ba8230f7bd65fdaafc1bb560aa96358742b150a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:40 compute-0 sudo[257501]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:40 compute-0 ceph-mon[191910]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:41 compute-0 sudo[257653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjluvjotpqpyxqkeijhjcfjtiqqvuqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432720.6535885-165-281368772174875/AnsiballZ_stat.py'
Oct 02 19:18:41 compute-0 sudo[257653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:41 compute-0 python3.9[257655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:41 compute-0 sudo[257653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:41 compute-0 sudo[257776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxoizcjpxhxhcvwplktvrjfpznnbqjbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432720.6535885-165-281368772174875/AnsiballZ_copy.py'
Oct 02 19:18:41 compute-0 sudo[257776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:42 compute-0 python3.9[257778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432720.6535885-165-281368772174875/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=543b19ae6b7136839b96c9e132c8bada05750151 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:42 compute-0 sudo[257776]: pam_unix(sudo:session): session closed for user root
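Annotation: the stat/copy/file triplets repeating for every certificate in this section are Ansible's idempotence pattern: stat the destination with a SHA-1 checksum, ship the file with copy only when the checksum differs (note the checksum=... argument in each copy invocation), and otherwise just enforce owner/group/mode with the file module. The same compare-before-copy logic in plain Python, as a sketch (chown needs root, as with the sudo sessions above):

```python
import hashlib
import os
import shutil

def sha1(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def install_file(src, dest, mode=0o600, uid=0, gid=0):
    """Copy src over dest only when contents differ, then enforce ownership/mode."""
    changed = not os.path.exists(dest) or sha1(src) != sha1(dest)
    if changed:
        shutil.copyfile(src, dest)
    os.chown(dest, uid, gid)  # root:root, matching the tasks above
    os.chmod(dest, mode)
    return changed

# e.g. install_file("<ansible tmp dir>/.source.key",  # staged upload, path elided
#                   "/var/lib/openstack/certs/neutron-metadata/default/tls.key")
```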
Oct 02 19:18:42 compute-0 ceph-mon[191910]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:42 compute-0 sudo[257928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqafduxrvnhdhbffapgmrwjmtnwxnzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432722.463944-209-210900720084895/AnsiballZ_file.py'
Oct 02 19:18:42 compute-0 sudo[257928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:43 compute-0 python3.9[257930]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:43 compute-0 sudo[257928]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:43 compute-0 sudo[258080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmwwbtokfyrhuimrtixyjwdydubhumfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432723.3469446-209-13952454947015/AnsiballZ_file.py'
Oct 02 19:18:44 compute-0 sudo[258080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:44 compute-0 python3.9[258082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:44 compute-0 sudo[258080]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:44 compute-0 ceph-mon[191910]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:45 compute-0 sudo[258232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtbrfrynoietrdlddqgtcmfdajdyctij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432724.5793285-224-163323540757654/AnsiballZ_stat.py'
Oct 02 19:18:45 compute-0 sudo[258232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:45 compute-0 python3.9[258234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:45 compute-0 sudo[258232]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:45 compute-0 sudo[258310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwxozjzplzmngrczhrtvnhrqyvklmpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432724.5793285-224-163323540757654/AnsiballZ_file.py'
Oct 02 19:18:45 compute-0 sudo[258310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:45 compute-0 podman[258313]: 2025-10-02 19:18:45.992469156 +0000 UTC m=+0.119629407 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:18:45 compute-0 podman[258312]: 2025-10-02 19:18:45.995966488 +0000 UTC m=+0.126745454 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:18:46 compute-0 python3.9[258314]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:46 compute-0 sudo[258310]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:46 compute-0 ceph-mon[191910]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:46 compute-0 sudo[258504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acffricztcosnqdbolalaxetqbyrmxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432726.3762896-224-75698192354358/AnsiballZ_stat.py'
Oct 02 19:18:46 compute-0 sudo[258504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:47 compute-0 python3.9[258506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:47 compute-0 sudo[258504]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:47 compute-0 sudo[258582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luhxpovodonkefaqmswwloaqheqirvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432726.3762896-224-75698192354358/AnsiballZ_file.py'
Oct 02 19:18:47 compute-0 sudo[258582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:47 compute-0 python3.9[258584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:47 compute-0 sudo[258582]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:48 compute-0 sudo[258734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmgqsmtvhrtbejtouyucyimvbjowqnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432728.092816-224-101455412172608/AnsiballZ_stat.py'
Oct 02 19:18:48 compute-0 sudo[258734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:48 compute-0 ceph-mon[191910]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:48 compute-0 python3.9[258736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:48 compute-0 sudo[258734]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:49 compute-0 sudo[258812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrzucnktplbcqumkxvnedotbicmyyjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432728.092816-224-101455412172608/AnsiballZ_file.py'
Oct 02 19:18:49 compute-0 sudo[258812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:49 compute-0 python3.9[258814]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:49 compute-0 sudo[258812]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:50 compute-0 sudo[258991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqtezhzuredsnoddovcxkgwvksivekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432729.9625041-259-85573018762205/AnsiballZ_file.py'
Oct 02 19:18:50 compute-0 sudo[258991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:50 compute-0 podman[258939]: 2025-10-02 19:18:50.554046907 +0000 UTC m=+0.129838877 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:18:50 compute-0 podman[258940]: 2025-10-02 19:18:50.586142634 +0000 UTC m=+0.155570426 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
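Annotation: the ovn_controller config above shows where the certificates staged earlier end up: /var/lib/openstack/certs/ovn/default/{ca.crt,tls.crt,tls.key} are bind-mounted into the container as ovndbca.crt, ovndb.crt, and ovndb.key. Before a service consumes such a pair, it can be worth confirming that the cert and key actually match; a quick sketch, assuming the openssl CLI is on PATH (works for RSA and EC keys alike):

```python
import subprocess

def pem_pubkey(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def pair_matches(crt, key):
    """True when tls.crt and tls.key carry the same public key."""
    from_crt = pem_pubkey(["openssl", "x509", "-pubkey", "-noout", "-in", crt])
    from_key = pem_pubkey(["openssl", "pkey", "-pubout", "-in", key])
    return from_crt == from_key

base = "/var/lib/openstack/certs/ovn/default"
print(pair_matches(f"{base}/tls.crt", f"{base}/tls.key"))
```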
Oct 02 19:18:50 compute-0 python3.9[259002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:50 compute-0 sudo[258991]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:50 compute-0 ceph-mon[191910]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:51 compute-0 sudo[259159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzrrutfcetzvxkvbgxxrogwddjppwcda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432731.0088584-259-264816732242028/AnsiballZ_file.py'
Oct 02 19:18:51 compute-0 sudo[259159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:51 compute-0 python3.9[259161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:51 compute-0 sudo[259159]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:52 compute-0 sudo[259311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmrtbomwyrpqxfpkpfwmnlrjvpwxfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432732.2255626-274-194682134489639/AnsiballZ_stat.py'
Oct 02 19:18:52 compute-0 sudo[259311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:52 compute-0 ceph-mon[191910]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.928829) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732928854, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 706, "num_deletes": 251, "total_data_size": 872807, "memory_usage": 885640, "flush_reason": "Manual Compaction"}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732935481, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 864955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8990, "largest_seqno": 9695, "table_properties": {"data_size": 861305, "index_size": 1494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7947, "raw_average_key_size": 18, "raw_value_size": 853967, "raw_average_value_size": 1990, "num_data_blocks": 69, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432675, "oldest_key_time": 1759432675, "file_creation_time": 1759432732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6689 microseconds, and 2761 cpu microseconds.
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.935520) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 864955 bytes OK
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.935531) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.937314) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.937324) EVENT_LOG_v1 {"time_micros": 1759432732937321, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.937335) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 869150, prev total WAL file size 869150, number of live WAL files 2.
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.938348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(844KB)], [23(6655KB)]
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732938493, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7680006, "oldest_snapshot_seqno": -1}
Oct 02 19:18:52 compute-0 python3.9[259313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3286 keys, 6105562 bytes, temperature: kUnknown
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732983486, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6105562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6081593, "index_size": 14644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79673, "raw_average_key_size": 24, "raw_value_size": 6020140, "raw_average_value_size": 1832, "num_data_blocks": 639, "num_entries": 3286, "num_filter_entries": 3286, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759432732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.983695) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6105562 bytes
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.985824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 135.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.9) write-amplify(7.1) OK, records in: 3800, records dropped: 514 output_compression: NoCompression
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.985862) EVENT_LOG_v1 {"time_micros": 1759432732985846, "job": 8, "event": "compaction_finished", "compaction_time_micros": 45049, "compaction_time_cpu_micros": 29908, "output_level": 6, "num_output_files": 1, "total_output_size": 6105562, "num_input_records": 3800, "num_output_records": 3286, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732986164, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432732987289, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.937720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.987516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.987522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.987524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.987526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:18:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:18:52.987528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
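The flush (job 7) and manual compaction (job 8) figures above are internally consistent and can be cross-checked from the EVENT_LOG_v1 records alone; a quick check using only values that appear in this log:

    # job 8: table #25 (L0) + table #23 (L6) compacted into table #26
    l0_input    = 864955     # file_size of table #25, flushed by job 7
    input_total = 7680006    # "input_data_size" of job 8 (both input files)
    output      = 6105562    # file_size of table #26
    micros      = 45049      # "compaction_time_micros"

    print(round(output / l0_input, 1))                  # 7.1  -> write-amplify(7.1)
    print(round((input_total + output) / l0_input, 1))  # 15.9 -> read-write-amplify(15.9)
    print(round(input_total / micros, 1))               # 170.5 -> "MB/sec: 170.5 rd"
    print(round(output / micros, 1))                    # 135.5 -> "135.5 wr"

(bytes per microsecond equals MB/s with MB = 10^6 bytes). Likewise 3800 input records minus the 514 dropped (tombstones and the keys they shadow can be discarded once everything reaches the bottom level) gives the 3286 keys of table #26.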
Oct 02 19:18:53 compute-0 sudo[259311]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:53 compute-0 sudo[259389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnkwlffaxvruxiadmlvdehxnhnxauxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432732.2255626-274-194682134489639/AnsiballZ_file.py'
Oct 02 19:18:53 compute-0 sudo[259389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:53 compute-0 python3.9[259391]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:53 compute-0 sudo[259389]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:54 compute-0 sudo[259541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhpqvuvinboaqyickqyackalrqirgivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432733.9605746-274-207846196138632/AnsiballZ_stat.py'
Oct 02 19:18:54 compute-0 sudo[259541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:54 compute-0 python3.9[259543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:54 compute-0 sudo[259541]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:54 compute-0 ceph-mon[191910]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:55 compute-0 podman[259593]: 2025-10-02 19:18:55.219675763 +0000 UTC m=+0.114148363 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct 02 19:18:55 compute-0 sudo[259650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotabqorkpgtneiogdstcyssoltoazrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432733.9605746-274-207846196138632/AnsiballZ_file.py'
Oct 02 19:18:55 compute-0 sudo[259650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:55 compute-0 podman[259594]: 2025-10-02 19:18:55.23508789 +0000 UTC m=+0.119616087 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:18:55 compute-0 python3.9[259662]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:55 compute-0 sudo[259650]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:56 compute-0 sudo[259812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswfpcguagpeiscuhrwvjgzugispqqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432735.8461618-274-154777492284897/AnsiballZ_stat.py'
Oct 02 19:18:56 compute-0 sudo[259812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:56 compute-0 python3.9[259814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:18:56 compute-0 sudo[259812]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:56 compute-0 ceph-mon[191910]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:57 compute-0 sudo[259890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kequgvqqpowksrqhsunldufnjuuyijfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432735.8461618-274-154777492284897/AnsiballZ_file.py'
Oct 02 19:18:57 compute-0 sudo[259890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:57 compute-0 podman[259892]: 2025-10-02 19:18:57.258194241 +0000 UTC m=+0.126370506 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.29.0)
Oct 02 19:18:57 compute-0 python3.9[259893]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:18:57 compute-0 sudo[259890]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:58 compute-0 sudo[260061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkijrxdrpivovanibymmzsslmzeeognm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432738.2083926-325-71343408147122/AnsiballZ_file.py'
Oct 02 19:18:58 compute-0 sudo[260061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:18:58 compute-0 python3.9[260063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:18:58 compute-0 ceph-mon[191910]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:58 compute-0 sudo[260061]: pam_unix(sudo:session): session closed for user root
Oct 02 19:18:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:18:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:18:59 compute-0 podman[157186]: time="2025-10-02T19:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:18:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:18:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
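The two GET requests above are a client polling the libpod REST API served by the podman[157186] process. The same endpoint can be queried over podman's unix socket with only the standard library; a minimal sketch, assuming the default rootful socket path /run/podman/podman.sock (the socket path itself is not shown in this log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])   # e.g. ['ovn_controller'] running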
Oct 02 19:18:59 compute-0 sudo[260213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniakzcjzbagxldubpeutpkbguogujcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432739.2874868-333-5727736692675/AnsiballZ_stat.py'
Oct 02 19:18:59 compute-0 sudo[260213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:00 compute-0 python3.9[260215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:00 compute-0 sudo[260213]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:00 compute-0 sudo[260291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtdzjuxiftppwwfdiwinnmbedhfnvau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432739.2874868-333-5727736692675/AnsiballZ_file.py'
Oct 02 19:19:00 compute-0 sudo[260291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:00 compute-0 python3.9[260293]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:00 compute-0 sudo[260291]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:00 compute-0 ceph-mon[191910]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:01 compute-0 openstack_network_exporter[159337]: ERROR   19:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:01 compute-0 openstack_network_exporter[159337]: ERROR   19:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:01 compute-0 openstack_network_exporter[159337]: ERROR   19:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:19:01 compute-0 openstack_network_exporter[159337]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:19:01 compute-0 openstack_network_exporter[159337]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
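These exporter ERRORs are expected on a compute node: openstack_network_exporter locates each OVS/OVN daemon through its appctl control socket (<daemon>.<pid>.ctl) and skips the probe with the errors above when the socket is absent. A quick way to see which daemons are reachable, assuming the conventional runtime directories (paths inferred from the exporter's volume mounts logged earlier, not stated by the exporter itself):

    from glob import glob

    # ovn-northd runs on the controllers, not on this compute node,
    # hence "no control socket files found for ovn-northd"
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/ovn/ovn-controller.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl"):
        print(pattern, "->", glob(pattern))

The two dpif-netdev errors are the same story one layer up: ovs-vswitchd answered the call, but with no userspace (PMD) datapath configured there is nothing for pmd-perf-show or pmd-rxq-show to report.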
Oct 02 19:19:01 compute-0 sudo[260443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkiisalckoviyjexkqrwrqlrhaddrdet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432741.1241543-346-46084356329987/AnsiballZ_file.py'
Oct 02 19:19:01 compute-0 sudo[260443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:01 compute-0 python3.9[260445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:01 compute-0 sudo[260443]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:02 compute-0 sudo[260595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukuufcmzwhxohbndgiprwpgxknntzxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432742.0935204-354-223471219460676/AnsiballZ_stat.py'
Oct 02 19:19:02 compute-0 sudo[260595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:02 compute-0 python3.9[260597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:02 compute-0 sudo[260595]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:02 compute-0 ceph-mon[191910]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:03 compute-0 sudo[260673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgkqjpoenctgbeybqgfdfyntrfdmhnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432742.0935204-354-223471219460676/AnsiballZ_file.py'
Oct 02 19:19:03 compute-0 sudo[260673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:19:03
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:19:03 compute-0 python3.9[260675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:03 compute-0 sudo[260673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:19:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:04 compute-0 sudo[260825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctfylhmuraeftqnazhvlvkpbewhsbihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432743.8708155-367-210320533050135/AnsiballZ_file.py'
Oct 02 19:19:04 compute-0 sudo[260825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:04 compute-0 python3.9[260827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:04 compute-0 sudo[260825]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:05 compute-0 ceph-mon[191910]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:05 compute-0 sudo[260977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlykzenkptsgyffactiiogcyoypkdvzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432745.0673604-375-142837549514591/AnsiballZ_stat.py'
Oct 02 19:19:05 compute-0 sudo[260977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:05 compute-0 python3.9[260979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:05 compute-0 sudo[260977]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:06 compute-0 sudo[261100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qziggltsapjjwkgotyprllqouejdkcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432745.0673604-375-142837549514591/AnsiballZ_copy.py'
Oct 02 19:19:06 compute-0 sudo[261100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:06 compute-0 python3.9[261102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432745.0673604-375-142837549514591/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:06 compute-0 sudo[261100]: pam_unix(sudo:session): session closed for user root
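This stat-then-copy pair is Ansible's standard idempotence sequence for ansible.builtin.copy: the controller first stats the remote file with a sha1 checksum, and only when the checksum differs (or the file is missing) does a real copy run; otherwise only ansible.legacy.file adjusts owner/mode, as in the tls.crt/ca.crt/tls.key tasks earlier. The decision reduces to a hash comparison; a minimal sketch of the check, reusing the bundle path and checksum logged above:

    import hashlib
    from pathlib import Path

    def sha1_of(path: Path) -> str:
        h = hashlib.sha1()
        with path.open("rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    remote = Path("/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem")
    expected = "81b905e41eda5af3080e544e4dc3bafb229246e6"  # from the copy above
    if not remote.exists() or sha1_of(remote) != expected:
        print("content differs -> ansible.legacy.copy ships the file")
    else:
        print("checksum matches -> ansible.legacy.file only fixes owner/mode/SELinux")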
Oct 02 19:19:07 compute-0 ceph-mon[191910]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:07 compute-0 sudo[261252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfpxtztmmkihssrmznzgswpnejljjnog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432747.0562763-391-135639198149438/AnsiballZ_file.py'
Oct 02 19:19:07 compute-0 sudo[261252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:07 compute-0 python3.9[261254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:07 compute-0 sudo[261252]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:08 compute-0 sudo[261404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esahhzaoqlroxwznlolakaidyrtlxcso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432748.1191335-399-103054017797082/AnsiballZ_stat.py'
Oct 02 19:19:08 compute-0 sudo[261404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:08 compute-0 python3.9[261406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:08 compute-0 sudo[261404]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:09 compute-0 sudo[261409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:09 compute-0 ceph-mon[191910]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:09 compute-0 sudo[261409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:09 compute-0 sudo[261409]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:09 compute-0 sudo[261457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:19:09 compute-0 sudo[261457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:09 compute-0 sudo[261457]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:09 compute-0 sudo[261505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:09 compute-0 sudo[261505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:09 compute-0 sudo[261505]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:09 compute-0 sudo[261557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohzairbqdwgtxyscmqlqabysjgfcockd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432748.1191335-399-103054017797082/AnsiballZ_file.py'
Oct 02 19:19:09 compute-0 sudo[261557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:09 compute-0 sudo[261558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:19:09 compute-0 sudo[261558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:09 compute-0 python3.9[261564]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:09 compute-0 sudo[261557]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:10 compute-0 sudo[261558]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:19:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:10 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 119f5c15-4e62-4b50-9bd9-11f0f5231f3e does not exist
Oct 02 19:19:10 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 893d0d3f-6439-45dd-9d52-592450e66962 does not exist
Oct 02 19:19:10 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 27a836ce-acba-4c56-aee2-15a8fd81e743 does not exist
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:19:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
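The mon_command dispatches above are cephadm's mgr module refreshing its state (minimal conf, client.admin and bootstrap-osd keys, destroyed-OSD tree) ahead of the ceph-volume run that follows. The same commands can be issued programmatically through librados; a sketch using the python3-rados binding, assuming a reachable cluster and a local admin keyring:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # identical to the audited {"prefix": "config generate-minimal-conf"} above
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())
    cluster.shutdown()

Each such call lands in the mon audit channel exactly like the entries above, tagged with the caller's entity name.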
Oct 02 19:19:10 compute-0 sudo[261697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:10 compute-0 sudo[261697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:10 compute-0 sudo[261697]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:10 compute-0 sudo[261750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:19:10 compute-0 sudo[261750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:10 compute-0 sudo[261750]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:10 compute-0 sudo[261826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngksqatsqznomuaztfzfxvcexqmcxzox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432749.8977613-412-80596630477031/AnsiballZ_file.py'
Oct 02 19:19:10 compute-0 sudo[261826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:10 compute-0 sudo[261807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:10 compute-0 sudo[261807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:10 compute-0 sudo[261807]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:10 compute-0 sudo[261841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:19:10 compute-0 sudo[261841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:10 compute-0 python3.9[261838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:10 compute-0 sudo[261826]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:11 compute-0 ceph-mon[191910]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:19:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:19:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:19:11 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.135607748 +0000 UTC m=+0.080250009 container create 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:19:11 compute-0 systemd[1]: Started libpod-conmon-20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1.scope.
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.110603268 +0000 UTC m=+0.055245539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.25241913 +0000 UTC m=+0.197061381 container init 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.26225226 +0000 UTC m=+0.206894501 container start 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.266365108 +0000 UTC m=+0.211007389 container attach 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:19:11 compute-0 hardcore_benz[262038]: 167 167
Oct 02 19:19:11 compute-0 systemd[1]: libpod-20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1.scope: Deactivated successfully.
Oct 02 19:19:11 compute-0 conmon[262038]: conmon 20fce5e0162fde8d6147 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1.scope/container/memory.events
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.27098515 +0000 UTC m=+0.215627401 container died 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ff7ef1e313c6d30ae3b478d5752c36f49ce57becaeb30735c5008f7e74fad1-merged.mount: Deactivated successfully.
Oct 02 19:19:11 compute-0 sudo[262085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqgyvluecqmvryjwiuitedjsnbtstofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432750.907634-420-71884812335003/AnsiballZ_stat.py'
Oct 02 19:19:11 compute-0 sudo[262085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:11 compute-0 podman[261981]: 2025-10-02 19:19:11.388362017 +0000 UTC m=+0.333004268 container remove 20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:19:11 compute-0 systemd[1]: libpod-conmon-20fce5e0162fde8d61474c0d9821d1385fb11bff65845211f13cb768260074e1.scope: Deactivated successfully.
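The hardcore_benz container above is cephadm's short-lived ceph-volume helper: podman logs one record per lifecycle step (create, init, start, attach, died, remove) and the whole run lasts well under a second. When auditing such runs from a journal dump, the per-container event sequence can be reassembled with a regex; a sketch, assuming the journal has been exported to a text file (the filename is hypothetical):

    import re
    from collections import defaultdict

    EVENT = re.compile(
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"[0-9a-f]{64} \(image=[^,]+, name=(?P<name>[^,)]+)"
    )

    lifecycles = defaultdict(list)
    with open("compute-0.journal.txt") as fh:  # hypothetical export of this log
        for line in fh:
            m = EVENT.search(line)
            if m:
                lifecycles[m["name"]].append(m["event"])

    # lifecycles["hardcore_benz"] ->
    #     ['create', 'init', 'start', 'attach', 'died', 'remove']

Note the out-of-order timestamps (the image pull at m=+0.055 is logged after the create at m=+0.080): podman emits these records as the API calls return, so sorting by the embedded monotonic offset is safer than relying on journal order.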
Oct 02 19:19:11 compute-0 python3.9[262087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:11 compute-0 sudo[262085]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:11 compute-0 podman[262095]: 2025-10-02 19:19:11.704324984 +0000 UTC m=+0.137097908 container create a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:19:11 compute-0 podman[262095]: 2025-10-02 19:19:11.620250936 +0000 UTC m=+0.053023910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:11 compute-0 systemd[1]: Started libpod-conmon-a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592.scope.
Oct 02 19:19:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
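The run of xfs warnings above is the kernel noting that these overlay mounts carry 32-bit inode timestamps, which top out at 0x7fffffff seconds after the epoch. A minimal Python sketch of what that limit means in calendar time (only the 0x7fffffff constant comes from the log; the rest is illustration):

    from datetime import datetime, timezone

    XFS_TS_LIMIT = 0x7fffffff  # limit reported by the kernel lines above

    # signed 32-bit time_t rolls over here: 2038-01-19 03:14:07 UTC
    print(datetime.fromtimestamp(XFS_TS_LIMIT, tz=timezone.utc))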
Oct 02 19:19:11 compute-0 podman[262095]: 2025-10-02 19:19:11.913808562 +0000 UTC m=+0.346581526 container init a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:19:11 compute-0 podman[262095]: 2025-10-02 19:19:11.926653911 +0000 UTC m=+0.359426805 container start a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:19:11 compute-0 podman[262095]: 2025-10-02 19:19:11.955153903 +0000 UTC m=+0.387926797 container attach a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:19:12 compute-0 sudo[262191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pevuitvlwuxapgppejaowusiggasnjge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432750.907634-420-71884812335003/AnsiballZ_file.py'
Oct 02 19:19:12 compute-0 sudo[262191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:19:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
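The pg_autoscaler pass above applies one rule per pool: pg target = (share of raw capacity) x bias x (PG budget for the cluster), then quantizes to a power of two. A rough Python reconstruction using the ratios and biases logged above; mon_target_pg_per_osd=100 and the 3-OSD count are assumptions, but they reproduce the logged targets exactly:

    # Hedged reconstruction of the autoscaler arithmetic in the lines above.
    TARGET_PG_PER_OSD = 100   # assumed Ceph default mon_target_pg_per_osd
    NUM_OSDS = 3              # assumed OSD count behind the 60 GiB of capacity

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
    }

    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS)
    # .mgr prints 0.0021557249951162337, matching its 'pg target' line above;
    # the quantization to 1/16/32 PGs is a separate step inside the module.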
Oct 02 19:19:12 compute-0 python3.9[262193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:12 compute-0 sudo[262191]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 sudo[262362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gawekevforohzclzrzbinipglybyhcev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432752.5865605-433-37426452199352/AnsiballZ_file.py'
Oct 02 19:19:13 compute-0 sudo[262362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:13 compute-0 ceph-mon[191910]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:13 compute-0 charming_lovelace[262136]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:19:13 compute-0 charming_lovelace[262136]: --> relative data size: 1.0
Oct 02 19:19:13 compute-0 charming_lovelace[262136]: --> All data devices are unavailable
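charming_lovelace is a ceph-volume batch probe, and "All data devices are unavailable" is expected here rather than an error: all three LVM data devices already carry the OSD LVs enumerated later in this log, so there is nothing new for the batch run to consume. A sketch of the same check from the host side via lsblk's JSON output (device layout is whatever the host reports; nothing here is taken from ceph-volume itself):

    import json
    import subprocess

    # Devices whose children are LVs are already claimed, which is what makes
    # them "unavailable" to a new ceph-volume batch run.
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,TYPE,FSTYPE"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out)["blockdevices"]:
        for child in dev.get("children", []):
            if child["type"] == "lvm":
                print(f"{dev['name']}: already an LVM member ({child['name']})")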
Oct 02 19:19:13 compute-0 systemd[1]: libpod-a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592.scope: Deactivated successfully.
Oct 02 19:19:13 compute-0 systemd[1]: libpod-a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592.scope: Consumed 1.211s CPU time.
Oct 02 19:19:13 compute-0 podman[262095]: 2025-10-02 19:19:13.212964571 +0000 UTC m=+1.645737485 container died a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7789f79659f35076cfbcda3493e5af61979709855241768a5c9c63899c9d4b3-merged.mount: Deactivated successfully.
Oct 02 19:19:13 compute-0 podman[262095]: 2025-10-02 19:19:13.321991368 +0000 UTC m=+1.754764272 container remove a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:19:13 compute-0 systemd[1]: libpod-conmon-a1d609d9a67f1db3334d93999b2c584d86555354f08428086659a567cd4d2592.scope: Deactivated successfully.
Oct 02 19:19:13 compute-0 python3.9[262365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:13 compute-0 sudo[261841]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 sudo[262362]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 sudo[262384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:13 compute-0 sudo[262384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:13 compute-0 sudo[262384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 sudo[262432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:19:13 compute-0 sudo[262432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:13 compute-0 sudo[262432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 sudo[262466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:13 compute-0 sudo[262466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:13 compute-0 sudo[262466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:13 compute-0 sudo[262518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:19:13 compute-0 sudo[262518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:14 compute-0 sudo[262657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzglhzyxvxwtguwuibdafnxbuxomgtee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432753.6409168-441-152694783470493/AnsiballZ_stat.py'
Oct 02 19:19:14 compute-0 sudo[262657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.312317829 +0000 UTC m=+0.067768569 container create 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:19:14 compute-0 python3.9[262666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:14 compute-0 sudo[262657]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.283062507 +0000 UTC m=+0.038513307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:14 compute-0 systemd[1]: Started libpod-conmon-08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3.scope.
Oct 02 19:19:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.436478615 +0000 UTC m=+0.191929345 container init 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.451024349 +0000 UTC m=+0.206475079 container start 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:19:14 compute-0 magical_villani[262690]: 167 167
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.456088262 +0000 UTC m=+0.211539002 container attach 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:19:14 compute-0 systemd[1]: libpod-08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3.scope: Deactivated successfully.
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.457679444 +0000 UTC m=+0.213130144 container died 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e76ff5e9d8d245497ec932f50638e16c329d00dfba8fd689d912c8d64627e90b-merged.mount: Deactivated successfully.
Oct 02 19:19:14 compute-0 podman[262674]: 2025-10-02 19:19:14.511285209 +0000 UTC m=+0.266735919 container remove 08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:19:14 compute-0 systemd[1]: libpod-conmon-08f007d1a41b775b2f295b7da5c54ee4933b3de093c55dbd26f438fcbb425fa3.scope: Deactivated successfully.
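The "167 167" printed by magical_villani (and by vigilant_margulis below) looks like cephadm probing which uid/gid owns the ceph data paths inside the image; 167:167 is the ceph user and group in these container images. A hedged sketch of an equivalent probe, since the exact command cephadm ran is not shown in this log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image which uid/gid owns /var/lib/ceph; "167 167" would match
    # the container output captured above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())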
Oct 02 19:19:14 compute-0 podman[262759]: 2025-10-02 19:19:14.728652193 +0000 UTC m=+0.082711643 container create 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 19:19:14 compute-0 systemd[1]: Started libpod-conmon-11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974.scope.
Oct 02 19:19:14 compute-0 podman[262759]: 2025-10-02 19:19:14.705997345 +0000 UTC m=+0.060056825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2af87b8578edc28ad11c44e03b0efe85c34ccbf8dd80cd71c135c8255a29336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2af87b8578edc28ad11c44e03b0efe85c34ccbf8dd80cd71c135c8255a29336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2af87b8578edc28ad11c44e03b0efe85c34ccbf8dd80cd71c135c8255a29336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2af87b8578edc28ad11c44e03b0efe85c34ccbf8dd80cd71c135c8255a29336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:14 compute-0 podman[262759]: 2025-10-02 19:19:14.840683399 +0000 UTC m=+0.194742859 container init 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:19:14 compute-0 podman[262759]: 2025-10-02 19:19:14.852216203 +0000 UTC m=+0.206275633 container start 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:19:14 compute-0 podman[262759]: 2025-10-02 19:19:14.8562484 +0000 UTC m=+0.210307830 container attach 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:19:15 compute-0 ceph-mon[191910]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:15 compute-0 sudo[262853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpiyusophzynfueyajzbdjmrtckfacfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432753.6409168-441-152694783470493/AnsiballZ_copy.py'
Oct 02 19:19:15 compute-0 sudo[262853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:15 compute-0 python3.9[262855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432753.6409168-441-152694783470493/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81b905e41eda5af3080e544e4dc3bafb229246e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:15 compute-0 sudo[262853]: pam_unix(sudo:session): session closed for user root
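The stat/copy pair above is Ansible's idempotency check: the remote file is hashed with SHA-1 (checksum_algorithm=sha1) and only rewritten when the digest differs from the one shipped with the task, here 81b905e41eda5af3080e544e4dc3bafb229246e6. The same comparison in plain Python, with the path and checksum taken from the log lines above:

    import hashlib

    EXPECTED = "81b905e41eda5af3080e544e4dc3bafb229246e6"  # from the copy task above
    path = "/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem"

    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()

    print("in sync" if digest == EXPECTED else "would be rewritten")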
Oct 02 19:19:15 compute-0 cool_chaum[262783]: {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     "0": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "devices": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "/dev/loop3"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             ],
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_name": "ceph_lv0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_size": "21470642176",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "name": "ceph_lv0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "tags": {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_name": "ceph",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.crush_device_class": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.encrypted": "0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_id": "0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.vdo": "0"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             },
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "vg_name": "ceph_vg0"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         }
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     ],
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     "1": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "devices": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "/dev/loop4"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             ],
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_name": "ceph_lv1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_size": "21470642176",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "name": "ceph_lv1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "tags": {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_name": "ceph",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.crush_device_class": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.encrypted": "0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_id": "1",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.vdo": "0"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             },
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "vg_name": "ceph_vg1"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         }
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     ],
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     "2": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "devices": [
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "/dev/loop5"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             ],
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_name": "ceph_lv2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_size": "21470642176",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "name": "ceph_lv2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "tags": {
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.cluster_name": "ceph",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.crush_device_class": "",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.encrypted": "0",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osd_id": "2",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:                 "ceph.vdo": "0"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             },
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "type": "block",
Oct 02 19:19:15 compute-0 cool_chaum[262783]:             "vg_name": "ceph_vg2"
Oct 02 19:19:15 compute-0 cool_chaum[262783]:         }
Oct 02 19:19:15 compute-0 cool_chaum[262783]:     ]
Oct 02 19:19:15 compute-0 cool_chaum[262783]: }
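The JSON block above is the output of the ceph-volume "lvm list --format json" call launched at 19:19:13, relayed through the cool_chaum container: a map of OSD id to the backing LV, with cluster and OSD fsids carried as LV tags. A short parsing sketch that reduces it to an osd_id -> device table (the JSON shape is exactly what is logged; the file name is a hypothetical capture):

    import json

    raw = open("lvm_list.json").read()  # hypothetical capture of the JSON above

    osds = {}
    for osd_id, entries in json.loads(raw).items():
        for lv in entries:
            if lv["type"] == "block":
                osds[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])

    for osd_id, (lv_path, osd_fsid, devices) in sorted(osds.items()):
        print(osd_id, lv_path, devices)
    # 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'], and likewise for OSDs 1 and 2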
Oct 02 19:19:15 compute-0 systemd[1]: libpod-11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974.scope: Deactivated successfully.
Oct 02 19:19:15 compute-0 podman[262759]: 2025-10-02 19:19:15.686003174 +0000 UTC m=+1.040062654 container died 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:19:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2af87b8578edc28ad11c44e03b0efe85c34ccbf8dd80cd71c135c8255a29336-merged.mount: Deactivated successfully.
Oct 02 19:19:15 compute-0 podman[262759]: 2025-10-02 19:19:15.785941181 +0000 UTC m=+1.140000641 container remove 11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:19:15 compute-0 systemd[1]: libpod-conmon-11e034149a3fef8fc41352a28e9fb131736b233c1857777ed1fc59ea6dd13974.scope: Deactivated successfully.
Oct 02 19:19:15 compute-0 sudo[262518]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:15 compute-0 sudo[262948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:15 compute-0 sudo[262948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:15 compute-0 sudo[262948]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:16 compute-0 sudo[262996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:19:16 compute-0 sudo[262996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:16 compute-0 sudo[262996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:16 compute-0 sudo[263120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tluohbscwdvgzkjldsfouflpyqnobmwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432755.722963-457-240494802193259/AnsiballZ_file.py'
Oct 02 19:19:16 compute-0 sudo[263120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:16 compute-0 sudo[263064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:16 compute-0 sudo[263064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:16 compute-0 sudo[263064]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:16 compute-0 podman[263044]: 2025-10-02 19:19:16.307448781 +0000 UTC m=+0.116880705 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:19:16 compute-0 podman[263045]: 2025-10-02 19:19:16.312575667 +0000 UTC m=+0.121581770 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
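The two health_status=healthy events are podman's built-in healthchecks firing on their timer: each container's configured test ('/openstack/healthcheck compute' and '/openstack/healthcheck podman_exporter' in the config_data above) is run and the failure streak tracked (health_failing_streak=0). The same check can be driven by hand; a minimal sketch using the podman CLI, with the container name taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test once;
    # exit status 0 corresponds to the health_status=healthy events above.
    result = subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_compute"])
    print("healthy" if result.returncode == 0 else "unhealthy")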
Oct 02 19:19:16 compute-0 sudo[263140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:19:16 compute-0 sudo[263140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:16 compute-0 python3.9[263135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:16 compute-0 sudo[263120]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:16 compute-0 podman[263251]: 2025-10-02 19:19:16.91986429 +0000 UTC m=+0.072039171 container create a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 02 19:19:16 compute-0 systemd[1]: Started libpod-conmon-a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3.scope.
Oct 02 19:19:16 compute-0 podman[263251]: 2025-10-02 19:19:16.891839641 +0000 UTC m=+0.044014522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:17 compute-0 podman[263251]: 2025-10-02 19:19:17.039074226 +0000 UTC m=+0.191249167 container init a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:19:17 compute-0 podman[263251]: 2025-10-02 19:19:17.057846931 +0000 UTC m=+0.210021812 container start a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:19:17 compute-0 podman[263251]: 2025-10-02 19:19:17.065157604 +0000 UTC m=+0.217332555 container attach a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:19:17 compute-0 vigilant_margulis[263300]: 167 167
Oct 02 19:19:17 compute-0 systemd[1]: libpod-a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3.scope: Deactivated successfully.
Oct 02 19:19:17 compute-0 podman[263251]: 2025-10-02 19:19:17.069828717 +0000 UTC m=+0.222003588 container died a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:19:17 compute-0 ceph-mon[191910]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb3225e96bf62218cb8c641601a2cfe71c746375392a8e02e58b839026c28826-merged.mount: Deactivated successfully.
Oct 02 19:19:17 compute-0 podman[263251]: 2025-10-02 19:19:17.147040065 +0000 UTC m=+0.299214916 container remove a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:19:17 compute-0 systemd[1]: libpod-conmon-a9436e4193e14e2927d9533cc0d9d5d1f207896bb6db918bad0f140e46775cf3.scope: Deactivated successfully.
Oct 02 19:19:17 compute-0 sudo[263387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynneihwrjemxiqyszntpvsgeggdnlvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432756.8148408-465-31691041716781/AnsiballZ_stat.py'
Oct 02 19:19:17 compute-0 sudo[263387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:17 compute-0 podman[263395]: 2025-10-02 19:19:17.4011746 +0000 UTC m=+0.060930528 container create d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:19:17 compute-0 systemd[1]: Started libpod-conmon-d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014.scope.
Oct 02 19:19:17 compute-0 podman[263395]: 2025-10-02 19:19:17.382865337 +0000 UTC m=+0.042621285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:19:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d641e6e5b46cf0bdc09a36b1c718c4d3d7a1867c2add7bae2aa7bf40bde53c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d641e6e5b46cf0bdc09a36b1c718c4d3d7a1867c2add7bae2aa7bf40bde53c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d641e6e5b46cf0bdc09a36b1c718c4d3d7a1867c2add7bae2aa7bf40bde53c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d641e6e5b46cf0bdc09a36b1c718c4d3d7a1867c2add7bae2aa7bf40bde53c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:17 compute-0 podman[263395]: 2025-10-02 19:19:17.559080437 +0000 UTC m=+0.218836445 container init d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:19:17 compute-0 podman[263395]: 2025-10-02 19:19:17.590142036 +0000 UTC m=+0.249898004 container start d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:19:17 compute-0 podman[263395]: 2025-10-02 19:19:17.59784495 +0000 UTC m=+0.257600918 container attach d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:19:17 compute-0 python3.9[263391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:17 compute-0 sudo[263387]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
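The pgmap digests that ceph-mgr and ceph-mon repeat every couple of seconds throughout this log can be reproduced on demand. A minimal sketch, assuming the ceph CLI and an admin keyring are available on this host; the "pgmap" key is part of the standard `ceph -s` JSON output:

    import json, subprocess

    # Ask the cluster for its status as JSON and pull out the pgmap summary.
    out = subprocess.run(["ceph", "-s", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    pgmap = json.loads(out)["pgmap"]
    print(pgmap["num_pgs"], "pgs,", pgmap["bytes_used"], "bytes used of", pgmap["bytes_total"])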
Oct 02 19:19:18 compute-0 sudo[263492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xocqumnfpflirtyqnsridysfsrvnsqco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432756.8148408-465-31691041716781/AnsiballZ_file.py'
Oct 02 19:19:18 compute-0 sudo[263492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:18 compute-0 python3.9[263494]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:18 compute-0 sudo[263492]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]: {
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_id": 1,
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "type": "bluestore"
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     },
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_id": 2,
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "type": "bluestore"
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     },
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_id": 0,
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:         "type": "bluestore"
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]:     }
Oct 02 19:19:18 compute-0 vigilant_haslett[263412]: }
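The JSON block the one-off vigilant_haslett ceph container just printed is keyed by OSD UUID, one entry per BlueStore OSD on this host; the shape matches `ceph-volume raw list` output (an assumption — the exact command line is not in the log). A short sketch mapping osd_id to its LVM device from that exact payload (abbreviated to one entry):

    import json

    # Payload copied verbatim from the container output above.
    inventory = json.loads("""
    {
        "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
            "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
            "type": "bluestore"
        }
    }
    """)
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")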
Oct 02 19:19:18 compute-0 systemd[1]: libpod-d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014.scope: Deactivated successfully.
Oct 02 19:19:18 compute-0 systemd[1]: libpod-d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014.scope: Consumed 1.192s CPU time.
Oct 02 19:19:18 compute-0 podman[263395]: 2025-10-02 19:19:18.782063565 +0000 UTC m=+1.441819503 container died d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d641e6e5b46cf0bdc09a36b1c718c4d3d7a1867c2add7bae2aa7bf40bde53c4-merged.mount: Deactivated successfully.
Oct 02 19:19:18 compute-0 podman[263395]: 2025-10-02 19:19:18.862723624 +0000 UTC m=+1.522479552 container remove d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:19:18 compute-0 systemd[1]: libpod-conmon-d11fe49acb2b8b86733df32b2b7e144d87899da6f3ffad57a63da55191777014.scope: Deactivated successfully.
Oct 02 19:19:18 compute-0 sudo[263140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:19:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:19:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev cfc8619b-91dc-44bd-8e2c-dd13faf4b8e6 does not exist
Oct 02 19:19:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 74f2efa8-e870-464a-9ba3-7502d8413fdc does not exist
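Right after the device scan, the mgr persists the per-host inventory under a config-key (the two `config-key set` mon_commands above; the stored values themselves are not logged). Reading one back, again assuming admin credentials on the node:

    import subprocess

    # cephadm caches the device scan under mgr/cephadm/host.<hostname>.devices.<n>.
    key = "mgr/cephadm/host.compute-0.devices.0"
    val = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    print(val[:200])  # JSON blob describing compute-0's devices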
Oct 02 19:19:19 compute-0 sudo[263631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:19:19 compute-0 sudo[263631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:19 compute-0 sudo[263631]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:19 compute-0 sudo[263674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:19:19 compute-0 sudo[263674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:19:19 compute-0 sudo[263674]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:19 compute-0 ceph-mon[191910]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:19:19 compute-0 sudo[263736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvmioestgvyjcuzwpdkjgoeqqdrgmdbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432758.7312071-478-125687876379750/AnsiballZ_file.py'
Oct 02 19:19:19 compute-0 sudo[263736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:19 compute-0 python3.9[263738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:19 compute-0 sudo[263736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:20 compute-0 sudo[263889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujmdbqsvxrkzrqnwqggyedngfqihmqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432759.81187-486-221309747912363/AnsiballZ_stat.py'
Oct 02 19:19:20 compute-0 sudo[263889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:20 compute-0 python3.9[263891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:20 compute-0 sudo[263889]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:21 compute-0 sudo[263998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyukoepwyuytvvhswnhalpqxcbemghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432759.81187-486-221309747912363/AnsiballZ_file.py'
Oct 02 19:19:21 compute-0 sudo[263998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:21 compute-0 podman[263941]: 2025-10-02 19:19:21.142827187 +0000 UTC m=+0.154603790 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:19:21 compute-0 ceph-mon[191910]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:21 compute-0 podman[263942]: 2025-10-02 19:19:21.200286873 +0000 UTC m=+0.204835916 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
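The periodic health_status=healthy events above come from podman's healthcheck timers running each container's configured 'test' command (visible in the config_data). The same check can be triggered by hand; exit status 0 means healthy:

    import subprocess

    # Re-run the built-in healthcheck for the two containers seen above.
    for name in ("openstack_network_exporter", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")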
Oct 02 19:19:21 compute-0 python3.9[264007]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:21 compute-0 sudo[263998]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:21 compute-0 sshd-session[254399]: Connection closed by 192.168.122.30 port 51410
Oct 02 19:19:21 compute-0 sshd-session[254396]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:19:21 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 02 19:19:21 compute-0 systemd[1]: session-51.scope: Consumed 56.072s CPU time.
Oct 02 19:19:21 compute-0 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Oct 02 19:19:21 compute-0 systemd-logind[793]: Removed session 51.
Oct 02 19:19:23 compute-0 ceph-mon[191910]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:25 compute-0 ceph-mon[191910]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:25 compute-0 podman[264038]: 2025-10-02 19:19:25.674049526 +0000 UTC m=+0.102816224 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct 02 19:19:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:25 compute-0 podman[264039]: 2025-10-02 19:19:25.724137728 +0000 UTC m=+0.143693723 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:19:27 compute-0 ceph-mon[191910]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:27 compute-0 podman[264079]: 2025-10-02 19:19:27.717820052 +0000 UTC m=+0.140885599 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release=1214.1726694543)
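kepler publishes its power/energy metrics on host port 8888 (the 'ports' entry in its config_data above). A quick scrape, assuming a plain-HTTP /metrics endpoint, which is the usual Prometheus exporter convention:

    import urllib.request

    # Fetch the first few Prometheus metric lines from the kepler exporter.
    with urllib.request.urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:10]:
            print(line)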
Oct 02 19:19:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:27 compute-0 sshd-session[264099]: Accepted publickey for zuul from 192.168.122.30 port 50486 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:19:27 compute-0 systemd-logind[793]: New session 52 of user zuul.
Oct 02 19:19:27 compute-0 systemd[1]: Started Session 52 of User zuul.
Oct 02 19:19:27 compute-0 sshd-session[264099]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:19:28 compute-0 sudo[264252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdbzzrbutmriffvmzcvvmqulfpcvciy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432768.1418817-22-12293115863917/AnsiballZ_file.py'
Oct 02 19:19:28 compute-0 sudo[264252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:29 compute-0 python3.9[264254]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:29 compute-0 sudo[264252]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:29 compute-0 ceph-mon[191910]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:29 compute-0 podman[157186]: time="2025-10-02T19:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:19:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:19:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
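The two GET requests logged by podman[157186] are the libpod REST API being scraped over the podman socket (the same unix:///run/podman/podman.sock that podman_exporter is configured with further down). The equivalent request by hand, using curl's unix-socket support from Python:

    import subprocess

    # Same endpoint the exporter hit: list all containers via the libpod API.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True, text=True).stdout
    print(out[:200])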
Oct 02 19:19:30 compute-0 sudo[264404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzlgurngtxpaebvwxakzrmbcgdtjhgvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432769.488801-34-197342128738920/AnsiballZ_stat.py'
Oct 02 19:19:30 compute-0 sudo[264404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:30 compute-0 python3.9[264406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:30 compute-0 sudo[264404]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:31 compute-0 ceph-mon[191910]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:31 compute-0 sudo[264527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-molfpcxvzmdilwrwgxxiorqhfpxpnaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432769.488801-34-197342128738920/AnsiballZ_copy.py'
Oct 02 19:19:31 compute-0 sudo[264527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: ERROR   19:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: ERROR   19:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: ERROR   19:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:19:31 compute-0 openstack_network_exporter[159337]: 
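The appctl errors above are openstack_network_exporter probing for daemons that do not run on a compute node: it finds no control socket for ovn-northd or for the OVS DB server, and the datapath queries fail for the same reason (likely benign here). A sketch of the same discovery the exporter attempts, just listing whatever control sockets actually exist:

    import glob

    # ovs/ovn daemons expose *.ctl unix sockets in their run directories.
    for pattern in ("/var/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")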
Oct 02 19:19:31 compute-0 python3.9[264529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432769.488801-34-197342128738920/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=66cc2c217983223fce7f84f2a8cd1b6a8771b9cc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:31 compute-0 sudo[264527]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:32 compute-0 sudo[264679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujjronnjpgefefvvqysoheecheubjbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432771.8584108-34-176390816955733/AnsiballZ_stat.py'
Oct 02 19:19:32 compute-0 sudo[264679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:32 compute-0 python3.9[264681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:32 compute-0 sudo[264679]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:33 compute-0 ceph-mon[191910]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:33 compute-0 sudo[264802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymbhlmoarjsclkgvmwtggvweirgzijga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432771.8584108-34-176390816955733/AnsiballZ_copy.py'
Oct 02 19:19:33 compute-0 sudo[264802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:33 compute-0 python3.9[264804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432771.8584108-34-176390816955733/.source.conf _original_basename=ceph.conf follow=False checksum=4530b1cbc4e0be6c5e7411ddc47fa3cba0122c76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:33 compute-0 sudo[264802]: pam_unix(sudo:session): session closed for user root
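The copy task above records the content checksum it deployed (checksum=4530b1...). Verifying the file on disk still matches is a one-liner with hashlib; both the path and the expected digest are taken from the log line:

    import hashlib

    path = "/var/lib/openstack/config/ceph/ceph.conf"
    expected = "4530b1cbc4e0be6c5e7411ddc47fa3cba0122c76"  # from the ansible copy log above
    digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
    print("match" if digest == expected else f"drift: {digest}")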
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:19:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:33 compute-0 sshd-session[264102]: Connection closed by 192.168.122.30 port 50486
Oct 02 19:19:33 compute-0 sshd-session[264099]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:19:33 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Oct 02 19:19:34 compute-0 systemd[1]: session-52.scope: Consumed 4.869s CPU time.
Oct 02 19:19:34 compute-0 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Oct 02 19:19:34 compute-0 systemd-logind[793]: Removed session 52.
Oct 02 19:19:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:35 compute-0 ceph-mon[191910]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:37 compute-0 ceph-mon[191910]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:39 compute-0 ceph-mon[191910]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:40 compute-0 sshd-session[264829]: Accepted publickey for zuul from 192.168.122.30 port 46256 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:19:40 compute-0 systemd-logind[793]: New session 53 of user zuul.
Oct 02 19:19:40 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 02 19:19:40 compute-0 sshd-session[264829]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:19:41 compute-0 ceph-mon[191910]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:41 compute-0 python3.9[264982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:19:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:42 compute-0 sudo[265136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqjsjblfqshmaphpxqtekdmyfptqade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432782.093583-34-241377135342987/AnsiballZ_file.py'
Oct 02 19:19:42 compute-0 sudo[265136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:42 compute-0 python3.9[265138]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:42 compute-0 sudo[265136]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:43 compute-0 ceph-mon[191910]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:43 compute-0 sudo[265288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxhfbbmctsnkiecqzpkwuocjxfgfabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432783.2387326-34-23528802468093/AnsiballZ_file.py'
Oct 02 19:19:43 compute-0 sudo[265288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:43 compute-0 python3.9[265290]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:19:43 compute-0 sudo[265288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:44 compute-0 python3.9[265440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:19:45 compute-0 ceph-mon[191910]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:45 compute-0 sudo[265590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzoznjyhqhycrprtsakcyauarkwypib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432785.20065-57-12267332191806/AnsiballZ_seboolean.py'
Oct 02 19:19:45 compute-0 sudo[265590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:46 compute-0 python3.9[265592]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
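ansible.posix.seboolean with persistent=True is equivalent to setsebool -P. The same change by hand (requires root and an SELinux-enabled host):

    import subprocess

    # Persistently allow virt sandboxes to use netlink sockets, as the task above did.
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)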
Oct 02 19:19:46 compute-0 podman[265593]: 2025-10-02 19:19:46.705045094 +0000 UTC m=+0.125352438 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.build-date=20250930, managed_by=edpm_ansible)
Oct 02 19:19:46 compute-0 podman[265594]: 2025-10-02 19:19:46.714936715 +0000 UTC m=+0.129799305 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:19:47 compute-0 ceph-mon[191910]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:47 compute-0 sudo[265590]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:48 compute-0 sudo[265783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brydieyjmqffeqtfgonxjirdgaowwzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432788.0631-67-135288130157619/AnsiballZ_setup.py'
Oct 02 19:19:48 compute-0 sudo[265783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:48 compute-0 python3.9[265785]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:19:49 compute-0 sudo[265783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:49 compute-0 ceph-mon[191910]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:49 compute-0 sudo[265868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxfegfozvryyskbwkgbrhhvmypeyuuzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432788.0631-67-135288130157619/AnsiballZ_dnf.py'
Oct 02 19:19:49 compute-0 sudo[265868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:50 compute-0 python3.9[265870]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
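The dnf task above resolves to a plain package install; state=present means install if missing and leave any newer installed version alone. The CLI equivalent:

    import subprocess

    # Rough equivalent of the ansible dnf task above (no-op if already installed).
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)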
Oct 02 19:19:51 compute-0 sudo[265868]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:51 compute-0 ceph-mon[191910]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:51 compute-0 podman[265896]: 2025-10-02 19:19:51.688499726 +0000 UTC m=+0.121656941 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:19:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:51 compute-0 podman[265897]: 2025-10-02 19:19:51.750659546 +0000 UTC m=+0.181140180 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:19:52 compute-0 sudo[266063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqgikmqkhecowrntoghuzplevzfegeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432791.7678719-79-173950009643839/AnsiballZ_systemd.py'
Oct 02 19:19:52 compute-0 sudo[266063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:52 compute-0 ceph-mon[191910]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:52 compute-0 python3.9[266065]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
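The systemd task asks for enabled=True plus state=started, i.e. enable the unit persistently and make sure it is running now. With systemctl that is a single call:

    import subprocess

    # enable --now = persistently enable and start in one step.
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)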
Oct 02 19:19:53 compute-0 sudo[266063]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:54 compute-0 sudo[266218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgioctsrymmdyjyytuvgapqljaaivqv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432793.3850586-87-63472419527884/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 19:19:54 compute-0 sudo[266218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:54 compute-0 python3[266220]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
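edpm_nftables_snippet with state=present drops the rule list above into /var/lib/edpm-config/firewall/ovn.yaml, where later nftables roles pick it up. A minimal sketch of that write, assuming the module simply serializes the given content (its internals are not shown in the log); note that rules 120/121 bypass conntrack for geneve/UDP 6081 via NOTRACK in the raw table:

    import yaml  # PyYAML

    # Rule list copied from the module invocation above (first and last entries shown).
    rules = [
        {"rule_name": "118 neutron vxlan networks",
         "rule": {"proto": "udp", "dport": 4789}},
        {"rule_name": "121 neutron geneve networks no conntrack",
         "rule": {"proto": "udp", "dport": 6081, "table": "raw",
                  "chain": "PREROUTING", "jump": "NOTRACK",
                  "action": "append", "state": []}},
    ]
    with open("/var/lib/edpm-config/firewall/ovn.yaml", "w") as f:
        yaml.safe_dump(rules, f, default_flow_style=False)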
Oct 02 19:19:54 compute-0 sudo[266218]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:54 compute-0 ceph-mon[191910]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:55 compute-0 sudo[266370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pssjzeyzvrknaatavutyakuswlidbacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432794.7607768-96-254080340709523/AnsiballZ_file.py'
Oct 02 19:19:55 compute-0 sudo[266370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:55 compute-0 python3.9[266372]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:55 compute-0 sudo[266370]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:56 compute-0 sudo[266542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwgaegotbtnlhwuwwkodqwsobzqkiuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432795.8457432-104-208339048152006/AnsiballZ_stat.py'
Oct 02 19:19:56 compute-0 sudo[266542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:56 compute-0 podman[266497]: 2025-10-02 19:19:56.600072332 +0000 UTC m=+0.095644114 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:19:56 compute-0 podman[266496]: 2025-10-02 19:19:56.617559644 +0000 UTC m=+0.121443116 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
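[editor's note] Both health_status=healthy lines come from podman executing the container's configured healthcheck (e.g. '/openstack/healthcheck node_exporter', bind-mounted from the host). The same check can be triggered and read back by hand; a sketch driving the podman CLI from Python:

    import json
    import subprocess

    # Re-run the configured healthcheck, then read the recorded state back.
    subprocess.run(["podman", "healthcheck", "run", "node_exporter"], check=False)
    out = subprocess.run(["podman", "inspect", "node_exporter"],
                         capture_output=True, text=True, check=True).stdout
    health = json.loads(out)[0]["State"]["Health"]
    print(health["Status"], health["FailingStreak"])   # e.g. "healthy 0"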
Oct 02 19:19:56 compute-0 python3.9[266559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:56 compute-0 ceph-mon[191910]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:56 compute-0 sudo[266542]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:57 compute-0 sudo[266638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpddfwffaxsuiqekypjlxwlzyickwzjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432795.8457432-104-208339048152006/AnsiballZ_file.py'
Oct 02 19:19:57 compute-0 sudo[266638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:57 compute-0 python3.9[266640]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:57 compute-0 sudo[266638]: pam_unix(sudo:session): session closed for user root
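[editor's note] The stat/file pairs throughout this run are the two remote halves of ansible.legacy.copy and template: stat computes a sha1 checksum so the controller can skip the transfer when content already matches, and file then only fixes up mode and ownership. The checksum step is a plain streamed sha1; a small sketch:

    import hashlib

    def file_sha1(path, bufsize=65536):
        # Same digest ansible-ansible.legacy.stat reports (checksum_algorithm=sha1).
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    print(file_sha1("/var/lib/edpm-config/firewall/edpm-nftables-base.yaml"))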
Oct 02 19:19:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:58 compute-0 sudo[266809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ershojgagyjloiugombkiasizfugniic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432797.7946796-116-184004680685014/AnsiballZ_stat.py'
Oct 02 19:19:58 compute-0 sudo[266809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:58 compute-0 podman[266764]: 2025-10-02 19:19:58.351637808 +0000 UTC m=+0.132832946 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9)
Oct 02 19:19:58 compute-0 python3.9[266812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:19:58 compute-0 sudo[266809]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:58 compute-0 ceph-mon[191910]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:59 compute-0 sudo[266888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfiwdypykfcafbrjbkismywpaaolklqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432797.7946796-116-184004680685014/AnsiballZ_file.py'
Oct 02 19:19:59 compute-0 sudo[266888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:19:59 compute-0 python3.9[266890]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mm9qhf8z recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:19:59 compute-0 sudo[266888]: pam_unix(sudo:session): session closed for user root
Oct 02 19:19:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:19:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:19:59 compute-0 podman[157186]: time="2025-10-02T19:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:19:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:19:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
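[editor's note] Those two GET lines are the libpod REST API served over the podman unix socket (this is what the podman_exporter polls). The same endpoint can be queried directly; a sketch speaking HTTP over the socket, assuming the default socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over an AF_UNIX socket (hypothetical helper)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])   # one name list per container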
Oct 02 19:20:00 compute-0 sudo[267040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzskodcvipfukdlzasmbnlzivzwtelvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432799.5476933-128-52135267628600/AnsiballZ_stat.py'
Oct 02 19:20:00 compute-0 sudo[267040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:00 compute-0 python3.9[267042]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:00 compute-0 sudo[267040]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:00 compute-0 sudo[267118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbtppctoxzmdgdhblplgsbiixhkpxad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432799.5476933-128-52135267628600/AnsiballZ_file.py'
Oct 02 19:20:00 compute-0 sudo[267118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:00 compute-0 ceph-mon[191910]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:00 compute-0 python3.9[267120]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:01 compute-0 sudo[267118]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: ERROR   19:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: ERROR   19:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: ERROR   19:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:20:01 compute-0 openstack_network_exporter[159337]: 
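[editor's note] The exporter errors above are expected on a compute node: ovn-northd and the OVS DB server publish control sockets only where those daemons run, and this host runs neither (nor a userspace datapath for the pmd-* calls). A quick check, assuming the conventional /var/run/ovn and /var/run/openvswitch socket locations:

    import glob

    # Empty lists mean the exporter has nothing to query, matching the
    # "no control socket files found" errors logged above.
    print(glob.glob("/var/run/ovn/ovn-northd.*.ctl"))
    print(glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl"))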
Oct 02 19:20:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:02 compute-0 sudo[267270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgritojqdixphfymhqpiiaeiattcjou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432801.3364954-141-127147193475662/AnsiballZ_command.py'
Oct 02 19:20:02 compute-0 sudo[267270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:02 compute-0 python3.9[267272]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:20:02 compute-0 sudo[267270]: pam_unix(sudo:session): session closed for user root
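[editor's note] nft -j list ruleset emits the whole ruleset as one JSON document, which is what the play snapshots before writing new rules. Parsing it is straightforward; a sketch listing the chain names:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    # The top-level "nftables" array wraps each object under its kind:
    # "metainfo", "table", "chain", "rule", ...
    objects = json.loads(out)["nftables"]
    print([o["chain"]["name"] for o in objects if "chain" in o])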
Oct 02 19:20:02 compute-0 ceph-mon[191910]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:03 compute-0 sudo[267423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqexrodcbpvdukozonbymgjzwrtnwmfk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432802.6280072-149-245854004488757/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:20:03 compute-0 sudo[267423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:03 compute-0 python3[267425]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
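[editor's note] edpm_nftables_from_files gathers the rule snippets the earlier tasks dropped into /var/lib/edpm-config/firewall. The module internals are not shown in the log; a rough sketch of the gathering step, assuming each YAML file holds a list of rule entries:

    import glob
    import yaml   # PyYAML

    rules = []
    for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
        with open(path) as f:
            rules.extend(yaml.safe_load(f) or [])
    print(f"{len(rules)} rule entries collected")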
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:20:03
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups', 'volumes']
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:20:03 compute-0 sudo[267423]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:20:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:04 compute-0 sudo[267575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doxqvchlabdeidqffbpddvlwyjwkaiqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432803.7934163-157-39650424054352/AnsiballZ_stat.py'
Oct 02 19:20:04 compute-0 sudo[267575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:04 compute-0 python3.9[267577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:04 compute-0 sudo[267575]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:04 compute-0 ceph-mon[191910]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:05 compute-0 sudo[267653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjechdrrvbvzcbbvijcxfhwlybleiqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432803.7934163-157-39650424054352/AnsiballZ_file.py'
Oct 02 19:20:05 compute-0 sudo[267653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:05 compute-0 python3.9[267655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:05 compute-0 sudo[267653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:06 compute-0 sudo[267805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gihfimzbywzeeobhowvqifwivqmakifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432805.5518448-169-260662104895133/AnsiballZ_stat.py'
Oct 02 19:20:06 compute-0 sudo[267805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:06 compute-0 python3.9[267807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:06 compute-0 sudo[267805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:06 compute-0 sudo[267883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvaalvbqdksaugumjxcrbkgfvelfixuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432805.5518448-169-260662104895133/AnsiballZ_file.py'
Oct 02 19:20:06 compute-0 sudo[267883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:06 compute-0 ceph-mon[191910]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:07 compute-0 python3.9[267885]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:07 compute-0 sudo[267883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:07 compute-0 sudo[268035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynrdsgliyminkcnkteljqyhlpntrtxwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432807.3658283-181-1230322211947/AnsiballZ_stat.py'
Oct 02 19:20:07 compute-0 sudo[268035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:08 compute-0 python3.9[268037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:08 compute-0 sudo[268035]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:08 compute-0 sudo[268113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obypnnersdjibxxoviajvathxcndzrga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432807.3658283-181-1230322211947/AnsiballZ_file.py'
Oct 02 19:20:08 compute-0 sudo[268113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:08 compute-0 python3.9[268115]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:08 compute-0 sudo[268113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:08 compute-0 ceph-mon[191910]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:09 compute-0 sudo[268265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzekchfygdliamzsjdexzkwhuwpdbjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432809.1339238-193-271065982484374/AnsiballZ_stat.py'
Oct 02 19:20:09 compute-0 sudo[268265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:10 compute-0 python3.9[268267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:10 compute-0 sudo[268265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:10 compute-0 sudo[268343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwvaydsavkupixeyjodsuuibtdfdlwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432809.1339238-193-271065982484374/AnsiballZ_file.py'
Oct 02 19:20:10 compute-0 sudo[268343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:10 compute-0 python3.9[268345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:10 compute-0 sudo[268343]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:10 compute-0 ceph-mon[191910]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:11 compute-0 sudo[268495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juyeatzszvtkflpzlrkppnczfsemevfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432811.036618-205-136474480545615/AnsiballZ_stat.py'
Oct 02 19:20:11 compute-0 sudo[268495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:11 compute-0 python3.9[268497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:11 compute-0 sudo[268495]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:20:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
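[editor's note] The pg_autoscaler lines are reproducible arithmetic: pg target = usage ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two subject to the pool's floor. The implied factor of 300 would be mon_target_pg_per_osd=100 on 3 OSDs, which is an assumption for this cluster:

    usage, bias = 7.185749983720779e-06, 1.0
    print(usage * bias * 100 * 3)   # ~0.0021557, the logged '.mgr' pg target

    usage, bias = 5.087256625643029e-07, 4.0
    print(usage * bias * 100 * 3)   # ~0.0006105, the logged cephfs.cephfs.meta target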
Oct 02 19:20:12 compute-0 sudo[268573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfwnlimhrfbhozfxsapbshnrxermnfbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432811.036618-205-136474480545615/AnsiballZ_file.py'
Oct 02 19:20:12 compute-0 sudo[268573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:12 compute-0 python3.9[268575]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:12 compute-0 sudo[268573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:12 compute-0 ceph-mon[191910]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:13 compute-0 sudo[268725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbdosgkosjepmdpxdgdvwyjqljymyyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432813.0578296-218-236989126454322/AnsiballZ_command.py'
Oct 02 19:20:13 compute-0 sudo[268725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:13 compute-0 python3.9[268727]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:20:13 compute-0 sudo[268725]: pam_unix(sudo:session): session closed for user root
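[editor's note] The command above is the play's syntax gate: the five generated snippets are concatenated in load order and parsed by nft in check-only mode, so nothing is applied unless the whole set parses together. The same check, sketched without the shell pipeline:

    import subprocess

    parts = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    blob = "".join(open(p).read() for p in parts)
    # -c: check/dry-run only; -f -: read the ruleset from stdin.
    subprocess.run(["nft", "-c", "-f", "-"], input=blob, text=True, check=True)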
Oct 02 19:20:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:14 compute-0 sudo[268880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdvlhhkktksqxmmdzguglhwgjrbisld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432814.069259-226-216751312151160/AnsiballZ_blockinfile.py'
Oct 02 19:20:14 compute-0 sudo[268880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:14 compute-0 ceph-mon[191910]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:14 compute-0 python3.9[268882]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
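[editor's note] Given the logged marker parameters (marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN, marker_end=END) and block content, the managed block blockinfile maintains in /etc/sysconfig/nftables.conf, validated with nft -c -f %s before being moved into place, should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

This is what makes nftables.service load the EDPM ruleset again at boot, independent of the one-shot nft -f apply that follows.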
Oct 02 19:20:15 compute-0 sudo[268880]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:15 compute-0 sudo[269032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-depgtvzdhbgewvfegensanhgvdzuhrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432815.3527193-235-200656086291932/AnsiballZ_command.py'
Oct 02 19:20:15 compute-0 sudo[269032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:16 compute-0 python3.9[269034]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:20:16 compute-0 sudo[269032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:16 compute-0 ceph-mon[191910]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:17 compute-0 sudo[269214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudpiyczdplnyamawcwzfijapzviwgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432816.4696226-243-78147281662820/AnsiballZ_stat.py'
Oct 02 19:20:17 compute-0 podman[269160]: 2025-10-02 19:20:17.053972854 +0000 UTC m=+0.099069325 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:20:17 compute-0 sudo[269214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:17 compute-0 podman[269159]: 2025-10-02 19:20:17.061482272 +0000 UTC m=+0.116256919 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
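[editor's note] The RocksDB stats dump that follows is the OSD's periodic (600 s) self-report, and its derived figures are internally consistent; two of them check out directly:

    writes, syncs = 5376, 747
    print(round(writes / syncs, 2))      # 7.2  -> "7.20 writes per sync"
    interval_mb, secs = 18.43, 600.0
    print(round(interval_mb / secs, 2))  # 0.03 -> "0.03 MB/s" interval ingest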
Oct 02 19:20:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:20:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5376 writes, 747 syncs, 7.20 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5376 writes, 23K keys, 5376 commit groups, 1.0 writes per commit group, ingest: 18.43 MB, 0.03 MB/s
                                            Interval WAL: 5376 writes, 747 syncs, 7.20 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:20:17 compute-0 python3.9[269226]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:20:17 compute-0 sudo[269214]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:18 compute-0 sudo[269376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrolfakxgosdohswrjmcqilvybzxunm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432817.622334-252-237709637880079/AnsiballZ_file.py'
Oct 02 19:20:18 compute-0 sudo[269376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:18 compute-0 python3.9[269378]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:18 compute-0 sudo[269376]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:18 compute-0 ceph-mon[191910]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:19 compute-0 sudo[269432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:19 compute-0 sudo[269432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:19 compute-0 sudo[269432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:19 compute-0 sudo[269481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:20:19 compute-0 sudo[269481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:19 compute-0 sudo[269481]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:19 compute-0 sudo[269528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:19 compute-0 sudo[269528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:19 compute-0 sudo[269528]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:19 compute-0 sudo[269578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 19:20:19 compute-0 sudo[269578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:19 compute-0 python3.9[269627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:20:20 compute-0 sudo[269578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:20:20 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:20:20 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:20 compute-0 sudo[269673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:20 compute-0 sudo[269673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:20 compute-0 sudo[269673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:20 compute-0 sudo[269699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:20:20 compute-0 sudo[269699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:20 compute-0 sudo[269699]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:20 compute-0 sudo[269724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:20 compute-0 sudo[269724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:20 compute-0 sudo[269724]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:20 compute-0 sudo[269749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:20:20 compute-0 sudo[269749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:21 compute-0 ceph-mon[191910]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:21 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:21 compute-0 sudo[269749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:21 compute-0 sudo[269930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dknbfypadxoczowqgwebezuxswggnnvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432820.6475356-292-59676283920729/AnsiballZ_command.py'
Oct 02 19:20:21 compute-0 sudo[269930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:21 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev cbef23d3-48d1-4b60-b6ab-c9a4441dcfa9 does not exist
Oct 02 19:20:21 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d7ba52bd-3396-41e1-be2a-12b1cfa5eb72 does not exist
Oct 02 19:20:21 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8bf853d4-2b48-4d71-a423-22684c9ec2df does not exist
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:20:21 compute-0 python3.9[269932]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:20:21 compute-0 sudo[269933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:21 compute-0 sudo[269933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:21 compute-0 ovs-vsctl[269956]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 19:20:21 compute-0 sudo[269933]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:21 compute-0 sudo[269930]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:21 compute-0 sudo[269959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:20:21 compute-0 sudo[269959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:21 compute-0 sudo[269959]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:21 compute-0 sudo[270007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:21 compute-0 sudo[270007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:21 compute-0 sudo[270007]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:21 compute-0 sudo[270045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:20:21 compute-0 sudo[270045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:21 compute-0 podman[270097]: 2025-10-02 19:20:21.891993004 +0000 UTC m=+0.096347514 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:20:21 compute-0 podman[270104]: 2025-10-02 19:20:21.968825661 +0000 UTC m=+0.168973340 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:20:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:20:22 compute-0 sudo[270258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdndmfkiobgwrdsvvitxxjpglcritbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432821.7074716-301-230813942947505/AnsiballZ_command.py'
Oct 02 19:20:22 compute-0 sudo[270258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:22 compute-0 podman[270266]: 2025-10-02 19:20:22.26518589 +0000 UTC m=+0.059648025 container create 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:20:22 compute-0 systemd[1]: Started libpod-conmon-60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26.scope.
Oct 02 19:20:22 compute-0 podman[270266]: 2025-10-02 19:20:22.241617868 +0000 UTC m=+0.036080033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:22 compute-0 python3.9[270265]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:20:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:22 compute-0 sudo[270258]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:22 compute-0 podman[270266]: 2025-10-02 19:20:22.411027918 +0000 UTC m=+0.205490103 container init 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:20:22 compute-0 podman[270266]: 2025-10-02 19:20:22.421449963 +0000 UTC m=+0.215912108 container start 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:20:22 compute-0 podman[270266]: 2025-10-02 19:20:22.425760797 +0000 UTC m=+0.220223032 container attach 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:20:22 compute-0 gifted_antonelli[270282]: 167 167
Oct 02 19:20:22 compute-0 systemd[1]: libpod-60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26.scope: Deactivated successfully.
Oct 02 19:20:22 compute-0 podman[270290]: 2025-10-02 19:20:22.515159486 +0000 UTC m=+0.057700313 container died 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4b6e926fd6d52236ecaeba87f022080fc7ace28641e4092ad9c0db5915a79f-merged.mount: Deactivated successfully.
Oct 02 19:20:22 compute-0 podman[270290]: 2025-10-02 19:20:22.602635633 +0000 UTC m=+0.145176430 container remove 60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:20:22 compute-0 systemd[1]: libpod-conmon-60bce2164e0396b8e70c3afae71b4d61130581b11deede77f680c5e079d9cd26.scope: Deactivated successfully.
Oct 02 19:20:22 compute-0 podman[270369]: 2025-10-02 19:20:22.936680007 +0000 UTC m=+0.102435084 container create ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:20:22 compute-0 podman[270369]: 2025-10-02 19:20:22.894290198 +0000 UTC m=+0.060045335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:23 compute-0 systemd[1]: Started libpod-conmon-ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd.scope.
Oct 02 19:20:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:23 compute-0 podman[270369]: 2025-10-02 19:20:23.125764506 +0000 UTC m=+0.291519633 container init ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:20:23 compute-0 podman[270369]: 2025-10-02 19:20:23.148745953 +0000 UTC m=+0.314501030 container start ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:20:23 compute-0 podman[270369]: 2025-10-02 19:20:23.155822939 +0000 UTC m=+0.321578016 container attach ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:20:23 compute-0 ceph-mon[191910]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:23 compute-0 python3.9[270480]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:20:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:20:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 6633 writes, 27K keys, 6633 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 6633 writes, 1139 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 6633 writes, 27K keys, 6633 commit groups, 1.0 writes per commit group, ingest: 19.33 MB, 0.03 MB/s
                                            Interval WAL: 6633 writes, 1139 syncs, 5.82 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:20:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:24 compute-0 sudo[270655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqcrtvlfiluzzzkvivevvmsjtlvxataz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432823.898952-319-168705444958574/AnsiballZ_file.py'
Oct 02 19:20:24 compute-0 pedantic_sinoussi[270424]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:20:24 compute-0 pedantic_sinoussi[270424]: --> relative data size: 1.0
Oct 02 19:20:24 compute-0 pedantic_sinoussi[270424]: --> All data devices are unavailable
Oct 02 19:20:24 compute-0 sudo[270655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.437 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.439 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 systemd[1]: libpod-ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd.scope: Deactivated successfully.
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 podman[270369]: 2025-10-02 19:20:24.451136018 +0000 UTC m=+1.616891095 container died ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:20:24 compute-0 systemd[1]: libpod-ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd.scope: Consumed 1.225s CPU time.
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.459 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.463 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.463 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.464 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.464 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.465 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.465 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.465 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.466 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.470 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.472 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.473 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.474 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.474 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.474 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.474 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:20:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:20:24.474 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
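The DEBUG lines above trace one complete polling cycle: each pollster is registered against a shared ThreadPoolExecutor, one local_instances discovery result is cached for the whole cycle, and every pollster is then skipped because discovery found no instances on this compute node. A minimal Python sketch of that pattern, using stand-in names (discover_local_instances, run_pollster) rather than ceilometer's actual internals:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Stand-in for AgentManager.discover('local_instances'); on this
        # idle compute node the log shows it returning an empty list.
        return []

    def run_pollster(name, resources):
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        # ... sample each discovered resource and publish a measurement ...

    # One discovery result is shared by every pollster in the cycle,
    # mirroring the "discovery cache [{'local_instances': []}]" above.
    discovery_cache = {'local_instances': discover_local_instances()}
    pollsters = ['cpu', 'memory.usage', 'disk.device.read.bytes']
    with ThreadPoolExecutor() as pool:
        futures = {p: pool.submit(run_pollster, p,
                                  discovery_cache['local_instances'])
                   for p in pollsters}
        for name, fut in futures.items():
            fut.result()
            print(f"Finished processing pollster [{name}].")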
Oct 02 19:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-264a6eaa17dcdf8fae85b08e1b213d0b74cc91967630558ae1f1a8bb51a34e85-merged.mount: Deactivated successfully.
Oct 02 19:20:24 compute-0 podman[270369]: 2025-10-02 19:20:24.559090606 +0000 UTC m=+1.724845653 container remove ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:20:24 compute-0 systemd[1]: libpod-conmon-ee93aca24664d13833e9fc1daac1ffa1894587a238e94f855d5f8705f17631dd.scope: Deactivated successfully.
Oct 02 19:20:24 compute-0 sudo[270045]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:24 compute-0 python3.9[270658]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
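The ansible-ansible.builtin.file record above ensures /var/local/libexec exists as a directory with the SELinux type container_file_t applied recursively. A rough Python equivalent of that effect, assuming chcon is available on the host; the real module additionally handles check mode, idempotence reporting, and the many unset parameters logged above:

    import os
    import subprocess

    path = '/var/local/libexec'
    os.makedirs(path, exist_ok=True)           # state=directory
    subprocess.run(['chcon', '-R', '-t', 'container_file_t', path],
                   check=True)                 # setype=container_file_t, recurse=True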
Oct 02 19:20:24 compute-0 sudo[270671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:24 compute-0 sudo[270671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:24 compute-0 sudo[270671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:24 compute-0 sudo[270655]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:24 compute-0 sudo[270696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:20:24 compute-0 sudo[270696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:24 compute-0 sudo[270696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:24 compute-0 sudo[270744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:24 compute-0 sudo[270744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:24 compute-0 sudo[270744]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:25 compute-0 sudo[270770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:20:25 compute-0 sudo[270770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:25 compute-0 ceph-mon[191910]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:25 compute-0 sudo[270967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznunaxxhxbjkonssrakabjcsymhtbgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432825.0088735-327-162134639310426/AnsiballZ_stat.py'
Oct 02 19:20:25 compute-0 sudo[270967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.600238998 +0000 UTC m=+0.063004313 container create cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:20:25 compute-0 systemd[1]: Started libpod-conmon-cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec.scope.
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.574308324 +0000 UTC m=+0.037073719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.728464341 +0000 UTC m=+0.191229686 container init cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.740710394 +0000 UTC m=+0.203475709 container start cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.745888441 +0000 UTC m=+0.208653756 container attach cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:20:25 compute-0 laughing_brown[270978]: 167 167
Oct 02 19:20:25 compute-0 systemd[1]: libpod-cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec.scope: Deactivated successfully.
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.754854228 +0000 UTC m=+0.217619563 container died cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:20:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1856bb1ce0dbd6caa675009b16bc89f06cda8f410ecb9055d5fb0a482383619f-merged.mount: Deactivated successfully.
Oct 02 19:20:25 compute-0 podman[270954]: 2025-10-02 19:20:25.831614823 +0000 UTC m=+0.294380158 container remove cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brown, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:20:25 compute-0 python3.9[270973]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:25 compute-0 systemd[1]: libpod-conmon-cc824ba2ef22c5d9447961496ea8efb8ab79a3821ddb1542bf6bacf131a208ec.scope: Deactivated successfully.
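cephadm runs each probe in a short-lived container, which is why the laughing_brown lifecycle above (create, init, start, attach, died, remove) spans barely 0.2 s; the sudo[270770] record at 19:20:25 shows the inventory command it wraps the same way, ceph-volume lvm list --format json for fsid 6019f664-a1c2-5955-8391-692cb79a59f9. A minimal sketch of issuing that same call from Python, assuming a cephadm binary on PATH instead of the versioned /var/lib/ceph/<fsid>/cephadm.<digest> script the log shows:

    import json
    import subprocess

    FSID = '6019f664-a1c2-5955-8391-692cb79a59f9'
    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # Same arguments as the logged command, minus the python3 +
    # /var/lib/ceph/<fsid>/cephadm.<digest> bootstrap wrapper.
    proc = subprocess.run(
        ['cephadm', '--image', IMAGE, '--timeout', '895',
         'ceph-volume', '--fsid', FSID, '--',
         'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True)
    report = json.loads(proc.stdout)   # keyed by OSD id, as printed below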
Oct 02 19:20:25 compute-0 sudo[270967]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:26 compute-0 podman[271009]: 2025-10-02 19:20:26.057913644 +0000 UTC m=+0.078235525 container create 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:20:26 compute-0 podman[271009]: 2025-10-02 19:20:26.026799233 +0000 UTC m=+0.047121124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:26 compute-0 systemd[1]: Started libpod-conmon-6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a.scope.
Oct 02 19:20:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0098ebe169139aad0572f3bcd4957d269d92c7f903227af15dd43da7e5d71806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0098ebe169139aad0572f3bcd4957d269d92c7f903227af15dd43da7e5d71806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0098ebe169139aad0572f3bcd4957d269d92c7f903227af15dd43da7e5d71806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0098ebe169139aad0572f3bcd4957d269d92c7f903227af15dd43da7e5d71806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:26 compute-0 podman[271009]: 2025-10-02 19:20:26.247526317 +0000 UTC m=+0.267848228 container init 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:20:26 compute-0 podman[271009]: 2025-10-02 19:20:26.256927055 +0000 UTC m=+0.277248936 container start 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:20:26 compute-0 podman[271009]: 2025-10-02 19:20:26.265257304 +0000 UTC m=+0.285579235 container attach 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:20:26 compute-0 sudo[271100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhqnmaaczzukourktpnoywiwhpmdapn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432825.0088735-327-162134639310426/AnsiballZ_file.py'
Oct 02 19:20:26 compute-0 sudo[271100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:26 compute-0 python3.9[271102]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:20:26 compute-0 sudo[271100]: pam_unix(sudo:session): session closed for user root
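The quizzical_lumiere lines that follow are the JSON report from that ceph-volume lvm list call, emitted one JSON line per journal record. A hedged sketch of reassembling it from raw journal text, assuming the 'host name[pid]: payload' prefix format seen here (journal_lines and extract_lvm_report are illustrative names, not part of any tool):

    import json
    import re

    PREFIX = re.compile(r'^.*quizzical_lumiere\[\d+\]: ')

    def extract_lvm_report(journal_lines):
        # Strip the journal prefix from each container line and
        # concatenate the remainders back into one JSON document.
        payload = ''.join(PREFIX.sub('', line)
                          for line in journal_lines
                          if 'quizzical_lumiere[' in line)
        report = json.loads(payload)
        # Map OSD id -> logical volume path, one LV per OSD here.
        return {osd_id: lvs[0]['lv_path'] for osd_id, lvs in report.items()}

    # For the records below this yields, e.g.:
    # {'0': '/dev/ceph_vg0/ceph_lv0', '1': '/dev/ceph_vg1/ceph_lv1',
    #  '2': '/dev/ceph_vg2/ceph_lv2'}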
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]: {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     "0": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "devices": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "/dev/loop3"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             ],
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_name": "ceph_lv0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_size": "21470642176",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "name": "ceph_lv0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "tags": {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_name": "ceph",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.crush_device_class": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.encrypted": "0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_id": "0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.vdo": "0"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             },
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "vg_name": "ceph_vg0"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         }
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     ],
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     "1": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "devices": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "/dev/loop4"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             ],
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_name": "ceph_lv1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_size": "21470642176",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "name": "ceph_lv1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "tags": {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_name": "ceph",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.crush_device_class": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.encrypted": "0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_id": "1",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.vdo": "0"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             },
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "vg_name": "ceph_vg1"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         }
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     ],
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     "2": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "devices": [
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "/dev/loop5"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             ],
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_name": "ceph_lv2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_size": "21470642176",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "name": "ceph_lv2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "tags": {
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.cluster_name": "ceph",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.crush_device_class": "",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.encrypted": "0",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osd_id": "2",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:                 "ceph.vdo": "0"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             },
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "type": "block",
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:             "vg_name": "ceph_vg2"
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:         }
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]:     ]
Oct 02 19:20:27 compute-0 quizzical_lumiere[271057]: }
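The JSON above is `ceph-volume lvm list --format json` output: a map keyed by OSD id, each value a list of logical volumes carrying the `ceph.*` tags. A minimal Python sketch for reducing it to an osd-to-device summary, assuming the JSON has been captured to a file (the filename is hypothetical):

    import json

    # Hypothetical capture of the `ceph-volume lvm list --format json`
    # report shown above.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    # Top level is keyed by OSD id; each value lists the LVs backing that OSD.
    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")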
Oct 02 19:20:27 compute-0 systemd[1]: libpod-6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a.scope: Deactivated successfully.
Oct 02 19:20:27 compute-0 podman[271009]: 2025-10-02 19:20:27.154065437 +0000 UTC m=+1.174387348 container died 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0098ebe169139aad0572f3bcd4957d269d92c7f903227af15dd43da7e5d71806-merged.mount: Deactivated successfully.
Oct 02 19:20:27 compute-0 podman[271009]: 2025-10-02 19:20:27.259590441 +0000 UTC m=+1.279912272 container remove 6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:20:27 compute-0 ceph-mon[191910]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:27 compute-0 systemd[1]: libpod-conmon-6d7d8e22149378e42decd9943aa06719b3d0cbb5bdf7147d7096fd55aeec0e8a.scope: Deactivated successfully.
Oct 02 19:20:27 compute-0 sudo[270770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:27 compute-0 podman[271207]: 2025-10-02 19:20:27.30656172 +0000 UTC m=+0.102899936 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:20:27 compute-0 podman[271214]: 2025-10-02 19:20:27.320083067 +0000 UTC m=+0.114181184 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:20:27 compute-0 sudo[271288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:27 compute-0 sudo[271333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzineoiumyfwmfikwvkqkkgnmtyaeexx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432826.9003344-327-114559517302534/AnsiballZ_stat.py'
Oct 02 19:20:27 compute-0 sudo[271288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:27 compute-0 sudo[271333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:27 compute-0 sudo[271288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:27 compute-0 sudo[271338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:20:27 compute-0 sudo[271338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:27 compute-0 sudo[271338]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:27 compute-0 python3.9[271337]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:27 compute-0 sudo[271363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:27 compute-0 sudo[271363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:27 compute-0 sudo[271363]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:27 compute-0 sudo[271333]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:27 compute-0 sudo[271390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
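The sudo line above shows how cephadm wraps `ceph-volume` in a one-shot container: the copied-in cephadm binary is invoked with the pinned image digest, a timeout, and the cluster fsid, and everything after `--` is passed through to `ceph-volume`. A hedged sketch of driving the same invocation from Python and parsing its JSON output (the command is taken verbatim from the log; this is an illustration, not cephadm's own code):

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Mirrors the logged command: cephadm runs ceph-volume in a container
    # and prints the JSON report on stdout.
    cmd = ["/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--",
           "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    print(f"{len(json.loads(out))} raw OSD device(s) reported")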
Oct 02 19:20:27 compute-0 sudo[271390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:28 compute-0 sudo[271510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exhyjxccpauculislprnlcrepfcspgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432826.9003344-327-114559517302534/AnsiballZ_file.py'
Oct 02 19:20:28 compute-0 sudo[271510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:28 compute-0 python3.9[271517]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.222909889 +0000 UTC m=+0.092576674 container create fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:20:28 compute-0 sudo[271510]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.184537617 +0000 UTC m=+0.054204472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:28 compute-0 systemd[1]: Started libpod-conmon-fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54.scope.
Oct 02 19:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.355031705 +0000 UTC m=+0.224698550 container init fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.371732016 +0000 UTC m=+0.241398811 container start fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:20:28 compute-0 hungry_swirles[271546]: 167 167
Oct 02 19:20:28 compute-0 systemd[1]: libpod-fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54.scope: Deactivated successfully.
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.37983016 +0000 UTC m=+0.249496975 container attach fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.381650408 +0000 UTC m=+0.251317233 container died fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a381b4fadf57df83941586e5edcd90efda83755064a65706b2f26aa0d19ac8ac-merged.mount: Deactivated successfully.
Oct 02 19:20:28 compute-0 podman[271528]: 2025-10-02 19:20:28.466906447 +0000 UTC m=+0.336573232 container remove fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:20:28 compute-0 systemd[1]: libpod-conmon-fc4d6622c8069d71b9ba52f21e6faeb409174ff9790a7a1daca63b9aa8b9ae54.scope: Deactivated successfully.
Oct 02 19:20:28 compute-0 podman[271573]: 2025-10-02 19:20:28.528894463 +0000 UTC m=+0.118720414 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., container_name=kepler)
Oct 02 19:20:28 compute-0 podman[271663]: 2025-10-02 19:20:28.712920909 +0000 UTC m=+0.077482386 container create 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:20:28 compute-0 podman[271663]: 2025-10-02 19:20:28.68228293 +0000 UTC m=+0.046844497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:20:28 compute-0 systemd[1]: Started libpod-conmon-9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a.scope.
Oct 02 19:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1175180d9084881e513cadcb7d996fcb094f56e685ae2700aa007f3edc06e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1175180d9084881e513cadcb7d996fcb094f56e685ae2700aa007f3edc06e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1175180d9084881e513cadcb7d996fcb094f56e685ae2700aa007f3edc06e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1175180d9084881e513cadcb7d996fcb094f56e685ae2700aa007f3edc06e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
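The kernel notes above mean these xfs mounts use 32-bit inode timestamps, which saturate at 0x7fffffff. A quick check of what that limit corresponds to:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, the limit the
    # kernel warns about for these xfs remounts.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00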
Oct 02 19:20:28 compute-0 podman[271663]: 2025-10-02 19:20:28.842166049 +0000 UTC m=+0.206727546 container init 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:20:28 compute-0 podman[271663]: 2025-10-02 19:20:28.87706257 +0000 UTC m=+0.241624037 container start 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:20:28 compute-0 podman[271663]: 2025-10-02 19:20:28.88161128 +0000 UTC m=+0.246172797 container attach 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:20:29 compute-0 sudo[271758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrohbdskzztztxqzpbwubeksxqofcocc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432828.4894595-350-251453737773340/AnsiballZ_file.py'
Oct 02 19:20:29 compute-0 sudo[271758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:29 compute-0 python3.9[271760]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:29 compute-0 sudo[271758]: pam_unix(sudo:session): session closed for user root
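Note `mode=420` in the file-module invocation above: 420 decimal is 0o644, so this is most likely an unquoted `mode: 0644` in the playbook, which YAML parses as an octal integer and Ansible then logs in decimal. A one-line check of the equivalence:

    # 420 (decimal) == 0o644; Ansible logs the numeric mode it received.
    assert 420 == 0o644
    print(oct(420))  # -> 0o644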
Oct 02 19:20:29 compute-0 ceph-mon[191910]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:29 compute-0 podman[157186]: time="2025-10-02T19:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:20:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34384 "" "Go-http-client/1.1"
Oct 02 19:20:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7252 "" "Go-http-client/1.1"
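The two GET lines above are the libpod REST API being polled over podman's local socket (the API service logs each request). A minimal sketch of issuing the same containers query from Python; the socket path is the conventional root location and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket (assumed path: /run/podman/podman.sock)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c.get("Names"), c.get("State"))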
Oct 02 19:20:29 compute-0 kind_cannon[271703]: {
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_id": 1,
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "type": "bluestore"
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     },
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_id": 2,
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "type": "bluestore"
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     },
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_id": 0,
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:20:29 compute-0 kind_cannon[271703]:         "type": "bluestore"
Oct 02 19:20:29 compute-0 kind_cannon[271703]:     }
Oct 02 19:20:29 compute-0 kind_cannon[271703]: }
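The block above is `ceph-volume raw list --format json`: each bluestore device is keyed by its osd_uuid, which should match the `ceph.osd_fsid` tag reported by `lvm list` earlier in this log. A sketch of that consistency check, assuming both JSON reports were captured to files (filenames hypothetical):

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)   # keyed by osd_id (string)
    with open("raw_list.json") as f:
        raw = json.load(f)   # keyed by osd_uuid

    # Map osd_fsid -> osd_id from the LVM tags, then verify raw list agrees.
    lvm_by_fsid = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
                   for osd_id, lvs in lvm.items() for lv in lvs}
    for osd_uuid, info in raw.items():
        assert lvm_by_fsid.get(osd_uuid) == info["osd_id"], osd_uuid
        print(f"osd.{info['osd_id']} ok: {info['device']} ({info['type']})")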
Oct 02 19:20:29 compute-0 sudo[271936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrdstrvokmivtpzyxfsklhqtlemrets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432829.499396-358-155669273113073/AnsiballZ_stat.py'
Oct 02 19:20:29 compute-0 sudo[271936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:29 compute-0 systemd[1]: libpod-9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a.scope: Deactivated successfully.
Oct 02 19:20:29 compute-0 systemd[1]: libpod-9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a.scope: Consumed 1.049s CPU time.
Oct 02 19:20:29 compute-0 podman[271663]: 2025-10-02 19:20:29.92996342 +0000 UTC m=+1.294524897 container died 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1175180d9084881e513cadcb7d996fcb094f56e685ae2700aa007f3edc06e6c-merged.mount: Deactivated successfully.
Oct 02 19:20:30 compute-0 podman[271663]: 2025-10-02 19:20:30.018053845 +0000 UTC m=+1.382615322 container remove 9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cannon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:20:30 compute-0 systemd[1]: libpod-conmon-9a2f7cdda20263a052dbbdec51d94a4cca7c4ffdd8f6b81309b64e702874bd1a.scope: Deactivated successfully.
Oct 02 19:20:30 compute-0 sudo[271390]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:20:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:20:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:30 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2b86bb02-21e8-4e68-999d-fb9645452e8b does not exist
Oct 02 19:20:30 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0412aadf-7b1d-4189-90de-2c5e97ef2afe does not exist
Oct 02 19:20:30 compute-0 python3.9[271940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:30 compute-0 sudo[271936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:30 compute-0 sudo[271952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:20:30 compute-0 sudo[271952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:30 compute-0 sudo[271952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:30 compute-0 sudo[271984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:20:30 compute-0 sudo[271984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:20:30 compute-0 sudo[271984]: pam_unix(sudo:session): session closed for user root
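What follows is a single ceph-osd journal record: RocksDB dumps its statistics on a fixed period (600 s here, per the interval figures), and journald renders the multi-line message with continuation indentation. A minimal sketch for pulling the headline write figures out of such a dump, assuming the block has been captured as plain text:

    import re

    # Hypothetical capture of the "** DB Stats **" block from the dump below.
    stats_text = """\
    Uptime(secs): 600.1 total, 600.0 interval
    Cumulative writes: 5524 writes, 23K keys, 5524 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
    Cumulative WAL: 5524 writes, 788 syncs, 7.01 writes per sync, written: 0.02 GB, 0.03 MB/s
    """

    m = re.search(r"Cumulative writes: (\d+) writes, (\S+) keys"
                  r".*ingest: ([\d.]+) GB, ([\d.]+) MB/s", stats_text)
    if m:
        writes, keys, gb, rate = m.groups()
        print(f"{writes} writes ({keys} keys), {gb} GB ingested at {rate} MB/s")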
Oct 02 19:20:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:20:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5524 writes, 23K keys, 5524 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5524 writes, 788 syncs, 7.01 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5524 writes, 23K keys, 5524 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s
                                            Interval WAL: 5524 writes, 788 syncs, 7.01 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:20:30 compute-0 sudo[272077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwqxzcwdkiymvwqbtupbzefrcvfrhkag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432829.499396-358-155669273113073/AnsiballZ_file.py'
Oct 02 19:20:30 compute-0 sudo[272077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:30 compute-0 python3.9[272079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:30 compute-0 sudo[272077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:31 compute-0 ceph-mon[191910]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: ERROR   19:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: ERROR   19:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: ERROR   19:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:20:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:20:31 compute-0 sudo[272229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayujwesruxwaofwexgzwcziwldpmdrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432831.0967607-370-277206832435918/AnsiballZ_stat.py'
Oct 02 19:20:31 compute-0 sudo[272229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 19:20:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:31 compute-0 python3.9[272231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:31 compute-0 sudo[272229]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:32 compute-0 sudo[272307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugifshzwfkcjqhgcfjprmakxgnmwafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432831.0967607-370-277206832435918/AnsiballZ_file.py'
Oct 02 19:20:32 compute-0 sudo[272307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:32 compute-0 python3.9[272309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:32 compute-0 sudo[272307]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:33 compute-0 ceph-mon[191910]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:20:33 compute-0 sudo[272459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjthcnruepxmaipnmiumqqefvuhbzeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432832.869421-382-161844157621123/AnsiballZ_systemd.py'
Oct 02 19:20:33 compute-0 sudo[272459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:34 compute-0 python3.9[272461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:20:34 compute-0 systemd[1]: Reloading.
Oct 02 19:20:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:34 compute-0 systemd-rc-local-generator[272492]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:20:34 compute-0 systemd-sysv-generator[272495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:20:34 compute-0 sudo[272459]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:35 compute-0 ceph-mon[191910]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:35 compute-0 sudo[272649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwqkmcyohrjvkdrvpmbxfjnesbeegaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432834.9892163-390-281034240122772/AnsiballZ_stat.py'
Oct 02 19:20:35 compute-0 sudo[272649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:35 compute-0 python3.9[272651]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:35 compute-0 sudo[272649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:36 compute-0 sudo[272727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmrmmdbczfcqxpmdcbvpkjwacbmiapt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432834.9892163-390-281034240122772/AnsiballZ_file.py'
Oct 02 19:20:36 compute-0 sudo[272727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:36 compute-0 python3.9[272729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:36 compute-0 sudo[272727]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:37 compute-0 ceph-mon[191910]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:37 compute-0 sudo[272879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rytghyqmfrylvtvwlwxkomwkvjgilmqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432836.6986344-402-275157450065692/AnsiballZ_stat.py'
Oct 02 19:20:37 compute-0 sudo[272879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:37 compute-0 python3.9[272881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:37 compute-0 sudo[272879]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:37 compute-0 sudo[272957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfenfdhqcyeplmniygvaowolmhsybtom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432836.6986344-402-275157450065692/AnsiballZ_file.py'
Oct 02 19:20:37 compute-0 sudo[272957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:38 compute-0 python3.9[272959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:38 compute-0 sudo[272957]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:39 compute-0 ceph-mon[191910]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:39 compute-0 sudo[273109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muyqebyzdplxxhnqklqrhqcvdirlgbab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432838.4417274-414-43476643815779/AnsiballZ_systemd.py'
Oct 02 19:20:39 compute-0 sudo[273109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:40 compute-0 python3.9[273111]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:20:40 compute-0 systemd[1]: Reloading.
Oct 02 19:20:40 compute-0 systemd-sysv-generator[273142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:20:40 compute-0 systemd-rc-local-generator[273136]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:20:40 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:20:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:20:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:20:40 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:20:40 compute-0 sudo[273109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:41 compute-0 ceph-mon[191910]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:41 compute-0 sudo[273303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouiaazxxecgpkuuenmrlsqgsfrzdvoxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432841.2675047-424-146295988943933/AnsiballZ_file.py'
Oct 02 19:20:41 compute-0 sudo[273303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:41 compute-0 python3.9[273305]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:20:42 compute-0 sudo[273303]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:43 compute-0 sudo[273455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwciqvtncpvpousgzyxdprmenijudatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432842.7280517-432-163344062001695/AnsiballZ_stat.py'
Oct 02 19:20:43 compute-0 ceph-mon[191910]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:43 compute-0 sudo[273455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:43 compute-0 python3.9[273457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:43 compute-0 sudo[273455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:43 compute-0 sudo[273533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhjvilfaxjrrjuyujbrjdsiyavpvtix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432842.7280517-432-163344062001695/AnsiballZ_file.py'
Oct 02 19:20:43 compute-0 sudo[273533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:44 compute-0 python3.9[273535]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:20:44 compute-0 sudo[273533]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:45 compute-0 sudo[273685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reaqcqukkmampygiirwayvrotqjlxqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432844.5346234-446-218057121831247/AnsiballZ_file.py'
Oct 02 19:20:45 compute-0 sudo[273685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:45 compute-0 ceph-mon[191910]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:45 compute-0 python3.9[273687]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:20:45 compute-0 sudo[273685]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:46 compute-0 sudo[273837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvaiedfeicduubzzriheczfzklyhxotc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432845.6205359-454-174020451865364/AnsiballZ_stat.py'
Oct 02 19:20:46 compute-0 sudo[273837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:46 compute-0 python3.9[273839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:20:46 compute-0 sudo[273837]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:46 compute-0 sudo[273915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlgpnophfokhitaqutloigqtuxdqommx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432845.6205359-454-174020451865364/AnsiballZ_file.py'
Oct 02 19:20:46 compute-0 sudo[273915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:47 compute-0 python3.9[273917]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.hzmxe5mo recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:47 compute-0 sudo[273915]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:47 compute-0 ceph-mon[191910]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:47 compute-0 podman[273998]: 2025-10-02 19:20:47.680877032 +0000 UTC m=+0.091004521 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:20:47 compute-0 podman[273994]: 2025-10-02 19:20:47.698581329 +0000 UTC m=+0.122453621 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:20:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:47 compute-0 sudo[274107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyaiqjwalmjmczzqkozqfhkhdwlfdge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432847.3877194-466-88402936597229/AnsiballZ_file.py'
Oct 02 19:20:47 compute-0 sudo[274107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:48 compute-0 python3.9[274109]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:48 compute-0 sudo[274107]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:48 compute-0 sudo[274259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fapxcowrwraasftmlbvzuhwbtcladpyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432848.5060015-474-12175472207195/AnsiballZ_stat.py'
Oct 02 19:20:48 compute-0 sudo[274259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:49 compute-0 sudo[274259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:49 compute-0 ceph-mon[191910]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:49 compute-0 sudo[274338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljrvsmnkgcgqpgcrbpnblfilgnciemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432848.5060015-474-12175472207195/AnsiballZ_file.py'
Oct 02 19:20:49 compute-0 sudo[274338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:49 compute-0 sudo[274338]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:50 compute-0 sudo[274490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltcacpykdiwqohmphlzeplhutbotqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432850.2201118-488-208779955260061/AnsiballZ_container_config_data.py'
Oct 02 19:20:50 compute-0 sudo[274490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:50 compute-0 python3.9[274492]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 19:20:51 compute-0 sudo[274490]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:51 compute-0 ceph-mon[191910]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:52 compute-0 sudo[274642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdspftcebmydbsixfgrnylttyjvjzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432851.286762-497-9468783314860/AnsiballZ_container_config_hash.py'
Oct 02 19:20:52 compute-0 sudo[274642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:52 compute-0 podman[274644]: 2025-10-02 19:20:52.175882767 +0000 UTC m=+0.131323676 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41)
Oct 02 19:20:52 compute-0 podman[274645]: 2025-10-02 19:20:52.190883523 +0000 UTC m=+0.132029495 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:20:52 compute-0 python3.9[274649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:20:52 compute-0 sudo[274642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:53 compute-0 ceph-mon[191910]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:53 compute-0 sudo[274835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iotdjtxdxmslhnedtmymnmwlpumeiqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432852.5759623-506-51247841431887/AnsiballZ_podman_container_info.py'
Oct 02 19:20:53 compute-0 sudo[274835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:53 compute-0 python3.9[274837]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:20:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:53 compute-0 sudo[274835]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:55 compute-0 ceph-mon[191910]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:55 compute-0 sudo[275012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxyvmxsiwsqrdgkxvsrbvfbkiegobxto ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432854.793477-519-169990041260059/AnsiballZ_edpm_container_manage.py'
Oct 02 19:20:55 compute-0 sudo[275012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:55 compute-0 python3[275014]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:20:56 compute-0 python3[275014]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f",
                                                     "Digest": "sha256:129e24971fee94cc60b5f440605f1512fb932a884e38e64122f38f11f942e3b9",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:129e24971fee94cc60b5f440605f1512fb932a884e38e64122f38f11f942e3b9"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-10-02T06:41:04.763416897Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251001",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 345627081,
                                                     "VirtualSize": 345627081,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/8f7765a0696e83d3cfed8c8e9a70a4344fabd76a523317a36aee407406588981/diff:/var/lib/containers/storage/overlay/661e15e0dfc445ecdff08d434d5cb11b0b9a54f42dd69506bb77f4c8cd8adb25/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/4757c1c767cdaf82eab12df0ed4287df67b8e29aa1208326d810f1ccc3ae859d/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/4757c1c767cdaf82eab12df0ed4287df67b8e29aa1208326d810f1ccc3ae859d/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",
                                                               "sha256:c7c80f27a004d53fb75b6d30a961f2416ea855138d9e550000fa093a1e5e384d",
                                                               "sha256:2581cff67e17c51811bac9607dcd596a85156992ccb768e403301479a37d51fb",
                                                               "sha256:b0f2967db57e02040537a064ba6efcf4aa5c9caf2b7b1633852dac7a10163ec7"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251001",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-10-01T03:48:01.636308726Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-01T03:48:01.636415187Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251001\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-01T03:48:09.404099909Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757191184Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757211565Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757229405Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757245856Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757279147Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757304688Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:10.233672718Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:47.227633956Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:50.639117027Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:51.032972349Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:51.419814064Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.143664292Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.537669617Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.939739979Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:53.354487155Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:53.748982134Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.090941713Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.48363415Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.858704521Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.151167986Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.41361541Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.720650713Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:56.087416219Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:56.402825868Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:59.881750329Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:00.217806143Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:00.573407121Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:02.069855698Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929362102Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929402883Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929411243Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929417844Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:04.966176997Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:13:12.768929965Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:a0eac564d779a7eaac46c9816bff261a",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:14:11.001599194Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:14:12.804918795Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:40:27.907340157Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:a0eac564d779a7eaac46c9816bff261a",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:41:04.753811469Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:41:05.979540066Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 19:20:56 compute-0 sudo[275012]: pam_unix(sudo:session): session closed for user root
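The JSON document logged by ansible-edpm_container_manage above is the image-inspect record for the ovn_controller image: Id, Digest, RepoTags/RepoDigests, the image Config (entrypoint `dumb-init`, command `kolla_start`), the overlay GraphDriver layers, and the full build History. The same record can be fetched directly; a minimal sketch assuming the `podman` CLI is on PATH, using the image reference from the log:

```python
import json
import subprocess

IMAGE = "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"

# `podman image inspect` emits a JSON list matching the structure logged above
out = subprocess.run(
    ["podman", "image", "inspect", IMAGE],
    capture_output=True, text=True, check=True,
).stdout
info = json.loads(out)[0]

print(info["Id"][:12], info["Digest"])
print("tcib_build_tag:", info["Labels"].get("tcib_build_tag"))
print("layers:", len(info["RootFS"]["Layers"]))
```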
Oct 02 19:20:57 compute-0 sudo[275220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqmchuwacyrynxijtbbwuupwezoeqmiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432856.579073-527-217777011345632/AnsiballZ_stat.py'
Oct 02 19:20:57 compute-0 sudo[275220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:57 compute-0 ceph-mon[191910]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:57 compute-0 python3.9[275222]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:20:57 compute-0 sudo[275220]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:57 compute-0 podman[275250]: 2025-10-02 19:20:57.700705092 +0000 UTC m=+0.117427240 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:20:57 compute-0 podman[275249]: 2025-10-02 19:20:57.703796897 +0000 UTC m=+0.118701795 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:20:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:58 compute-0 sudo[275413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlzzbdgvgevnyawibsdtinupnpdsqcok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432857.7221053-536-72277053697331/AnsiballZ_file.py'
Oct 02 19:20:58 compute-0 sudo[275413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:58 compute-0 python3.9[275415]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:20:58 compute-0 sudo[275413]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:58 compute-0 sudo[275506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtqzyuoofwfhjlqxljzqemifmrirhmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432857.7221053-536-72277053697331/AnsiballZ_stat.py'
Oct 02 19:20:58 compute-0 sudo[275506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:20:58 compute-0 podman[275463]: 2025-10-02 19:20:58.907041457 +0000 UTC m=+0.104204810 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:20:59 compute-0 python3.9[275511]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:20:59 compute-0 sudo[275506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:20:59 compute-0 ceph-mon[191910]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:20:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:20:59 compute-0 podman[157186]: time="2025-10-02T19:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:20:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:20:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
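The two GET lines above are a client (Go-http-client/1.1) hitting the libpod REST API that this podman process serves over a unix socket, first listing all containers and then sampling their stats. A minimal sketch of the same container-list call from Python; the socket path /run/podman/podman.sock is the conventional root location and an assumption here, while the endpoint and query string are copied from the log:

```python
import http.client
import json
import socket

SOCKET_PATH = "/run/podman/podman.sock"  # assumed default root socket location

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(SOCKET_PATH)

conn = UnixHTTPConnection("localhost")  # host is ignored; connect() is overridden
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for c in json.loads(body):
    print(c["Names"][0], c["State"])
```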
Oct 02 19:20:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:00 compute-0 sudo[275660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzpwpydxbqfyiddmxyjmpsmqkhygymr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432859.2422245-536-275770159714590/AnsiballZ_copy.py'
Oct 02 19:21:00 compute-0 sudo[275660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:00 compute-0 python3.9[275662]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432859.2422245-536-275770159714590/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:00 compute-0 sudo[275660]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:01 compute-0 sudo[275736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgarvuftfrdbghuujvdqrnwimxffprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432859.2422245-536-275770159714590/AnsiballZ_systemd.py'
Oct 02 19:21:01 compute-0 sudo[275736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:01 compute-0 ceph-mon[191910]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:01 compute-0 openstack_network_exporter[159337]: ERROR   19:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:21:01 compute-0 openstack_network_exporter[159337]: ERROR   19:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:01 compute-0 openstack_network_exporter[159337]: ERROR   19:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:01 compute-0 openstack_network_exporter[159337]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:21:01 compute-0 openstack_network_exporter[159337]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:21:01 compute-0 python3.9[275738]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:21:01 compute-0 sudo[275736]: pam_unix(sudo:session): session closed for user root
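The tasks above install the unit file (ansible-copy to /etc/systemd/system/edpm_ovn_controller.service, mode 0644, root:root) and then start and enable it (ansible-systemd with state=started, enabled=True). A minimal sketch of the equivalent sequence outside Ansible; the staging path is hypothetical, and the explicit daemon-reload stands in for the reload handling the module performs around unit-file changes:

```python
import shutil
import subprocess

UNIT = "edpm_ovn_controller.service"
SRC = "/home/zuul/edpm_ovn_controller.service"  # hypothetical staging path
DST = f"/etc/systemd/system/{UNIT}"

shutil.copy(SRC, DST)                                         # the ansible-copy task
subprocess.run(["systemctl", "daemon-reload"], check=True)    # pick up the new unit file
subprocess.run(["systemctl", "enable", "--now", UNIT], check=True)  # enabled=True, state=started
```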
Oct 02 19:21:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:02 compute-0 sudo[275890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmdugqctuxvbsssuskqftocdkagheja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432861.8936663-560-93556666692528/AnsiballZ_command.py'
Oct 02 19:21:02 compute-0 sudo[275890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:02 compute-0 python3.9[275892]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:21:02 compute-0 ovs-vsctl[275893]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 19:21:02 compute-0 sudo[275890]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:03 compute-0 sudo[276043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuawlzzwxouzlwhjpbwxcihvltwspqnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432862.931243-568-113210769407208/AnsiballZ_command.py'
Oct 02 19:21:03 compute-0 sudo[276043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:03 compute-0 ceph-mon[191910]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:21:03
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:21:03 compute-0 python3.9[276045]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:21:03 compute-0 ovs-vsctl[276047]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 19:21:03 compute-0 sudo[276043]: pam_unix(sudo:session): session closed for user root
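The db_ctl_base error above is expected: the play reads external_ids:ovn-cms-options before clearing it, and a bare `ovs-vsctl get` fails when the key was never set. The follow-up `remove` (the next task below) succeeds regardless, since removing an absent key is a no-op. A tolerant read can use the `--if-exists` flag; a minimal sketch wrapping the same command from the log:

```python
import subprocess
from typing import Optional

def get_ovs_external_id(key: str) -> Optional[str]:
    """Read external_ids:<key> from the Open_vSwitch table; return None when
    the key is absent instead of erroring like the bare `get` above."""
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         f"external_ids:{key}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.strip('"') or None

print(get_ovs_external_id("ovn-cms-options"))  # None on this host, per the log
```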
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:21:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:04 compute-0 sudo[276198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enfqslkrbkhxlfpqjfaskbllybhvwctm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432863.9815996-582-191134582878155/AnsiballZ_command.py'
Oct 02 19:21:04 compute-0 sudo[276198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:04 compute-0 python3.9[276200]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:21:04 compute-0 ovs-vsctl[276201]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 19:21:04 compute-0 sudo[276198]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:05 compute-0 sshd-session[264832]: Connection closed by 192.168.122.30 port 46256
Oct 02 19:21:05 compute-0 sshd-session[264829]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:21:05 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 02 19:21:05 compute-0 systemd[1]: session-53.scope: Consumed 1min 8.591s CPU time.
Oct 02 19:21:05 compute-0 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Oct 02 19:21:05 compute-0 systemd-logind[793]: Removed session 53.
Oct 02 19:21:05 compute-0 ceph-mon[191910]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:07 compute-0 ceph-mon[191910]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:09 compute-0 ceph-mon[191910]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:11 compute-0 sshd-session[276226]: Accepted publickey for zuul from 192.168.122.30 port 32884 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:21:11 compute-0 systemd-logind[793]: New session 54 of user zuul.
Oct 02 19:21:11 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 02 19:21:11 compute-0 sshd-session[276226]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:21:11 compute-0 ceph-mon[191910]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:21:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
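The pg_autoscaler targets above are reproducible from the logged inputs: target = used-fraction x bias x 300, then rounded up to a power of two and clamped to the pool's floor (1 for .mgr, 16 for the cephfs metadata pool, 32 elsewhere, matching the "quantized to" values). The factor 300 is consistent with the default mon_target_pg_per_osd=100 across this cluster's three OSDs, though that mapping is inferred from the numbers, not stated in the log. A worked check:

```python
# Worked check of the pg_autoscaler lines above; the factor 300 and the
# per-pool floors are inferred from the logged numbers, not from Ceph source.
def pg_target(used_fraction, bias, floor, factor=300):
    raw = used_fraction * bias * factor
    pg = 1
    while pg < raw:          # round up to a power of two
        pg *= 2
    return max(pg, floor)

print(pg_target(7.185749983720779e-06, 1.0, floor=1))    # .mgr               -> 1
print(pg_target(5.087256625643029e-07, 4.0, floor=16))   # cephfs.cephfs.meta -> 16
print(pg_target(2.1620840658982875e-06, 1.0, floor=32))  # default.rgw.log    -> 32
```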
Oct 02 19:21:12 compute-0 python3.9[276379]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:21:13 compute-0 ceph-mon[191910]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:13 compute-0 sudo[276533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woefroqnwwxynkrmuzelygekxbmnxnpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432873.3135722-34-82050803033929/AnsiballZ_file.py'
Oct 02 19:21:14 compute-0 sudo[276533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:14 compute-0 python3.9[276535]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:14 compute-0 sudo[276533]: pam_unix(sudo:session): session closed for user root
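The sudo lines above show Ansible's become pattern: a random BECOME-SUCCESS marker is echoed before the module payload so the controller can discard sudo/PAM noise and parse only what follows the marker. A rough sketch; the module path below is a placeholder, not a real file:

```python
import secrets
import subprocess

# Hypothetical module path; the real ones live under ~/.ansible/tmp/ as above.
module = "/home/zuul/.ansible/tmp/example/AnsiballZ_file.py"
marker = "BECOME-SUCCESS-" + secrets.token_hex(16)

# Mirrors: sudo /bin/sh -c 'echo BECOME-SUCCESS-... ; /usr/bin/python3.9 <module>'
out = subprocess.run(
    ["sudo", "/bin/sh", "-c", f"echo {marker} ; /usr/bin/python3.9 {module}"],
    capture_output=True, text=True,
)
payload = out.stdout.split(marker, 1)[1]  # module JSON output only
```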
Oct 02 19:21:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:14 compute-0 sudo[276685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jicuvizobwufujflnwgrkgjphvzfldvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432874.4967194-34-86682529265389/AnsiballZ_file.py'
Oct 02 19:21:14 compute-0 sudo[276685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:15 compute-0 python3.9[276687]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:15 compute-0 sudo[276685]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:15 compute-0 ceph-mon[191910]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:15 compute-0 sudo[276837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpoqxikrkrgjrqaybcntjlmutjholdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432875.4245362-34-129047291557873/AnsiballZ_file.py'
Oct 02 19:21:15 compute-0 sudo[276837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:16 compute-0 python3.9[276839]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:16 compute-0 sudo[276837]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:16 compute-0 sudo[276989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oltgaijvhzyrdncjmeczshwghthorkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432876.3107963-34-273645079711303/AnsiballZ_file.py'
Oct 02 19:21:16 compute-0 sudo[276989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:17 compute-0 python3.9[276991]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:17 compute-0 sudo[276989]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:17 compute-0 ceph-mon[191910]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:17 compute-0 sudo[277173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzqjvrakhlvorcpejopwisvacwozwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432877.3252478-34-28495945324603/AnsiballZ_file.py'
Oct 02 19:21:17 compute-0 sudo[277173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:17 compute-0 podman[277115]: 2025-10-02 19:21:17.868023058 +0000 UTC m=+0.105926426 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct 02 19:21:17 compute-0 podman[277116]: 2025-10-02 19:21:17.878087373 +0000 UTC m=+0.108244810 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
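The two health_status events above embed each container's config_data, including a healthcheck test command and the directory mounted at /openstack. As a rough illustration only (the real containers are created by edpm_ansible with more options: user, security_opt, environment, and so on), the healthcheck stanza maps onto podman flags roughly like this:

```python
import subprocess

# Simplified reconstruction from the ceilometer_agent_compute config_data above.
subprocess.run(
    [
        "podman", "run", "-d", "--name", "ceilometer_agent_compute",
        "--net", "host", "--restart", "always",
        "--health-cmd", "/openstack/healthcheck compute",
        "--volume",
        "/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z",
        "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested",
    ],
    check=True,
)
```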
Oct 02 19:21:18 compute-0 python3.9[277185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:18 compute-0 sudo[277173]: pam_unix(sudo:session): session closed for user root
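The repeated ansible.builtin.file tasks above all do the same thing: ensure a directory exists, owned by zuul, with the SELinux type container_file_t so containers may use it. A minimal sketch of one such task, assuming chcon(1) for the SELinux step (the module itself goes through the libselinux Python bindings instead):

```python
import grp
import os
import pwd
import subprocess

def ensure_dir(path: str, owner: str, group: str, mode: int, setype: str) -> None:
    """Rough equivalent of one ansible.builtin.file task from the log above."""
    os.makedirs(path, mode=mode, exist_ok=True)
    os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
    os.chmod(path, mode)  # makedirs' mode is masked by umask, so set it again
    subprocess.run(["chcon", "-t", setype, path], check=True)

ensure_dir("/var/lib/neutron/ovn-metadata-proxy", "zuul", "zuul", 0o755,
           "container_file_t")
```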
Oct 02 19:21:19 compute-0 python3.9[277335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:21:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:19 compute-0 ceph-mon[191910]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:20 compute-0 sudo[277486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owhhgofuycolihdioholxjnnfvnhnoem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432879.358103-78-164238278323697/AnsiballZ_seboolean.py'
Oct 02 19:21:20 compute-0 sudo[277486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:20 compute-0 python3.9[277488]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 19:21:20 compute-0 sudo[277486]: pam_unix(sudo:session): session closed for user root
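The seboolean task above, with persistent=True, is equivalent to setsebool -P, which sets the boolean at runtime and writes it into the policy store. A one-line sketch:

```python
import subprocess

# Equivalent of ansible.posix.seboolean name=virt_sandbox_use_netlink
# persistent=True state=True, assuming policycoreutils is installed.
subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)
```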
Oct 02 19:21:21 compute-0 ceph-mon[191910]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:22 compute-0 python3.9[277638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:22 compute-0 podman[277709]: 2025-10-02 19:21:22.668576793 +0000 UTC m=+0.098906185 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:21:22 compute-0 podman[277710]: 2025-10-02 19:21:22.721836394 +0000 UTC m=+0.134246728 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:21:23 compute-0 python3.9[277803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432881.2011187-86-265379068201941/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:23 compute-0 ceph-mon[191910]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:24 compute-0 python3.9[277953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:24 compute-0 python3.9[278074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432883.4156194-101-137104523248398/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
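The stat/copy pairs above (for ovn_metadata_haproxy_wrapper and haproxy-kill) are Ansible's idempotent copy: checksum the destination first, transfer only when the sha1 differs from the rendered template. A compact sketch of that pattern:

```python
import hashlib
import os
import shutil
from typing import Optional

def sha1_of(path: str) -> Optional[str]:
    """sha1 hex digest of a file, or None when it does not exist."""
    if not os.path.exists(path):
        return None
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_if_changed(src: str, dest: str, mode: int = 0o755) -> bool:
    """stat-then-copy, as in the two task pairs above."""
    if sha1_of(dest) == sha1_of(src):
        return False  # converged; Ansible would report "ok"
    shutil.copy2(src, dest)
    os.chmod(dest, mode)
    return True  # Ansible would report "changed"
```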
Oct 02 19:21:25 compute-0 ceph-mon[191910]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:25 compute-0 sudo[278224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrxdtxfmxfizjuttrzijdiwararzspp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432885.264523-118-130099996633213/AnsiballZ_setup.py'
Oct 02 19:21:25 compute-0 sudo[278224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:26 compute-0 python3.9[278226]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:21:26 compute-0 sudo[278224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:27 compute-0 sudo[278308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgslhlsmirfgxlurhvlrtjmftskrrlbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432885.264523-118-130099996633213/AnsiballZ_dnf.py'
Oct 02 19:21:27 compute-0 sudo[278308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:27 compute-0 python3.9[278310]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:21:27 compute-0 ceph-mon[191910]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:28 compute-0 sudo[278308]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:28 compute-0 podman[278312]: 2025-10-02 19:21:28.685439254 +0000 UTC m=+0.110667246 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:21:28 compute-0 podman[278313]: 2025-10-02 19:21:28.72124448 +0000 UTC m=+0.137957320 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:21:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:29 compute-0 ceph-mon[191910]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:29 compute-0 podman[278458]: 2025-10-02 19:21:29.656036857 +0000 UTC m=+0.090238969 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, io.openshift.expose-services=, release=1214.1726694543, version=9.4, architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler)
Oct 02 19:21:29 compute-0 sudo[278521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdlsqatrjhvtkurzslswbgefvlxdltwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432888.903234-130-115049548814435/AnsiballZ_systemd.py'
Oct 02 19:21:29 compute-0 podman[157186]: time="2025-10-02T19:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:21:29 compute-0 sudo[278521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:21:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6839 "" "Go-http-client/1.1"
Oct 02 19:21:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:30 compute-0 python3.9[278523]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:21:30 compute-0 sudo[278521]: pam_unix(sudo:session): session closed for user root
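The dnf task at 19:21:27 and the systemd task just above amount to the usual install-then-enable pattern; enabled=True with state=started is what systemctl enable --now does in one call. Hedged shell equivalents:

```python
import subprocess

# ansible.legacy.dnf name=['openvswitch'] state=present
subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)

# ansible.builtin.systemd name=openvswitch.service enabled=True state=started
subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)
```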
Oct 02 19:21:30 compute-0 sudo[278552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:30 compute-0 sudo[278552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:30 compute-0 sudo[278552]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:30 compute-0 sudo[278607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:21:30 compute-0 sudo[278607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:30 compute-0 sudo[278607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:30 compute-0 sudo[278654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:30 compute-0 sudo[278654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:30 compute-0 sudo[278654]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:30 compute-0 sudo[278701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:21:30 compute-0 sudo[278701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:31 compute-0 python3.9[278776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:31 compute-0 openstack_network_exporter[159337]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd

Oct 02 19:21:31 compute-0 openstack_network_exporter[159337]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:31 compute-0 openstack_network_exporter[159337]: ERROR   19:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:21:31 compute-0 openstack_network_exporter[159337]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:21:31 compute-0 openstack_network_exporter[159337]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:21:31 compute-0 sudo[278701]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0513eb75-842d-4797-9618-71032e68cb12 does not exist
Oct 02 19:21:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1c40cc35-4255-418f-a865-90d4016ab3f2 does not exist
Oct 02 19:21:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3b2cb102-dd07-4ce6-9793-71b3c6952817 does not exist
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:21:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:21:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
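Every handle_command line above shows the monitor receiving a JSON mon_command with a "prefix" field, mirrored into the audit channel as a dispatch entry. The same interface is reachable from Python through the rados binding; a sketch, assuming /etc/ceph/ceph.conf and an admin keyring are present:

```python
import json

import rados  # python3-rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Same JSON shape the monitor logs above as mon_command({...}):
cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
print(ret, json.loads(outbuf or b"{}"))
cluster.shutdown()
```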
Oct 02 19:21:31 compute-0 sudo[278900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:31 compute-0 sudo[278900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:31 compute-0 sudo[278900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:31 compute-0 sudo[278952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:21:31 compute-0 sudo[278952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:31 compute-0 sudo[278952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:31 compute-0 python3.9[278953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432890.4572825-138-75274602796294/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:31 compute-0 sudo[278978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:31 compute-0 sudo[278978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:31 compute-0 sudo[278978]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:31 compute-0 sudo[279003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:21:31 compute-0 sudo[279003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
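The COMMAND above is cephadm's wrapped ceph-volume: the bundled copy under /var/lib/ceph/<fsid>/ runs ceph-volume in a one-shot container, feeding config and keyring as JSON on stdin (--config-json -); lvm batch then consumes the three pre-created LVs directly (--no-auto) and skips unit creation (--no-systemd) because cephadm manages the systemd units itself. Reassembled as a sketch, with a placeholder for the stdin JSON:

```python
import subprocess

FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
config_json = b"{}"  # placeholder; cephadm sends the real conf/keyring here

subprocess.run(
    ["cephadm",  # the log runs the bundled copy under /var/lib/ceph/<fsid>/
     "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
     "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--config-json", "-",
     "--", "lvm", "batch", "--no-auto",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
     "--yes", "--no-systemd"],
    input=config_json,
    check=True,
)
```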
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.471823919 +0000 UTC m=+0.075388005 container create 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.439414696 +0000 UTC m=+0.042978772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:32 compute-0 systemd[1]: Started libpod-conmon-5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133.scope.
Oct 02 19:21:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.60802341 +0000 UTC m=+0.211587486 container init 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.62309925 +0000 UTC m=+0.226663306 container start 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.62711563 +0000 UTC m=+0.230679716 container attach 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:32 compute-0 quirky_proskuriakova[279206]: 167 167
Oct 02 19:21:32 compute-0 systemd[1]: libpod-5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133.scope: Deactivated successfully.
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.633227996 +0000 UTC m=+0.236792052 container died 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d34a755aa4b7aa728e6a3b4b0833d5bd1180cd918714dfe4782d1c54f8e3b214-merged.mount: Deactivated successfully.
Oct 02 19:21:32 compute-0 podman[279166]: 2025-10-02 19:21:32.705489195 +0000 UTC m=+0.309053251 container remove 5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:21:32 compute-0 systemd[1]: libpod-conmon-5e4c49a941abbf89c44f3947071e5909fc45fd48d7ca10940669b27f367f0133.scope: Deactivated successfully.
Oct 02 19:21:32 compute-0 python3.9[279238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:32 compute-0 podman[279255]: 2025-10-02 19:21:32.96115909 +0000 UTC m=+0.094846445 container create 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 19:21:33 compute-0 systemd[1]: Started libpod-conmon-11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d.scope.
Oct 02 19:21:33 compute-0 podman[279255]: 2025-10-02 19:21:32.923107024 +0000 UTC m=+0.056794469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:33 compute-0 podman[279255]: 2025-10-02 19:21:33.073322876 +0000 UTC m=+0.207010221 container init 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:21:33 compute-0 podman[279255]: 2025-10-02 19:21:33.088910191 +0000 UTC m=+0.222597536 container start 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:21:33 compute-0 podman[279255]: 2025-10-02 19:21:33.094439271 +0000 UTC m=+0.228126646 container attach 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:21:33 compute-0 ceph-mon[191910]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:21:33 compute-0 python3.9[279396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432892.1105092-138-75008590985684/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:34 compute-0 happy_panini[279291]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:21:34 compute-0 happy_panini[279291]: --> relative data size: 1.0
Oct 02 19:21:34 compute-0 happy_panini[279291]: --> All data devices are unavailable
Oct 02 19:21:34 compute-0 systemd[1]: libpod-11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d.scope: Deactivated successfully.
Oct 02 19:21:34 compute-0 systemd[1]: libpod-11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d.scope: Consumed 1.119s CPU time.
Oct 02 19:21:34 compute-0 podman[279445]: 2025-10-02 19:21:34.323973368 +0000 UTC m=+0.041406140 container died 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:21:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2292f98288a6a914a97a13ae1230255c72d0ec74a02bd9296ad0e41eee63c636-merged.mount: Deactivated successfully.
Oct 02 19:21:34 compute-0 podman[279445]: 2025-10-02 19:21:34.585715378 +0000 UTC m=+0.303148150 container remove 11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:21:34 compute-0 systemd[1]: libpod-conmon-11ee9b5f9534b28ef97bc34f7611a452baed6bacc586d76a8bbeee24d9c7601d.scope: Deactivated successfully.
Oct 02 19:21:34 compute-0 sudo[279003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:34 compute-0 sudo[279502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:34 compute-0 sudo[279502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:34 compute-0 sudo[279502]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:34 compute-0 sudo[279550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:21:34 compute-0 sudo[279550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:34 compute-0 sudo[279550]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:34 compute-0 sudo[279588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:34 compute-0 sudo[279588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:34 compute-0 sudo[279588]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:35 compute-0 sudo[279640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:21:35 compute-0 sudo[279640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:35 compute-0 python3.9[279678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:35 compute-0 ceph-mon[191910]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.677744349 +0000 UTC m=+0.104657212 container create 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.646866848 +0000 UTC m=+0.073779731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:35 compute-0 systemd[1]: Started libpod-conmon-422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640.scope.
Oct 02 19:21:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.816304353 +0000 UTC m=+0.243217206 container init 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.833702297 +0000 UTC m=+0.260615140 container start 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.838900269 +0000 UTC m=+0.265813092 container attach 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 19:21:35 compute-0 beautiful_driscoll[279821]: 167 167
Oct 02 19:21:35 compute-0 systemd[1]: libpod-422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640.scope: Deactivated successfully.
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.847493133 +0000 UTC m=+0.274405996 container died 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b0a6727872e6ea9baee4dae268639c894ec1fd5191b98ef1867c271d6aea19-merged.mount: Deactivated successfully.
Oct 02 19:21:35 compute-0 podman[279770]: 2025-10-02 19:21:35.922799375 +0000 UTC m=+0.349712208 container remove 422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:21:35 compute-0 systemd[1]: libpod-conmon-422aa3d109fe4235958c8468fb4c4f05ccf96dc1b1fcdd3d348b203c3e48b640.scope: Deactivated successfully.
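The sequence above is cephadm's one-shot container pattern: podman creates a container from the digest-pinned ceph image, attaches, captures a single line of output ("167 167", the UID/GID pair of the ceph user inside the image), and the container dies and is removed within a few hundred milliseconds. A minimal sketch of that probe, assuming the image digest from the log and assuming `stat -c '%u %g' /var/lib/ceph` as the probed command (the actual entrypoint cephadm used is not visible in these entries):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_ceph_uid_gid(image=IMAGE):
        """Run a throwaway container to read the ceph UID/GID baked into the image.

        --rm removes the container on exit, matching the create/start/died/remove
        lifecycle visible in the podman journal entries above.
        """
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    # Expected result for this image, per the container output above: (167, 167)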
Oct 02 19:21:36 compute-0 python3.9[279874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432894.594453-182-689169480772/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:36 compute-0 podman[279882]: 2025-10-02 19:21:36.202031123 +0000 UTC m=+0.095531074 container create bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:21:36 compute-0 podman[279882]: 2025-10-02 19:21:36.16815188 +0000 UTC m=+0.061651901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:36 compute-0 systemd[1]: Started libpod-conmon-bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2.scope.
Oct 02 19:21:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab84fbcbacd0c4781ff24f8ab32168c96d732363a3fa1fa9eebb9b4dccf96cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab84fbcbacd0c4781ff24f8ab32168c96d732363a3fa1fa9eebb9b4dccf96cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab84fbcbacd0c4781ff24f8ab32168c96d732363a3fa1fa9eebb9b4dccf96cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab84fbcbacd0c4781ff24f8ab32168c96d732363a3fa1fa9eebb9b4dccf96cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:36 compute-0 podman[279882]: 2025-10-02 19:21:36.354876917 +0000 UTC m=+0.248376938 container init bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:36 compute-0 podman[279882]: 2025-10-02 19:21:36.375992252 +0000 UTC m=+0.269492223 container start bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:21:36 compute-0 podman[279882]: 2025-10-02 19:21:36.38252981 +0000 UTC m=+0.276029791 container attach bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:21:37 compute-0 python3.9[280052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:37 compute-0 hungry_shirley[279922]: {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     "0": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "devices": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "/dev/loop3"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             ],
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_name": "ceph_lv0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_size": "21470642176",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "name": "ceph_lv0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "tags": {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_name": "ceph",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.crush_device_class": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.encrypted": "0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_id": "0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.vdo": "0"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             },
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "vg_name": "ceph_vg0"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         }
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     ],
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     "1": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "devices": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "/dev/loop4"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             ],
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_name": "ceph_lv1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_size": "21470642176",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "name": "ceph_lv1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "tags": {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_name": "ceph",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.crush_device_class": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.encrypted": "0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_id": "1",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.vdo": "0"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             },
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "vg_name": "ceph_vg1"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         }
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     ],
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     "2": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "devices": [
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "/dev/loop5"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             ],
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_name": "ceph_lv2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_size": "21470642176",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "name": "ceph_lv2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "tags": {
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.cluster_name": "ceph",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.crush_device_class": "",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.encrypted": "0",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osd_id": "2",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:                 "ceph.vdo": "0"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             },
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "type": "block",
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:             "vg_name": "ceph_vg2"
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:         }
Oct 02 19:21:37 compute-0 hungry_shirley[279922]:     ]
Oct 02 19:21:37 compute-0 hungry_shirley[279922]: }
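The JSON emitted by `ceph-volume lvm list --format json` above is keyed by OSD id; each value is a list of logical volumes carrying `lv_path`, the backing `devices`, and a `tags` map with the cluster fsid and OSD fsid. A minimal sketch of consuming that payload (function and variable names here are illustrative, not cephadm's):

    import json

    def summarize_lvm_list(payload):
        """Reduce ceph-volume lvm list JSON to a per-OSD summary."""
        osds = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                tags = lv["tags"]
                osds[int(osd_id)] = {
                    "lv_path": lv["lv_path"],        # e.g. /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],        # e.g. ["/dev/loop3"]
                    "osd_fsid": tags["ceph.osd_fsid"],
                    "cluster_fsid": tags["ceph.cluster_fsid"],
                    "encrypted": tags["ceph.encrypted"] == "1",
                }
        return osds

    # For the payload above this yields OSDs 0..2 on ceph_vg0..ceph_vg2,
    # all unencrypted, all in cluster 6019f664-a1c2-5955-8391-692cb79a59f9.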
Oct 02 19:21:37 compute-0 systemd[1]: libpod-bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2.scope: Deactivated successfully.
Oct 02 19:21:37 compute-0 podman[279882]: 2025-10-02 19:21:37.265967648 +0000 UTC m=+1.159467599 container died bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab84fbcbacd0c4781ff24f8ab32168c96d732363a3fa1fa9eebb9b4dccf96cd-merged.mount: Deactivated successfully.
Oct 02 19:21:37 compute-0 podman[279882]: 2025-10-02 19:21:37.351216731 +0000 UTC m=+1.244716672 container remove bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:21:37 compute-0 systemd[1]: libpod-conmon-bb1471a52d93e279f7700150f99d7623b394c745606aa84d7e371f21b43114f2.scope: Deactivated successfully.
Oct 02 19:21:37 compute-0 sudo[279640]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:37 compute-0 sudo[280114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:37 compute-0 sudo[280114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:37 compute-0 sudo[280114]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:37 compute-0 sudo[280162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:21:37 compute-0 sudo[280162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:37 compute-0 sudo[280162]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:37 compute-0 ceph-mon[191910]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:37 compute-0 sudo[280208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:37 compute-0 sudo[280208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:37 compute-0 sudo[280208]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:37 compute-0 sudo[280262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:21:37 compute-0 sudo[280262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:37 compute-0 python3.9[280266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432896.4134495-182-219788556591894/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.280270791 +0000 UTC m=+0.075050276 container create 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.247442837 +0000 UTC m=+0.042222312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:38 compute-0 systemd[1]: Started libpod-conmon-8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004.scope.
Oct 02 19:21:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.417045577 +0000 UTC m=+0.211825112 container init 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.429161937 +0000 UTC m=+0.223941392 container start 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.434837742 +0000 UTC m=+0.229617227 container attach 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:21:38 compute-0 sleepy_bohr[280419]: 167 167
Oct 02 19:21:38 compute-0 systemd[1]: libpod-8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004.scope: Deactivated successfully.
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.442531422 +0000 UTC m=+0.237310867 container died 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a1702241e8247b16a18d3c4e7ef6ede817431b9c7987209d88fd2b6e914fba-merged.mount: Deactivated successfully.
Oct 02 19:21:38 compute-0 podman[280372]: 2025-10-02 19:21:38.51111978 +0000 UTC m=+0.305899235 container remove 8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:21:38 compute-0 systemd[1]: libpod-conmon-8a491f5275d3c949867c8840cdd0fb43e88d47019db97fe74b525c0e7c189004.scope: Deactivated successfully.
Oct 02 19:21:38 compute-0 ceph-mon[191910]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:38 compute-0 podman[280489]: 2025-10-02 19:21:38.74017009 +0000 UTC m=+0.080964156 container create 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:21:38 compute-0 systemd[1]: Started libpod-conmon-6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51.scope.
Oct 02 19:21:38 compute-0 podman[280489]: 2025-10-02 19:21:38.713881344 +0000 UTC m=+0.054675440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:21:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98684f26b77c48ac2b98399a9956b14f3d8f1208bcabe70a206ebaf1f5fe461c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98684f26b77c48ac2b98399a9956b14f3d8f1208bcabe70a206ebaf1f5fe461c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98684f26b77c48ac2b98399a9956b14f3d8f1208bcabe70a206ebaf1f5fe461c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98684f26b77c48ac2b98399a9956b14f3d8f1208bcabe70a206ebaf1f5fe461c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:21:38 compute-0 podman[280489]: 2025-10-02 19:21:38.888488001 +0000 UTC m=+0.229282077 container init 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:21:38 compute-0 podman[280489]: 2025-10-02 19:21:38.909200925 +0000 UTC m=+0.249995001 container start 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:21:38 compute-0 podman[280489]: 2025-10-02 19:21:38.914959562 +0000 UTC m=+0.255753648 container attach 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:21:38 compute-0 python3.9[280528]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:39 compute-0 sudo[280704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdqscyjpxixmuoajhuxpwmxvrxspxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432899.3516865-220-29877315141059/AnsiballZ_file.py'
Oct 02 19:21:39 compute-0 sudo[280704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]: {
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_id": 1,
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "type": "bluestore"
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     },
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_id": 2,
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "type": "bluestore"
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     },
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_id": 0,
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:         "type": "bluestore"
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]:     }
Oct 02 19:21:40 compute-0 upbeat_hermann[280532]: }
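`ceph-volume raw list` reports the same three OSDs but keyed by OSD fsid rather than OSD id, with each device given as its device-mapper path. Cross-checking the two listings is a cheap consistency test; a sketch, assuming both JSON payloads have been captured as strings:

    import json

    def cross_check(lvm_payload, raw_payload):
        """Verify lvm list and raw list agree on osd_fsid <-> osd_id pairs."""
        lvm = {
            lv["tags"]["ceph.osd_fsid"]: int(osd_id)
            for osd_id, lvs in json.loads(lvm_payload).items()
            for lv in lvs
        }
        raw = {fsid: entry["osd_id"]
               for fsid, entry in json.loads(raw_payload).items()}
        assert lvm == raw, "lvm list and raw list disagree: %r vs %r" % (lvm, raw)

    # With the two payloads above: dbf9fafa... -> 0, 82844b2c... -> 1,
    # afe0acfe... -> 2, in agreement between both listings.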
Oct 02 19:21:40 compute-0 python3.9[280707]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:40 compute-0 sudo[280704]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:40 compute-0 systemd[1]: libpod-6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51.scope: Deactivated successfully.
Oct 02 19:21:40 compute-0 systemd[1]: libpod-6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51.scope: Consumed 1.204s CPU time.
Oct 02 19:21:40 compute-0 podman[280489]: 2025-10-02 19:21:40.128196185 +0000 UTC m=+1.468990251 container died 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98684f26b77c48ac2b98399a9956b14f3d8f1208bcabe70a206ebaf1f5fe461c-merged.mount: Deactivated successfully.
Oct 02 19:21:40 compute-0 podman[280489]: 2025-10-02 19:21:40.247778563 +0000 UTC m=+1.588572669 container remove 6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:40 compute-0 systemd[1]: libpod-conmon-6daccf14994ff9dfc35566b198770879885289dbf28c704dea82b65c3afbaa51.scope: Deactivated successfully.
Oct 02 19:21:40 compute-0 sudo[280262]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:21:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:21:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:40 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 444cdd7e-4b4c-4f0b-9b34-74a6ecdc7ed8 does not exist
Oct 02 19:21:40 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 87616c13-5c54-423d-b1a7-6d698ffeeac2 does not exist
Oct 02 19:21:40 compute-0 sudo[280755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:21:40 compute-0 sudo[280755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:40 compute-0 sudo[280755]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:40 compute-0 sudo[280803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:21:40 compute-0 sudo[280803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:21:40 compute-0 sudo[280803]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:40 compute-0 sudo[280930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emrndkjpqaslfllceskenjgjwbranvdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432900.431769-228-5597087120241/AnsiballZ_stat.py'
Oct 02 19:21:40 compute-0 sudo[280930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:41 compute-0 python3.9[280932]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:41 compute-0 sudo[280930]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:41 compute-0 ceph-mon[191910]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:21:41 compute-0 sudo[281008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzpfwxzbqgxletkidlbscruzkqdbtpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432900.431769-228-5597087120241/AnsiballZ_file.py'
Oct 02 19:21:41 compute-0 sudo[281008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:41 compute-0 python3.9[281010]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:41 compute-0 sudo[281008]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:42 compute-0 sudo[281160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycvnmwtmcvxpkfdpjfsqyqltjdfglpkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432902.1201138-228-186247228080453/AnsiballZ_stat.py'
Oct 02 19:21:42 compute-0 sudo[281160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:42 compute-0 python3.9[281162]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:42 compute-0 sudo[281160]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:43 compute-0 sudo[281238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfgbpboxpalsartlqxociroyivjrsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432902.1201138-228-186247228080453/AnsiballZ_file.py'
Oct 02 19:21:43 compute-0 sudo[281238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:43 compute-0 ceph-mon[191910]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:43 compute-0 python3.9[281240]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:43 compute-0 sudo[281238]: pam_unix(sudo:session): session closed for user root
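Both edpm helper scripts are installed root-owned with mode 0700 and the SELinux type container_file_t (the type podman-managed content is labeled with). The equivalent host-side steps, as a sketch using the paths from the log and assuming it runs as root:

    import subprocess
    from pathlib import Path

    for script in ("/var/local/libexec/edpm-container-shutdown",
                   "/var/local/libexec/edpm-start-podman-container"):
        Path(script).chmod(0o700)                                  # root-only rwx
        subprocess.run(["chown", "root:root", script], check=True)
        subprocess.run(["chcon", "-t", "container_file_t", script], check=True)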
Oct 02 19:21:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:44 compute-0 sudo[281390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulvzuvzrweksyaehgcsxzcjqnhypueok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432903.8551478-251-73226730660791/AnsiballZ_file.py'
Oct 02 19:21:44 compute-0 sudo[281390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:44 compute-0 python3.9[281392]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:44 compute-0 sudo[281390]: pam_unix(sudo:session): session closed for user root
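Note the mode=420 in the file invocation above: the journal records the mode as a decimal integer, and 420 is simply 0o644 (rw-r--r--). This is the familiar YAML gotcha where an unquoted `0644` is parsed as an octal literal and logged back in decimal; the permission bits applied are the intended ones:

    # decimal 420 == octal 644, i.e. rw-r--r--
    assert 420 == 0o644
    print(oct(420))   # -> 0o644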
Oct 02 19:21:45 compute-0 ceph-mon[191910]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:45 compute-0 sudo[281542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkpmgmjkwxqpihofhrglqrpblygahpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432904.9035826-259-172823703599115/AnsiballZ_stat.py'
Oct 02 19:21:45 compute-0 sudo[281542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:45 compute-0 python3.9[281544]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:45 compute-0 sudo[281542]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:46 compute-0 sudo[281620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonryrsrmketthbehfuyhznlocjfxhsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432904.9035826-259-172823703599115/AnsiballZ_file.py'
Oct 02 19:21:46 compute-0 sudo[281620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:46 compute-0 python3.9[281622]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:46 compute-0 sudo[281620]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:47 compute-0 sudo[281772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgynhtqvzbqqugivaadhmxyhadfesria ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432906.5800176-271-255598455724025/AnsiballZ_stat.py'
Oct 02 19:21:47 compute-0 sudo[281772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:47 compute-0 python3.9[281774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:47 compute-0 sudo[281772]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:47 compute-0 ceph-mon[191910]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:47 compute-0 sudo[281850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjonzgorxzjhczgcvodtcikrohwdwjkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432906.5800176-271-255598455724025/AnsiballZ_file.py'
Oct 02 19:21:47 compute-0 sudo[281850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:48 compute-0 python3.9[281852]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:48 compute-0 sudo[281850]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:48 compute-0 podman[281951]: 2025-10-02 19:21:48.702715385 +0000 UTC m=+0.120051771 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:21:48 compute-0 podman[281946]: 2025-10-02 19:21:48.731222182 +0000 UTC m=+0.150584394 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930)
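The two health_status=healthy entries come from podman's periodic healthcheck timers executing each container's configured 'test' command (the /openstack/healthcheck wrappers in the config_data). The same check can be driven on demand; a sketch, assuming the container names from the log:

    import subprocess

    def is_healthy(name):
        """Run the container's configured healthcheck once; exit 0 means healthy."""
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    for container in ("podman_exporter", "ceilometer_agent_compute"):
        print(container, "healthy" if is_healthy(container) else "unhealthy")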
Oct 02 19:21:48 compute-0 sudo[282045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aawjedpnkcnugjirbpzvpilidxvtnuae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432908.3522966-283-222085904248642/AnsiballZ_systemd.py'
Oct 02 19:21:48 compute-0 sudo[282045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:49 compute-0 python3.9[282047]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
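[Note] The logged module arguments (daemon_reload=True, enabled=True, state=started) correspond to roughly this shell sequence:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service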
Oct 02 19:21:49 compute-0 systemd[1]: Reloading.
Oct 02 19:21:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:49 compute-0 ceph-mon[191910]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:49 compute-0 systemd-rc-local-generator[282075]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:21:49 compute-0 systemd-sysv-generator[282078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:21:49 compute-0 sudo[282045]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:50 compute-0 sudo[282236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfsbetaytkuswlyewbtvgfzrqroxhwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432910.0756516-291-173137785746798/AnsiballZ_stat.py'
Oct 02 19:21:50 compute-0 sudo[282236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:50 compute-0 python3.9[282238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:50 compute-0 sudo[282236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:51 compute-0 sudo[282314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjifddauxfzumhnvqfullihyeouvdkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432910.0756516-291-173137785746798/AnsiballZ_file.py'
Oct 02 19:21:51 compute-0 sudo[282314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:51 compute-0 ceph-mon[191910]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:51 compute-0 python3.9[282316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:51 compute-0 sudo[282314]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:52 compute-0 sudo[282466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbfitzvqfuggcuvkuwrjekcnvzpiylyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432911.7428217-303-72879256420009/AnsiballZ_stat.py'
Oct 02 19:21:52 compute-0 sudo[282466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:52 compute-0 python3.9[282468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:52 compute-0 sudo[282466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:52 compute-0 sudo[282565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crcwosvjlhudvftmlxrifvufmeabzkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432911.7428217-303-72879256420009/AnsiballZ_file.py'
Oct 02 19:21:52 compute-0 sudo[282565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:53 compute-0 podman[282518]: 2025-10-02 19:21:53.041089018 +0000 UTC m=+0.145999589 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:21:53 compute-0 podman[282519]: 2025-10-02 19:21:53.042963219 +0000 UTC m=+0.147619093 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:21:53 compute-0 python3.9[282581]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:21:53 compute-0 sudo[282565]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:53 compute-0 ceph-mon[191910]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:54 compute-0 sudo[282742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzdzuvhduoiuksxnoxdeeoqupxyzjlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432913.5141163-315-46927801686481/AnsiballZ_systemd.py'
Oct 02 19:21:54 compute-0 sudo[282742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:54 compute-0 python3.9[282744]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:21:54 compute-0 systemd[1]: Reloading.
Oct 02 19:21:54 compute-0 systemd-sysv-generator[282773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:21:54 compute-0 systemd-rc-local-generator[282770]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:21:55 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:21:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:21:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:21:55 compute-0 systemd[1]: Finished Create netns directory.
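[Note] netns-placeholder is a oneshot unit that appears to prepare /run/netns so it can later be bind-mounted shared into containers (see the /run/netns:/run/netns:shared volume on ovn_metadata_agent below). A quick way to confirm the result:

    systemctl cat netns-placeholder.service   # show the unit that was just run
    findmnt -o TARGET,PROPAGATION /run/netns  # expect a shared mount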
Oct 02 19:21:55 compute-0 sudo[282742]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:55 compute-0 ceph-mon[191910]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:21:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 02 19:21:55 compute-0 sudo[282937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrcunwktohfhidiaqebgebnnfynxoupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432915.4521613-325-50559427932274/AnsiballZ_file.py'
Oct 02 19:21:55 compute-0 sudo[282937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:56 compute-0 python3.9[282939]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
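[Note] The file module arguments above translate to roughly these commands; chcon is the one-shot equivalent of setype=container_file_t (a persistent rule would use semanage fcontext plus restorecon):

    mkdir -p /var/lib/openstack/healthchecks
    chown zuul:zuul /var/lib/openstack/healthchecks
    chmod 0755 /var/lib/openstack/healthchecks
    chcon -t container_file_t /var/lib/openstack/healthchecks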
Oct 02 19:21:56 compute-0 sudo[282937]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:57 compute-0 sudo[283089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjwdhtjxebrofavrgsjczezmevqzfvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432916.5556638-333-227031975328071/AnsiballZ_stat.py'
Oct 02 19:21:57 compute-0 sudo[283089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:57 compute-0 python3.9[283091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:21:57 compute-0 sudo[283089]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:57 compute-0 ceph-mon[191910]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 02 19:21:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct 02 19:21:58 compute-0 sudo[283212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdshlsnjzhyoaijcnnysqdazsouzetdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432916.5556638-333-227031975328071/AnsiballZ_copy.py'
Oct 02 19:21:58 compute-0 sudo[283212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:58 compute-0 python3.9[283214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432916.5556638-333-227031975328071/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:58 compute-0 sudo[283212]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:21:59 compute-0 sudo[283397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfpklclrjrkxbjidsmmfvsckmwjusuwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432918.8869364-350-247275036460142/AnsiballZ_file.py'
Oct 02 19:21:59 compute-0 podman[283338]: 2025-10-02 19:21:59.422075759 +0000 UTC m=+0.101065915 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:59 compute-0 sudo[283397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:21:59 compute-0 ceph-mon[191910]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct 02 19:21:59 compute-0 podman[283339]: 2025-10-02 19:21:59.459318443 +0000 UTC m=+0.136391587 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:21:59 compute-0 python3.9[283409]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:21:59 compute-0 sudo[283397]: pam_unix(sudo:session): session closed for user root
Oct 02 19:21:59 compute-0 podman[157186]: time="2025-10-02T19:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:21:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32818 "" "Go-http-client/1.1"
Oct 02 19:21:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
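[Note] These two GET lines are podman_exporter scraping the libpod REST API over the socket mounted into it (unix:///run/podman/podman.sock per its config_data). The same query can be issued by hand; 'd' is just a placeholder hostname required by curl's URL syntax:

    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'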
Oct 02 19:21:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct 02 19:22:00 compute-0 sudo[283575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhtukcpzqxtuvsnedfdqafrkrrpjiaad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432919.980091-358-12910285630270/AnsiballZ_stat.py'
Oct 02 19:22:00 compute-0 sudo[283575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:00 compute-0 podman[283533]: 2025-10-02 19:22:00.569632612 +0000 UTC m=+0.127477033 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., release-0.7.12=, config_id=edpm, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:22:00 compute-0 python3.9[283580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:22:00 compute-0 sudo[283575]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:01 compute-0 sudo[283701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzsdkfhfajnrrszgubtqnjalmbdtiyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432919.980091-358-12910285630270/AnsiballZ_copy.py'
Oct 02 19:22:01 compute-0 sudo[283701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:01 compute-0 openstack_network_exporter[159337]: ERROR   19:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:22:01 compute-0 openstack_network_exporter[159337]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:01 compute-0 openstack_network_exporter[159337]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:01 compute-0 openstack_network_exporter[159337]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:22:01 compute-0 openstack_network_exporter[159337]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
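[Note] These exporter errors are largely expected on a compute node: ovn-northd runs only on controller nodes, and the dpif-netdev/* appctl commands return data only for the userspace (DPDK) datapath, which this host does not use. The missing-control-socket complaint can be checked against the socket directory the container mounts:

    ls -l /var/run/openvswitch/   # ovs-vswitchd and ovsdb-server *.ctl sockets live here
    ovs-appctl dpif/show          # list the datapaths actually present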
Oct 02 19:22:01 compute-0 ceph-mon[191910]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct 02 19:22:01 compute-0 python3.9[283703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432919.980091-358-12910285630270/.source.json _original_basename=.yxu6z3iv follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:01 compute-0 sudo[283701]: pam_unix(sudo:session): session closed for user root
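[Note] The file copied here is a kolla config.json, which tells kolla_start which command to exec and which config files to copy into place first. A hypothetical minimal example of the format (paths and names are illustrative, not read from this host):

    {
      "command": "neutron-ovn-metadata-agent --config-dir /etc/neutron.conf.d",
      "config_files": [
        {
          "source": "/var/lib/openstack/config/neutron-metadata/01-neutron.conf",
          "dest": "/etc/neutron.conf.d/01-neutron.conf",
          "owner": "neutron",
          "perm": "0600"
        }
      ]
    }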
Oct 02 19:22:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Oct 02 19:22:02 compute-0 sudo[283853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbvzprjgvuadyowlfgufzesnnwzlfel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432921.9471455-373-86012064448417/AnsiballZ_file.py'
Oct 02 19:22:02 compute-0 sudo[283853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:02 compute-0 python3.9[283855]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:02 compute-0 sudo[283853]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:22:03
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes']
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
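[Note] The balancer pass prepared 0 of 10 possible changes because every PG is already active+clean and evenly placed. Its state can be inspected with:

    ceph balancer status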
Oct 02 19:22:03 compute-0 ceph-mon[191910]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Oct 02 19:22:03 compute-0 sudo[284005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkoccildjsfxknnwzladaalerfqkxsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432923.0542924-381-172355021306057/AnsiballZ_stat.py'
Oct 02 19:22:03 compute-0 sudo[284005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:22:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:22:03 compute-0 sudo[284005]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:04 compute-0 sudo[284128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwwswrdmwolbawybqebbuhacuxdgruf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432923.0542924-381-172355021306057/AnsiballZ_copy.py'
Oct 02 19:22:04 compute-0 sudo[284128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:04 compute-0 sudo[284128]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:05 compute-0 ceph-mon[191910]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:22:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:22:05 compute-0 sudo[284280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angapdrtltshodenhiiruyraxczkiabk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432925.1991992-398-60292214279287/AnsiballZ_container_config_data.py'
Oct 02 19:22:05 compute-0 sudo[284280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:06 compute-0 python3.9[284282]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 02 19:22:06 compute-0 sudo[284280]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:07 compute-0 sudo[284432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvnnqgjmrppjgnbcgpdfwapgnppyqwhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432926.5713027-407-161825632032452/AnsiballZ_container_config_hash.py'
Oct 02 19:22:07 compute-0 sudo[284432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:07 compute-0 python3.9[284434]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:22:07 compute-0 sudo[284432]: pam_unix(sudo:session): session closed for user root
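[Note] container_config_data and container_config_hash read the generated *.json under the config dir and derive a content hash; that value surfaces later as the EDPM_CONFIG_HASH environment variable on the ovn_metadata_agent container, so a config change forces a recreate. The exact algorithm is internal to edpm-ansible, but the idea is approximately:

    find /var/lib/edpm-config/container-startup-config/ovn_metadata_agent \
        -name '*.json' -exec sha256sum {} +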
Oct 02 19:22:07 compute-0 ceph-mon[191910]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:22:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 02 19:22:08 compute-0 sudo[284584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evojjooffgugrienuqghdmdvxfobttav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432927.8176746-416-140251696579970/AnsiballZ_podman_container_info.py'
Oct 02 19:22:08 compute-0 sudo[284584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:08 compute-0 python3.9[284586]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:22:09 compute-0 sudo[284584]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:09 compute-0 ceph-mon[191910]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct 02 19:22:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct 02 19:22:10 compute-0 sudo[284761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrluvmbezcljwyxfagycayeixqlomtsr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432930.0742176-429-246600273328868/AnsiballZ_edpm_container_manage.py'
Oct 02 19:22:10 compute-0 sudo[284761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:11 compute-0 python3[284763]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:22:11 compute-0 ceph-mon[191910]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct 02 19:22:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:22:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
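[Note] The pg_autoscaler targets above are reproducible from the logged inputs as pg_target = capacity_fraction x bias x (OSD count x mon_target_pg_per_osd). With 60 GiB raw, consistent with three OSDs at the default mon_target_pg_per_osd = 100, the multiplier is 300:

    .mgr:               7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337  -> quantized to 1
    cephfs.cephfs.meta: 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635  -> quantized to 16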
Oct 02 19:22:13 compute-0 ceph-mon[191910]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 15 op/s
Oct 02 19:22:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 19:22:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:15 compute-0 ceph-mon[191910]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 19:22:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 19:22:16 compute-0 ceph-mon[191910]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 02 19:22:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:19 compute-0 ceph-mon[191910]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:19 compute-0 podman[284821]: 2025-10-02 19:22:19.787757993 +0000 UTC m=+0.214304839 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:22:19 compute-0 podman[284820]: 2025-10-02 19:22:19.810121752 +0000 UTC m=+0.234840479 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:22:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:21 compute-0 ceph-mon[191910]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:21 compute-0 podman[284776]: 2025-10-02 19:22:21.862739022 +0000 UTC m=+10.682659255 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:22:22 compute-0 podman[284908]: 2025-10-02 19:22:22.111252353 +0000 UTC m=+0.057673952 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:22:22 compute-0 podman[284908]: 2025-10-02 19:22:22.371734789 +0000 UTC m=+0.318156328 container create 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:22:22 compute-0 python3[284763]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:22:22 compute-0 sudo[284761]: pam_unix(sudo:session): session closed for user root
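[Note] After edpm_container_manage finishes, the new container can be confirmed via its labels and logs:

    podman ps -a --filter label=config_id=ovn_metadata_agent
    podman logs --tail 20 ovn_metadata_agent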
Oct 02 19:22:23 compute-0 sudo[285121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpoxaaskfuuxdkoquihpmklwhowqzku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432942.8267436-437-129371741071955/AnsiballZ_stat.py'
Oct 02 19:22:23 compute-0 sudo[285121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:23 compute-0 ceph-mon[191910]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:23 compute-0 podman[285069]: 2025-10-02 19:22:23.375623229 +0000 UTC m=+0.139872242 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct 02 19:22:23 compute-0 podman[285070]: 2025-10-02 19:22:23.392622502 +0000 UTC m=+0.156612518 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:22:23 compute-0 python3.9[285136]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:22:23 compute-0 sudo[285121]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.438 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.438 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.438 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.439 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.441 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.453 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.453 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.454 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.454 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.455 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.457 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.458 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.458 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.462 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.463 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:22:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:22:24.464 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
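Every pollster in this cycle ended in a skip because the local_instances discovery returned an empty list, and the registration lines show that result being reused from the discovery cache ({'local_instances': []}) rather than re-discovered per pollster. A toy sketch of that per-cycle memoization (hypothetical names; not the real AgentManager):

```python
# Per-cycle discovery caching, sketched: the first pollster needing
# "local_instances" runs the discovery; later pollsters in the same cycle
# reuse the cached result. Here it is empty because no VM is running on
# compute-0 yet, so every pollster above is skipped.
def discover_cached(cache, method, run_discovery):
    if method not in cache:
        cache[method] = run_discovery(method)
    return cache[method]

cycle_cache = {}
for _ in range(3):
    instances = discover_cached(cycle_cache, "local_instances", lambda m: [])
assert cycle_cache == {"local_instances": []} and instances == []
```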
Oct 02 19:22:24 compute-0 sudo[285294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjgvsvscppveahxmxdatjnhoukfxwtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432943.9763026-446-84417637321967/AnsiballZ_file.py'
Oct 02 19:22:24 compute-0 sudo[285294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:24 compute-0 python3.9[285297]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:24 compute-0 sudo[285294]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:25 compute-0 sudo[285371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsdjoyupuhrlwulajhfadxxziuchltek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432943.9763026-446-84417637321967/AnsiballZ_stat.py'
Oct 02 19:22:25 compute-0 sudo[285371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:25 compute-0 python3.9[285373]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:22:25 compute-0 sudo[285371]: pam_unix(sudo:session): session closed for user root
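The ansible-stat invocation above asks for a sha1 checksum alongside the usual stat fields. A rough, stripped-down equivalent of what the module computes (the real module returns far more fields, e.g. ownership, timestamps, and the MIME type when get_mime=True):

```python
import hashlib
import os

# Simplified analogue of ansible-stat with get_checksum=True and
# checksum_algorithm=sha1: stat the file and hash its contents in chunks.
def stat_with_checksum(path):
    st = os.stat(path)
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {"size": st.st_size,
            "mode": oct(st.st_mode & 0o7777),
            "checksum": digest.hexdigest()}

print(stat_with_checksum("/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer"))
```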
Oct 02 19:22:25 compute-0 ceph-mon[191910]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
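ceph-mon and ceph-mgr log the same pgmap summaries a version apart throughout this section. For eyeballing these in a long log, a toy parser whose field layout is inferred from the messages themselves rather than from Ceph documentation:

```python
import re

# Toy parser for the pgmap summary lines above (sketch only).
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

m = PGMAP.match("pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, "
                "148 MiB used, 60 GiB / 60 GiB avail")
assert m and m["pgs"] == "321" and m["avail"] == "60 GiB"
```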
Oct 02 19:22:26 compute-0 sudo[285522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugcttybzqgyufulbssasyptpnqlkbjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432945.4431932-446-110634973229587/AnsiballZ_copy.py'
Oct 02 19:22:26 compute-0 sudo[285522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:26 compute-0 python3.9[285524]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432945.4431932-446-110634973229587/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:26 compute-0 sudo[285522]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:26 compute-0 sudo[285598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclnakqkunvqiwnxugujnbhxlmehzqmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432945.4431932-446-110634973229587/AnsiballZ_systemd.py'
Oct 02 19:22:26 compute-0 sudo[285598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:27 compute-0 python3.9[285600]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:22:27 compute-0 systemd[1]: Reloading.
Oct 02 19:22:27 compute-0 ceph-mon[191910]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:27 compute-0 systemd-sysv-generator[285628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:22:27 compute-0 systemd-rc-local-generator[285622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:22:27 compute-0 sudo[285598]: pam_unix(sudo:session): session closed for user root
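The ansible-systemd task above ran with daemon_reload=True; the "Reloading." line and the sysv/rc-local generator messages are systemd re-running its unit generators while it re-reads unit files. The shell equivalent, sketched in Python:

```python
import subprocess

# What ansible-systemd with daemon_reload=True boils down to; the generator
# notices above are emitted by systemd itself during this reload.
subprocess.run(["systemctl", "daemon-reload"], check=True)
```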
Oct 02 19:22:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:28 compute-0 sudo[285710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcpwhiwdguxfwtkdsfgnxqaepmiqeljw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432945.4431932-446-110634973229587/AnsiballZ_systemd.py'
Oct 02 19:22:28 compute-0 sudo[285710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:28 compute-0 python3.9[285712]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:28 compute-0 systemd[1]: Reloading.
Oct 02 19:22:28 compute-0 systemd-rc-local-generator[285740]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:22:28 compute-0 systemd-sysv-generator[285744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
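This second reload accompanies the ansible-systemd call with state=restarted and enabled=True: enabling rewrites the unit's symlinks, and the restart produces the "Starting ovn_metadata_agent container..." line below. A sketch of the equivalent systemctl steps:

```python
import subprocess

# Shell equivalent of ansible-systemd state=restarted, enabled=True.
unit = "edpm_ovn_metadata_agent.service"
subprocess.run(["systemctl", "enable", unit], check=True)
subprocess.run(["systemctl", "restart", unit], check=True)
```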
Oct 02 19:22:29 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 02 19:22:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/174ae04c20e09d97124639ce43b818b977ae63da8c09a1a2de7df7c9b059b4b1/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/174ae04c20e09d97124639ce43b818b977ae63da8c09a1a2de7df7c9b059b4b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:29 compute-0 ceph-mon[191910]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.
Oct 02 19:22:29 compute-0 podman[285754]: 2025-10-02 19:22:29.526066058 +0000 UTC m=+0.290146526 container init 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + sudo -E kolla_set_configs
Oct 02 19:22:29 compute-0 podman[285754]: 2025-10-02 19:22:29.56800073 +0000 UTC m=+0.332081148 container start 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 19:22:29 compute-0 edpm-start-podman-container[285754]: ovn_metadata_agent
Oct 02 19:22:29 compute-0 podman[285772]: 2025-10-02 19:22:29.612596745 +0000 UTC m=+0.105336700 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:22:29 compute-0 podman[285771]: 2025-10-02 19:22:29.633854865 +0000 UTC m=+0.130975430 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Validating config file
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Copying service configuration files
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 02 19:22:29 compute-0 podman[285792]: 2025-10-02 19:22:29.654219079 +0000 UTC m=+0.076917786 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Writing out command to execute
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
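The INFO:__main__ lines above are kolla_set_configs consuming /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy: copy each config_files entry to its dest, chmod it, apply the permissions list, and write out the command for later. A compressed sketch of that flow (field names follow kolla's config.json schema; the real tool also handles globs, ownership, deletion of stale targets, and COPY_ONCE):

    import json, os, shutil

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:
            config = json.load(f)
        for entry in config.get("config_files", []):
            dest = entry["dest"]
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy(entry["source"], dest)                 # "Copying ... to ..."
            os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission for ..."
        for perm in config.get("permissions", []):
            os.chmod(perm["path"], 0o755)  # simplified; kolla honours owner/recurse
        # "Writing out command to execute": kolla_start reads this file later
        with open("/run_command", "w") as f:
            f.write(config["command"])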
Oct 02 19:22:29 compute-0 edpm-start-podman-container[285753]: Creating additional drop-in dependency for "ovn_metadata_agent" (6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19)
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: ++ cat /run_command
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + CMD=neutron-ovn-metadata-agent
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + ARGS=
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + sudo kolla_copy_cacerts
Oct 02 19:22:29 compute-0 systemd[1]: Reloading.
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + [[ ! -n '' ]]
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + . kolla_extend_start
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: Running command: 'neutron-ovn-metadata-agent'
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + umask 0022
Oct 02 19:22:29 compute-0 ovn_metadata_agent[285768]: + exec neutron-ovn-metadata-agent
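The shell trace above is the tail end of kolla_start: read /run_command, source kolla_extend_start, set the umask, then exec the service so it replaces the wrapper shell. The same steps in Python, as a sketch:

    import os, shlex

    # Pick up the command written by kolla_set_configs and exec it,
    # replacing the current process (no child shell is left behind).
    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())  # ['neutron-ovn-metadata-agent']
    print(f"Running command: {cmd[0]!r}")
    os.umask(0o022)
    os.execvp(cmd[0], cmd)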
Oct 02 19:22:29 compute-0 podman[157186]: time="2025-10-02T19:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:22:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:22:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7260 "" "Go-http-client/1.1"
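The two GET lines are the podman system service answering libpod REST calls (a collector polling container state and stats). The same endpoint can be queried over the unix socket with nothing but the standard library; the socket path below assumes the default root-mode service:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket (the host name is a placeholder)."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    # /run/podman/podman.sock is the default root socket; adjust if rootless.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for ctr in json.loads(resp.read()):
        print(ctr["Names"], ctr["State"])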
Oct 02 19:22:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:29 compute-0 systemd-rc-local-generator[285887]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:22:29 compute-0 systemd-sysv-generator[285891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:22:30 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 02 19:22:30 compute-0 sudo[285710]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:30 compute-0 sshd-session[276229]: Connection closed by 192.168.122.30 port 32884
Oct 02 19:22:30 compute-0 sshd-session[276226]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:22:30 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 02 19:22:30 compute-0 systemd[1]: session-54.scope: Consumed 1min 28.395s CPU time.
Oct 02 19:22:30 compute-0 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Oct 02 19:22:30 compute-0 systemd-logind[793]: Removed session 54.
Oct 02 19:22:30 compute-0 podman[285922]: 2025-10-02 19:22:30.830946958 +0000 UTC m=+0.117116242 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc.)
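The recurring health_status events come from transient "podman healthcheck run" units that systemd schedules per container (see the "Started /usr/bin/podman healthcheck run ..." line earlier): a container stays healthy while its configured test, here /openstack/healthcheck, exits 0. A minimal sketch of polling one container the same way:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command and exits 0 for healthy, non-zero otherwise.
    def is_healthy(container: str) -> bool:
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    print(is_healthy("ovn_metadata_agent"))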
Oct 02 19:22:31 compute-0 openstack_network_exporter[159337]: ERROR   19:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:22:31 compute-0 openstack_network_exporter[159337]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:31 compute-0 openstack_network_exporter[159337]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:31 compute-0 openstack_network_exporter[159337]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:22:31 compute-0 openstack_network_exporter[159337]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
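The exporter errors above are expected on a compute node: appctl-style calls need the daemons' unixctl control sockets, and ovn-northd (plus a standalone ovsdb-server) normally runs on the controllers, not here. A quick existence check that mirrors what the exporter is probing, using the usual OVS/OVN default paths:

    import glob

    # Control socket patterns the appctl calls depend on; an empty glob
    # reproduces the "no control socket files found" condition above.
    PATTERNS = {
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd":   "/var/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in PATTERNS.items():
        hits = glob.glob(pattern)
        print(f"{daemon}: {hits or 'no control socket files found'}")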
Oct 02 19:22:31 compute-0 ceph-mon[191910]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.226 285790 INFO neutron.common.config [-] Logging enabled!
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.227 285790 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.227 285790 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.227 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.227 285790 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.227 285790 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.228 285790 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.229 285790 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.230 285790 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.231 285790 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.232 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.233 285790 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.234 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.235 285790 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.236 285790 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.237 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.238 285790 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.239 285790 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
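The block above is oslo.config's log_opt_values() walking every registered top-level option; values registered with secret=True (metadata_proxy_shared_secret, transport_url) are printed as **** rather than their real contents. A self-contained sketch of that mechanism, with hypothetical option names:

    import logging
    from oslo_config import cfg

    # Options flagged secret=True are masked as '****' by log_opt_values(),
    # exactly as in the dump above.
    OPTS = [
        cfg.StrOpt("nova_metadata_host", default="127.0.0.1"),
        cfg.StrOpt("metadata_proxy_shared_secret", secret=True, default="s3cret"),
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS)

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)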
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.240 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.241 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.242 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.243 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.244 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.245 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.246 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.247 285790 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.248 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.249 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.250 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.251 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.252 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.253 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.254 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.255 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.256 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.257 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.258 285790 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.259 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.260 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.261 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.262 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.263 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.263 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.263 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.263 285790 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.263 285790 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.273 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.274 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.274 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.274 285790 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.274 285790 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.289 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1de2af17-a89c-45e5-97c6-db433f26bbb6 (UUID: 1de2af17-a89c-45e5-97c6-db433f26bbb6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.312 285790 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.313 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.313 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.313 285790 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.317 285790 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.326 285790 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.334 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1de2af17-a89c-45e5-97c6-db433f26bbb6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], external_ids={}, name=1de2af17-a89c-45e5-97c6-db433f26bbb6, nb_cfg_timestamp=1759431364505, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.335 285790 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f0b87690e80>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.336 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.336 285790 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.337 285790 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.337 285790 INFO oslo_service.service [-] Starting 1 workers
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.341 285790 DEBUG oslo_service.service [-] Started child 285942 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.344 285790 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpi6luopni/privsep.sock']
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.344 285942 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1021438'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.383 285942 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.384 285942 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.384 285942 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.389 285942 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.399 285942 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 19:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.409 285942 INFO eventlet.wsgi.server [-] (285942) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.065 285790 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.066 285790 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpi6luopni/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.935 285947 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.943 285947 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.948 285947 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:32.948 285947 INFO oslo.privsep.daemon [-] privsep daemon running as pid 285947
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.070 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[23a9dc37-8e0a-4aab-ad34-20911ecf64d4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:22:33 compute-0 ceph-mon[191910]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.594 285947 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.594 285947 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:22:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:33.594 285947 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:22:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.133 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[a7277942-fbf9-4777-baaf-a647c2ba15f8]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.137 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, column=external_ids, values=({'neutron:ovn-metadata-id': '1a3da8af-d6b1-589a-b664-63a6201f674d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.154 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.167 285790 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.167 285790 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.167 285790 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.167 285790 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.168 285790 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.168 285790 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.168 285790 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.168 285790 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.168 285790 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.169 285790 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.170 285790 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.171 285790 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.172 285790 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.173 285790 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.173 285790 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.173 285790 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.173 285790 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.174 285790 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.175 285790 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.175 285790 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.176 285790 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.176 285790 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.176 285790 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.177 285790 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.178 285790 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.179 285790 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.180 285790 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.180 285790 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.180 285790 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.180 285790 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.180 285790 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.181 285790 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.182 285790 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.183 285790 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.184 285790 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.185 285790 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.186 285790 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.186 285790 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.186 285790 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.186 285790 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.186 285790 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.187 285790 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.187 285790 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.187 285790 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.187 285790 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.187 285790 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.188 285790 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.189 285790 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.189 285790 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.189 285790 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.189 285790 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.189 285790 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.190 285790 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.191 285790 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.192 285790 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.192 285790 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.192 285790 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.192 285790 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.192 285790 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.193 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.194 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.195 285790 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.196 285790 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.196 285790 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.196 285790 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.196 285790 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.197 285790 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.198 285790 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.199 285790 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.200 285790 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.201 285790 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.202 285790 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.203 285790 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.204 285790 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.205 285790 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.206 285790 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.207 285790 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.208 285790 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.209 285790 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.210 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.211 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.212 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.213 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.214 285790 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.215 285790 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:22:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:22:34.215 285790 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:22:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:35 compute-0 ceph-mon[191910]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:36 compute-0 sshd-session[285952]: Accepted publickey for zuul from 192.168.122.30 port 36364 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:22:36 compute-0 systemd-logind[793]: New session 55 of user zuul.
Oct 02 19:22:36 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 02 19:22:36 compute-0 sshd-session[285952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:22:37 compute-0 ceph-mon[191910]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:37 compute-0 python3.9[286105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:22:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:38 compute-0 sudo[286259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwpwrrcsxmgkhzeciooemjruekswnjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432958.2243845-34-197949558787225/AnsiballZ_command.py'
Oct 02 19:22:38 compute-0 sudo[286259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:39 compute-0 python3.9[286261]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:22:39 compute-0 sudo[286259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:39 compute-0 ceph-mon[191910]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:40 compute-0 sudo[286421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfezkcslbwullesbhhysjlemlmdnfnbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432959.6573224-45-6864555935326/AnsiballZ_systemd_service.py'
Oct 02 19:22:40 compute-0 sudo[286421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:40 compute-0 sudo[286424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:40 compute-0 sudo[286424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:40 compute-0 sudo[286424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:40 compute-0 sudo[286449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:22:40 compute-0 sudo[286449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:40 compute-0 sudo[286449]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:40 compute-0 python3.9[286423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:22:40 compute-0 systemd[1]: Reloading.
Oct 02 19:22:40 compute-0 systemd-sysv-generator[286522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:22:40 compute-0 systemd-rc-local-generator[286518]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:22:41 compute-0 sudo[286474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:41 compute-0 sudo[286421]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:41 compute-0 sudo[286474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:41 compute-0 sudo[286474]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:41 compute-0 sudo[286542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:22:41 compute-0 sudo[286542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:41 compute-0 ceph-mon[191910]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:42 compute-0 sudo[286542]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 250cd026-133b-433a-a43d-7886ac9fa4d6 does not exist
Oct 02 19:22:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 08b9aa81-e373-4d9e-8a05-fd52c6bb303e does not exist
Oct 02 19:22:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9b53f602-2041-4cbf-b88e-8e2919d092f0 does not exist
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:22:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:22:42 compute-0 sudo[286739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:42 compute-0 sudo[286739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:42 compute-0 sudo[286739]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:42 compute-0 python3.9[286738]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:22:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:22:42 compute-0 sudo[286764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:22:42 compute-0 sudo[286764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:42 compute-0 sudo[286764]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:42 compute-0 network[286806]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:22:42 compute-0 network[286809]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:22:42 compute-0 network[286812]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:22:43 compute-0 sudo[286805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:43 compute-0 sudo[286805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:43 compute-0 sudo[286805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:43 compute-0 ceph-mon[191910]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:43 compute-0 sudo[286839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:22:43 compute-0 sudo[286839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.121092338 +0000 UTC m=+0.077447181 container create f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.082205739 +0000 UTC m=+0.038560592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:44 compute-0 systemd[1]: Started libpod-conmon-f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898.scope.
Oct 02 19:22:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.320916862 +0000 UTC m=+0.277271685 container init f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.342911071 +0000 UTC m=+0.299265874 container start f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:22:44 compute-0 pensive_black[286940]: 167 167
Oct 02 19:22:44 compute-0 systemd[1]: libpod-f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898.scope: Deactivated successfully.
Oct 02 19:22:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.377542075 +0000 UTC m=+0.333896968 container attach f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.378152981 +0000 UTC m=+0.334507784 container died f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2b11cd1a571182abb64d8fde811992a020416582abbe64265fc9d9cc3d90b5-merged.mount: Deactivated successfully.
Oct 02 19:22:44 compute-0 podman[286920]: 2025-10-02 19:22:44.621747168 +0000 UTC m=+0.578102001 container remove f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:22:44 compute-0 systemd[1]: libpod-conmon-f48e8dc6df6ce869eccccf82753f5e767f3b9804d1017dd0212045b75b79b898.scope: Deactivated successfully.
Oct 02 19:22:44 compute-0 podman[286982]: 2025-10-02 19:22:44.886329066 +0000 UTC m=+0.087336400 container create f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:22:44 compute-0 systemd[1]: Started libpod-conmon-f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a.scope.
Oct 02 19:22:44 compute-0 podman[286982]: 2025-10-02 19:22:44.845914935 +0000 UTC m=+0.046922349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:45 compute-0 podman[286982]: 2025-10-02 19:22:45.029596959 +0000 UTC m=+0.230604303 container init f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:22:45 compute-0 podman[286982]: 2025-10-02 19:22:45.047086276 +0000 UTC m=+0.248093620 container start f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:22:45 compute-0 podman[286982]: 2025-10-02 19:22:45.053516821 +0000 UTC m=+0.254524175 container attach f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:22:45 compute-0 ceph-mon[191910]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:46 compute-0 epic_lamarr[287001]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:22:46 compute-0 epic_lamarr[287001]: --> relative data size: 1.0
Oct 02 19:22:46 compute-0 epic_lamarr[287001]: --> All data devices are unavailable
Oct 02 19:22:46 compute-0 systemd[1]: libpod-f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a.scope: Deactivated successfully.
Oct 02 19:22:46 compute-0 systemd[1]: libpod-f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a.scope: Consumed 1.143s CPU time.
Oct 02 19:22:46 compute-0 podman[287073]: 2025-10-02 19:22:46.327752665 +0000 UTC m=+0.038611323 container died f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d8ce416e2cbfb2b2f39ba4fe166ef98c068d96f391152ba6b1600c7f9b63dd4-merged.mount: Deactivated successfully.
Oct 02 19:22:46 compute-0 podman[287073]: 2025-10-02 19:22:46.414463567 +0000 UTC m=+0.125322205 container remove f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:22:46 compute-0 systemd[1]: libpod-conmon-f4d744495e84b9d5bf568b26faf6cd1ad9e6fed3fdc40f328a9371376a89a63a.scope: Deactivated successfully.
Oct 02 19:22:46 compute-0 sudo[286839]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:46 compute-0 sudo[287092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:46 compute-0 sudo[287092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:46 compute-0 sudo[287092]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:46 compute-0 sudo[287122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:22:46 compute-0 sudo[287122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:46 compute-0 sudo[287122]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:46 compute-0 sudo[287151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:46 compute-0 sudo[287151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:46 compute-0 sudo[287151]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:46 compute-0 sudo[287179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:22:47 compute-0 sudo[287179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:47 compute-0 podman[287276]: 2025-10-02 19:22:47.503967778 +0000 UTC m=+0.083048853 container create 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:22:47 compute-0 systemd[1]: Started libpod-conmon-5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e.scope.
Oct 02 19:22:47 compute-0 podman[287276]: 2025-10-02 19:22:47.474543437 +0000 UTC m=+0.053624592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:47 compute-0 podman[287276]: 2025-10-02 19:22:47.626804485 +0000 UTC m=+0.205885650 container init 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:22:47 compute-0 podman[287276]: 2025-10-02 19:22:47.643114809 +0000 UTC m=+0.222195894 container start 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:22:47 compute-0 ceph-mon[191910]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:47 compute-0 podman[287276]: 2025-10-02 19:22:47.651655812 +0000 UTC m=+0.230736967 container attach 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:22:47 compute-0 awesome_aryabhata[287292]: 167 167
Oct 02 19:22:47 compute-0 systemd[1]: libpod-5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e.scope: Deactivated successfully.
Oct 02 19:22:47 compute-0 podman[287320]: 2025-10-02 19:22:47.719666635 +0000 UTC m=+0.041759589 container died 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e2b58fc6a1098d5d38a919002c5bff2b9674f37dea4a014136526418ead108c-merged.mount: Deactivated successfully.
Oct 02 19:22:47 compute-0 podman[287320]: 2025-10-02 19:22:47.76905697 +0000 UTC m=+0.091149904 container remove 5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:22:47 compute-0 systemd[1]: libpod-conmon-5f501b02de5e8ef187388d29da29d0e9e0e16624fd082d25107b372b74ac140e.scope: Deactivated successfully.
Oct 02 19:22:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:48 compute-0 podman[287392]: 2025-10-02 19:22:48.031445048 +0000 UTC m=+0.106345898 container create 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:22:48 compute-0 podman[287392]: 2025-10-02 19:22:47.977053106 +0000 UTC m=+0.051953826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:48 compute-0 systemd[1]: Started libpod-conmon-974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4.scope.
Oct 02 19:22:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb6d0205dc637f780b8318cab8fbb5f41cce9c2f714cb796f2df60ba49654d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb6d0205dc637f780b8318cab8fbb5f41cce9c2f714cb796f2df60ba49654d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb6d0205dc637f780b8318cab8fbb5f41cce9c2f714cb796f2df60ba49654d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb6d0205dc637f780b8318cab8fbb5f41cce9c2f714cb796f2df60ba49654d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:48 compute-0 sudo[287462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sczoqagltpsbxxpwwlavaoamjtfnvpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432967.6667724-64-30809053669306/AnsiballZ_systemd_service.py'
Oct 02 19:22:48 compute-0 podman[287392]: 2025-10-02 19:22:48.185201397 +0000 UTC m=+0.260102197 container init 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:22:48 compute-0 sudo[287462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:48 compute-0 podman[287392]: 2025-10-02 19:22:48.198572702 +0000 UTC m=+0.273473412 container start 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:22:48 compute-0 podman[287392]: 2025-10-02 19:22:48.214231778 +0000 UTC m=+0.289132558 container attach 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:22:48 compute-0 python3.9[287465]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:48 compute-0 sudo[287462]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:48 compute-0 ceph-mon[191910]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.747742) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432968747797, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3500736, "memory_usage": 3564224, "flush_reason": "Manual Compaction"}
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432968814128, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3435940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9696, "largest_seqno": 11738, "table_properties": {"data_size": 3426643, "index_size": 5919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17877, "raw_average_key_size": 19, "raw_value_size": 3408229, "raw_average_value_size": 3712, "num_data_blocks": 268, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432733, "oldest_key_time": 1759432733, "file_creation_time": 1759432968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 66443 microseconds, and 7116 cpu microseconds.
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.814182) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3435940 bytes OK
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.814210) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.863686) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.863722) EVENT_LOG_v1 {"time_micros": 1759432968863713, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.863741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3492208, prev total WAL file size 3492208, number of live WAL files 2.
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.865068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3355KB)], [26(5962KB)]
Oct 02 19:22:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432968865191, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9541502, "oldest_snapshot_seqno": -1}
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3690 keys, 7835029 bytes, temperature: kUnknown
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432969003348, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7835029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806743, "index_size": 17936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88625, "raw_average_key_size": 24, "raw_value_size": 7736531, "raw_average_value_size": 2096, "num_data_blocks": 777, "num_entries": 3690, "num_filter_entries": 3690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759432968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.004065) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7835029 bytes
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.050600) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.0 rd, 56.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4204, records dropped: 514 output_compression: NoCompression
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.050641) EVENT_LOG_v1 {"time_micros": 1759432969050624, "job": 10, "event": "compaction_finished", "compaction_time_micros": 138279, "compaction_time_cpu_micros": 25201, "output_level": 6, "num_output_files": 1, "total_output_size": 7835029, "num_input_records": 4204, "num_output_records": 3690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432969052041, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759432969053437, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:48.864852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.053715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.053725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.053729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.053733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:22:49.053738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:22:49 compute-0 quizzical_bell[287449]: {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     "0": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "devices": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "/dev/loop3"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             ],
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_name": "ceph_lv0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_size": "21470642176",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "name": "ceph_lv0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "tags": {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_name": "ceph",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.crush_device_class": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.encrypted": "0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_id": "0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.vdo": "0"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             },
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "vg_name": "ceph_vg0"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         }
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     ],
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     "1": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "devices": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "/dev/loop4"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             ],
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_name": "ceph_lv1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_size": "21470642176",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "name": "ceph_lv1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "tags": {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_name": "ceph",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.crush_device_class": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.encrypted": "0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_id": "1",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.vdo": "0"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             },
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "vg_name": "ceph_vg1"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         }
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     ],
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     "2": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "devices": [
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "/dev/loop5"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             ],
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_name": "ceph_lv2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_size": "21470642176",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "name": "ceph_lv2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "tags": {
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.cluster_name": "ceph",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.crush_device_class": "",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.encrypted": "0",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osd_id": "2",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:                 "ceph.vdo": "0"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             },
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "type": "block",
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:             "vg_name": "ceph_vg2"
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:         }
Oct 02 19:22:49 compute-0 quizzical_bell[287449]:     ]
Oct 02 19:22:49 compute-0 quizzical_bell[287449]: }
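The JSON block above, printed by the ceph-volume run inside the quizzical_bell container, maps each OSD id to its backing logical volume; its shape matches ceph-volume lvm list --format json output. A short sketch, assuming the block is saved to a file, that summarizes it per OSD using only the fields shown:

    # Summarize per-OSD LV layout from "lvm list"-style JSON.
    import json

    with open("lvm_list.json", encoding="utf-8") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

Run against the block above, this would print three lines mapping osd.0/1/2 to /dev/ceph_vg0..2 on /dev/loop3..5.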
Oct 02 19:22:49 compute-0 systemd[1]: libpod-974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4.scope: Deactivated successfully.
Oct 02 19:22:49 compute-0 podman[287392]: 2025-10-02 19:22:49.114534256 +0000 UTC m=+1.189434996 container died 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:22:49 compute-0 sudo[287632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbqskqtkfbadgserpkhonexlomsjkaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432968.815255-64-261977723024361/AnsiballZ_systemd_service.py'
Oct 02 19:22:49 compute-0 sudo[287632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb6d0205dc637f780b8318cab8fbb5f41cce9c2f714cb796f2df60ba49654d0-merged.mount: Deactivated successfully.
Oct 02 19:22:49 compute-0 python3.9[287634]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:49 compute-0 sudo[287632]: pam_unix(sudo:session): session closed for user root
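The systemd_service module call above (enabled=False, state=stopped) both stops and disables the TripleO virtlogd wrapper unit. Roughly the same effect at the shell level, minus the module's idempotence checks and error reporting, would be:

    # Rough shell-level equivalent of the ansible systemd_service call above.
    import subprocess

    unit = "tripleo_nova_virtlogd_wrapper.service"
    subprocess.run(["systemctl", "stop", unit], check=True)
    subprocess.run(["systemctl", "disable", unit], check=True)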
Oct 02 19:22:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:50 compute-0 sudo[287790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmuurhpuozdfxzanckcvtycrufcfglrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432970.04821-64-213294115117178/AnsiballZ_systemd_service.py'
Oct 02 19:22:50 compute-0 sudo[287790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:50 compute-0 podman[287392]: 2025-10-02 19:22:50.849263695 +0000 UTC m=+2.924164395 container remove 974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:22:50 compute-0 python3.9[287792]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:50 compute-0 systemd[1]: libpod-conmon-974526ae9adbe01459a4218122006a6509ecd5adc6729fb81cc398758d25e5f4.scope: Deactivated successfully.
Oct 02 19:22:50 compute-0 sudo[287179]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:50 compute-0 sudo[287790]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:51 compute-0 sudo[287794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:51 compute-0 sudo[287794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:51 compute-0 sudo[287794]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:51 compute-0 sudo[287843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:22:51 compute-0 sudo[287843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:51 compute-0 sudo[287843]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:51 compute-0 sudo[287892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:51 compute-0 sudo[287892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:51 compute-0 sudo[287892]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:51 compute-0 sudo[287952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:22:51 compute-0 sudo[287952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:51 compute-0 podman[287945]: 2025-10-02 19:22:51.469068791 +0000 UTC m=+0.120370161 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:22:51 compute-0 podman[287944]: 2025-10-02 19:22:51.476627847 +0000 UTC m=+0.128882313 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
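The two podman lines above are periodic container health-check events; the key=value fields inside the parentheses are machine-parseable. A rough sketch (hypothetical input path) that extracts the container name and health status from such lines:

    # Pull (name, health_status) pairs from podman health_status journal lines.
    import re

    HEALTH_RE = re.compile(
        r'container health_status [0-9a-f]+ '
        r'\(image=([^,]+), name=([^,]+), health_status=([^,]+)')

    with open("journal.log", encoding="utf-8") as fh:
        for line in fh:
            m = HEALTH_RE.search(line)
            if m:
                image, name, status = m.groups()
                print(f"{name}: {status}")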
Oct 02 19:22:51 compute-0 sudo[288096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhirulffnfpzgpmbfxlvrxdpygdsjlbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432971.179942-64-234591988018457/AnsiballZ_systemd_service.py'
Oct 02 19:22:51 compute-0 sudo[288096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:51 compute-0 ceph-mon[191910]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.000199691 +0000 UTC m=+0.090482716 container create 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:22:52 compute-0 systemd[1]: Started libpod-conmon-552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960.scope.
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:51.968108077 +0000 UTC m=+0.058391162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.116911731 +0000 UTC m=+0.207194806 container init 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.142726634 +0000 UTC m=+0.233009669 container start 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:22:52 compute-0 agitated_khorana[288141]: 167 167
Oct 02 19:22:52 compute-0 systemd[1]: libpod-552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960.scope: Deactivated successfully.
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.163285894 +0000 UTC m=+0.253568979 container attach 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.163747537 +0000 UTC m=+0.254030562 container died 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 19:22:52 compute-0 python3.9[288107]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2795290bbe77eeaf74ca70240f9f0201abd25acedac13aacd4419eca5b252f6-merged.mount: Deactivated successfully.
Oct 02 19:22:52 compute-0 podman[288125]: 2025-10-02 19:22:52.223875645 +0000 UTC m=+0.314158640 container remove 552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:22:52 compute-0 sudo[288096]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:52 compute-0 systemd[1]: libpod-conmon-552979e6b20284ae6022f8bc537733941dfa043295f022ff55fd3d0827f36960.scope: Deactivated successfully.
Oct 02 19:22:52 compute-0 podman[288190]: 2025-10-02 19:22:52.418677022 +0000 UTC m=+0.066695748 container create 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:22:52 compute-0 systemd[1]: Started libpod-conmon-937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe.scope.
Oct 02 19:22:52 compute-0 podman[288190]: 2025-10-02 19:22:52.389013004 +0000 UTC m=+0.037031740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:22:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc41efb23f06d180497b91e1d122b94e7b07abe0247a42029477225497b610f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc41efb23f06d180497b91e1d122b94e7b07abe0247a42029477225497b610f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc41efb23f06d180497b91e1d122b94e7b07abe0247a42029477225497b610f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc41efb23f06d180497b91e1d122b94e7b07abe0247a42029477225497b610f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:22:52 compute-0 podman[288190]: 2025-10-02 19:22:52.554503392 +0000 UTC m=+0.202522138 container init 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:22:52 compute-0 podman[288190]: 2025-10-02 19:22:52.571247678 +0000 UTC m=+0.219266374 container start 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:22:52 compute-0 podman[288190]: 2025-10-02 19:22:52.577002555 +0000 UTC m=+0.225021331 container attach 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:22:52 compute-0 ceph-mon[191910]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:53 compute-0 sudo[288336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjmbtqxuwkbzfjpxjydfefxyebtnkrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432972.4617548-64-69228794397559/AnsiballZ_systemd_service.py'
Oct 02 19:22:53 compute-0 sudo[288336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:53 compute-0 python3.9[288338]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:53 compute-0 sudo[288336]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:53 compute-0 podman[288351]: 2025-10-02 19:22:53.555872092 +0000 UTC m=+0.118057167 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:22:53 compute-0 podman[288354]: 2025-10-02 19:22:53.596056517 +0000 UTC m=+0.157766449 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:22:53 compute-0 trusting_wing[288232]: {
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_id": 1,
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "type": "bluestore"
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     },
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_id": 2,
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "type": "bluestore"
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     },
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_id": 0,
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:22:53 compute-0 trusting_wing[288232]:         "type": "bluestore"
Oct 02 19:22:53 compute-0 trusting_wing[288232]:     }
Oct 02 19:22:53 compute-0 trusting_wing[288232]: }
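This block is the output of the "raw list --format json" call issued through cephadm above (the sudo entry running /bin/python3 .../cephadm ... ceph-volume ... raw list --format json): it keys each bluestore OSD by its osd_uuid. That uuid should line up with the ceph.osd_fsid tag in the earlier lvm list output; a sketch that cross-checks the two (file paths hypothetical):

    # Cross-check "raw list" entries against "lvm list" tags by OSD fsid.
    import json

    with open("raw_list.json", encoding="utf-8") as fh:
        raw = json.load(fh)
    with open("lvm_list.json", encoding="utf-8") as fh:
        lvm = json.load(fh)

    fsid_to_lv = {
        lv["tags"]["ceph.osd_fsid"]: lv["lv_path"]
        for lvs in lvm.values()
        for lv in lvs
    }

    for osd_uuid, entry in raw.items():
        lv = fsid_to_lv.get(osd_uuid, "<no matching LV>")
        print(f"osd.{entry['osd_id']} ({entry['type']}): "
              f"{entry['device']} -> {lv}")

On the data above, all three osd_uuids resolve, e.g. osd.1 maps /dev/mapper/ceph_vg1-ceph_lv1 to /dev/ceph_vg1/ceph_lv1.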
Oct 02 19:22:53 compute-0 systemd[1]: libpod-937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe.scope: Deactivated successfully.
Oct 02 19:22:53 compute-0 systemd[1]: libpod-937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe.scope: Consumed 1.067s CPU time.
Oct 02 19:22:53 compute-0 podman[288190]: 2025-10-02 19:22:53.645155475 +0000 UTC m=+1.293174171 container died 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:22:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dc41efb23f06d180497b91e1d122b94e7b07abe0247a42029477225497b610f-merged.mount: Deactivated successfully.
Oct 02 19:22:53 compute-0 podman[288190]: 2025-10-02 19:22:53.738853838 +0000 UTC m=+1.386872524 container remove 937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:22:53 compute-0 systemd[1]: libpod-conmon-937f3ef5a626b6a2fa157bbf0eb06df5da7ab7c24d1daaf851640c4b0cd533fe.scope: Deactivated successfully.
Oct 02 19:22:53 compute-0 sudo[287952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:22:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:22:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev cd69d045-65c8-4973-9f71-002793f3f0d9 does not exist
Oct 02 19:22:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 954ce92f-354e-492e-b641-151ddd228422 does not exist
Oct 02 19:22:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:53 compute-0 sudo[288506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:22:53 compute-0 sudo[288506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:53 compute-0 sudo[288506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:54 compute-0 sudo[288556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:22:54 compute-0 sudo[288556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:22:54 compute-0 sudo[288556]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:54 compute-0 sudo[288623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbwptyaguznatpmjanffmefpjnqtuxen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432973.6325424-64-4174650719314/AnsiballZ_systemd_service.py'
Oct 02 19:22:54 compute-0 sudo[288623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:54 compute-0 python3.9[288625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:54 compute-0 sudo[288623]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:22:54 compute-0 ceph-mon[191910]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:55 compute-0 sudo[288776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tspijqhbigcfzaineahitogdyhsakumv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432974.7026284-64-256551273484450/AnsiballZ_systemd_service.py'
Oct 02 19:22:55 compute-0 sudo[288776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:55 compute-0 python3.9[288778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:22:55 compute-0 sudo[288776]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:56 compute-0 sudo[288929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jepiquvnwzmfqrgywecydxmhnbdplhuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432975.9999099-116-162233615474322/AnsiballZ_file.py'
Oct 02 19:22:56 compute-0 sudo[288929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:56 compute-0 python3.9[288931]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:56 compute-0 ceph-mon[191910]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:56 compute-0 sudo[288929]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:57 compute-0 sudo[289081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixffuvenyomfwgyizlzefehwupdamqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432977.17965-116-217832758280261/AnsiballZ_file.py'
Oct 02 19:22:57 compute-0 sudo[289081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:57 compute-0 python3.9[289083]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:57 compute-0 sudo[289081]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:58 compute-0 sudo[289233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbndgxlpqnxcttoqkswejcsgolvahqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432978.1783192-116-220156811709739/AnsiballZ_file.py'
Oct 02 19:22:58 compute-0 sudo[289233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:58 compute-0 python3.9[289235]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:58 compute-0 sudo[289233]: pam_unix(sudo:session): session closed for user root
Oct 02 19:22:58 compute-0 ceph-mon[191910]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:22:59 compute-0 sudo[289385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbqvugpvrfwltbfjiebogzoioxoaayjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432979.1411922-116-33563661099594/AnsiballZ_file.py'
Oct 02 19:22:59 compute-0 sudo[289385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:22:59 compute-0 podman[157186]: time="2025-10-02T19:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:22:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:22:59 compute-0 podman[289389]: 2025-10-02 19:22:59.799209104 +0000 UTC m=+0.108292262 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 19:22:59 compute-0 podman[289387]: 2025-10-02 19:22:59.81741259 +0000 UTC m=+0.123882236 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:22:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7289 "" "Go-http-client/1.1"
Oct 02 19:22:59 compute-0 podman[289388]: 2025-10-02 19:22:59.838974817 +0000 UTC m=+0.136387817 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:22:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:22:59 compute-0 python3.9[289390]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:22:59 compute-0 sudo[289385]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:00 compute-0 sudo[289595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knztsjlzgsmxmnmwzgoovgbywbnbjceq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432980.1497111-116-195025312644239/AnsiballZ_file.py'
Oct 02 19:23:00 compute-0 sudo[289595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:00 compute-0 python3.9[289597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:00 compute-0 sudo[289595]: pam_unix(sudo:session): session closed for user root
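After stopping and disabling each tripleo_nova_* unit, the playbook deletes the unit files themselves with ansible.builtin.file state=absent, as in the task above. A minimal stdlib equivalent for one unit (same path as the task above; the module's recurse/force/follow options are omitted):

    # Rough equivalent of the file state=absent task above.
    from pathlib import Path

    unit_file = Path("/usr/lib/systemd/system/tripleo_nova_virtqemud.service")
    unit_file.unlink(missing_ok=True)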
Oct 02 19:23:00 compute-0 ceph-mon[191910]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: ERROR   19:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:23:01 compute-0 openstack_network_exporter[159337]: 
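The exporter errors above mean it found no OVS/OVN control sockets to query; on a compute node ovn-northd does not normally run at all, so that probe failing is plausibly expected here. Per the volume mounts logged earlier for openstack_network_exporter, the host-side socket directories are /var/run/openvswitch and /var/lib/openvswitch/ovn; a quick host-side check for the usual *.ctl sockets (directory names taken from those mounts):

    # Check for OVS/OVN control sockets in the exporter's mounted run dirs.
    from pathlib import Path

    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        p = Path(rundir)
        ctls = sorted(c.name for c in p.glob("*.ctl")) if p.exists() else []
        print(rundir, "->", ctls or "no .ctl control sockets")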
Oct 02 19:23:01 compute-0 sudo[289760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhgnewijjhvokhqkiqjavgzjzfbeomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432981.1325521-116-203035899422055/AnsiballZ_file.py'
Oct 02 19:23:01 compute-0 sudo[289760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:01 compute-0 podman[289721]: 2025-10-02 19:23:01.684107764 +0000 UTC m=+0.119534377 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:23:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:01 compute-0 python3.9[289768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:01 compute-0 sudo[289760]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:02 compute-0 sudo[289919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajfmnmvwmcchlcenaxswsbdhofjbpele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432982.1659904-116-224070631248906/AnsiballZ_file.py'
Oct 02 19:23:02 compute-0 sudo[289919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:02 compute-0 python3.9[289921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:02 compute-0 ceph-mon[191910]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:02 compute-0 sudo[289919]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:23:03
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', '.mgr', 'vms']
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:03 compute-0 sudo[290071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxqifnuwhnnnlclzredpgtnxkpbwhpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432983.2402287-166-9714867768006/AnsiballZ_file.py'
Oct 02 19:23:03 compute-0 sudo[290071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:23:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:03 compute-0 python3.9[290073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:03 compute-0 sudo[290071]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:04 compute-0 sudo[290223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpteiopmszkvcanmfusqjcgpeeqgadzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432984.2038598-166-48583952835436/AnsiballZ_file.py'
Oct 02 19:23:04 compute-0 sudo[290223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:04 compute-0 python3.9[290225]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:04 compute-0 sudo[290223]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:04 compute-0 ceph-mon[191910]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:05 compute-0 sudo[290375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eorhfblzetdhflazfcbleqtdefnfbyop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432985.20496-166-135670027168072/AnsiballZ_file.py'
Oct 02 19:23:05 compute-0 sudo[290375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:05 compute-0 python3.9[290377]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:05 compute-0 sudo[290375]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:06 compute-0 sudo[290527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhbatvdiqndobjyolkxdtcuaggjwspr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432986.1657012-166-136273473934059/AnsiballZ_file.py'
Oct 02 19:23:06 compute-0 sudo[290527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:06 compute-0 python3.9[290529]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:06 compute-0 sudo[290527]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:06 compute-0 ceph-mon[191910]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:07 compute-0 sudo[290679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgqqhunmcnplaoblvimiixyowevkhbyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432987.212608-166-176687295414561/AnsiballZ_file.py'
Oct 02 19:23:07 compute-0 sudo[290679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:07 compute-0 python3.9[290681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:07 compute-0 sudo[290679]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:08 compute-0 sudo[290831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndoigouvqzacqtoblqlfhheanhxkowu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432988.264729-166-212652072184545/AnsiballZ_file.py'
Oct 02 19:23:08 compute-0 sudo[290831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:08 compute-0 python3.9[290833]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:09 compute-0 sudo[290831]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:09 compute-0 ceph-mon[191910]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:09 compute-0 sudo[290983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayyxhdewltawkqzodrutkevfgqaqoeyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432989.2737918-166-18099550008849/AnsiballZ_file.py'
Oct 02 19:23:09 compute-0 sudo[290983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:10 compute-0 python3.9[290985]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:23:10 compute-0 sudo[290983]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:11 compute-0 sudo[291135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhdvkdylhwpbvmjpemfbdivvwvajjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432990.4679759-217-262504247790777/AnsiballZ_command.py'
Oct 02 19:23:11 compute-0 sudo[291135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:11 compute-0 ceph-mon[191910]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:11 compute-0 python3.9[291137]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:11 compute-0 sudo[291135]: pam_unix(sudo:session): session closed for user root
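The guarded mask in the shell snippet above reflects how masking works: systemctl mask symlinks the unit name to /dev/null under /etc/systemd/system, so the test -f guard avoids clobbering a real unit file installed at that path. Illustration of the effect (not part of the play):

    systemctl mask certmonger.service
    ls -l /etc/systemd/system/certmonger.service   # -> /dev/null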
Oct 02 19:23:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:23:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
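Reading the pg_autoscaler lines above: each pool's ideal PG count is its capacity ratio times its bias times a cluster-wide PG budget. The budget here works out to exactly 300, consistent with three OSDs at the default mon_target_pg_per_osd=100 (an assumption; the log itself only shows the 60 GiB total):

    .mgr:               7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337  -> quantized to 1
    cephfs.cephfs.meta: 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635  -> quantized to 16

The ideal value is then rounded to a power of two and clamped by per-pool minimums (cephfs metadata pools are typically created with pg_num_min=16, which matches the quantized 16 above); pools whose ideal differs from the current pg_num by less than the autoscaler's threshold are simply left at their current value, hence the near-zero targets still reading "quantized to 32 (current 32)".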
Oct 02 19:23:12 compute-0 python3.9[291289]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:23:13 compute-0 ceph-mon[191910]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:13 compute-0 sudo[291439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpekrhteyhefpeckexstvbwhhsgnssl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432992.8736641-235-198495083345004/AnsiballZ_systemd_service.py'
Oct 02 19:23:13 compute-0 sudo[291439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:13 compute-0 python3.9[291441]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:23:13 compute-0 systemd[1]: Reloading.
Oct 02 19:23:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:13 compute-0 systemd-rc-local-generator[291463]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:23:13 compute-0 systemd-sysv-generator[291468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:23:14 compute-0 sudo[291439]: pam_unix(sudo:session): session closed for user root
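Both generator messages emitted during the reload are informational: systemd-rc-local-generator only pulls rc-local.service into the boot transaction when /etc/rc.d/rc.local is marked executable, and systemd-sysv-generator is synthesizing a compatibility unit for the legacy network init script. Nothing needs fixing here; for reference, the executable bit is all the rc.local generator checks:

    chmod +x /etc/rc.d/rc.local   # would make the generator include it on the next reload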
Oct 02 19:23:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:15 compute-0 sudo[291625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhbfcsrqrtzvwpiairmdbtjptovnwlto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432994.5492322-243-154332137382529/AnsiballZ_command.py'
Oct 02 19:23:15 compute-0 sudo[291625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:15 compute-0 ceph-mon[191910]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:15 compute-0 python3.9[291627]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:15 compute-0 sudo[291625]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:16 compute-0 sudo[291778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcmzgxxaovianjuoxufiwbxpfejvrfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432995.5587628-243-179979054335236/AnsiballZ_command.py'
Oct 02 19:23:16 compute-0 sudo[291778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:16 compute-0 python3.9[291780]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:16 compute-0 sudo[291778]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:17 compute-0 ceph-mon[191910]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:17 compute-0 sudo[291931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvnwvmvujllinkzdjlrnuvjawzsahcxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432996.730892-243-201569988023664/AnsiballZ_command.py'
Oct 02 19:23:17 compute-0 sudo[291931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:17 compute-0 python3.9[291933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:17 compute-0 sudo[291931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:18 compute-0 sudo[292084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktsymvlzgotgtvuscjsbtbxnyotvslbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432997.7469218-243-208550015050766/AnsiballZ_command.py'
Oct 02 19:23:18 compute-0 sudo[292084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:18 compute-0 python3.9[292086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:18 compute-0 sudo[292084]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:19 compute-0 ceph-mon[191910]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:19 compute-0 sudo[292238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thczitgtpkrzzdafaiibuvbhjeysrwyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432998.7555118-243-119605859106941/AnsiballZ_command.py'
Oct 02 19:23:20 compute-0 sudo[292238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:20 compute-0 python3.9[292240]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:20 compute-0 sudo[292238]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:20 compute-0 sudo[292391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifhxvqgsacyebtiesztfgchiwsmggwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433000.463475-243-174861866957039/AnsiballZ_command.py'
Oct 02 19:23:20 compute-0 sudo[292391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:21 compute-0 ceph-mon[191910]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:21 compute-0 python3.9[292393]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:21 compute-0 sudo[292391]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:21 compute-0 podman[292408]: 2025-10-02 19:23:21.711334947 +0000 UTC m=+0.130427934 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:23:21 compute-0 podman[292401]: 2025-10-02 19:23:21.72465814 +0000 UTC m=+0.141745682 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:23:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:22 compute-0 sudo[292585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbuhqediuawuaugyeoaeqqsjrgqyahmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433001.753829-243-69837397547236/AnsiballZ_command.py'
Oct 02 19:23:22 compute-0 sudo[292585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:22 compute-0 python3.9[292587]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:23:22 compute-0 sudo[292585]: pam_unix(sudo:session): session closed for user root
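The seven reset-failed invocations above clear the remembered failed state of the removed TripleO units so they drop out of systemctl --failed. Condensed (a sketch):

    for u in tripleo_nova_libvirt.target tripleo_nova_virtlogd_wrapper.service \
             tripleo_nova_virtnodedevd.service tripleo_nova_virtproxyd.service \
             tripleo_nova_virtqemud.service tripleo_nova_virtsecretd.service \
             tripleo_nova_virtstoraged.service; do
        /usr/bin/systemctl reset-failed "$u"
    done
    systemctl --failed   # verify no tripleo_nova_* units remain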
Oct 02 19:23:23 compute-0 ceph-mon[191910]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:23 compute-0 sudo[292756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygwcslaynvuzpvnijsvxndktgovoquz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433003.1131191-297-150040324604767/AnsiballZ_getent.py'
Oct 02 19:23:23 compute-0 sudo[292756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:23 compute-0 podman[292712]: 2025-10-02 19:23:23.897771794 +0000 UTC m=+0.181908917 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:23:23 compute-0 podman[292713]: 2025-10-02 19:23:23.968672985 +0000 UTC m=+0.241383777 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:23:24 compute-0 python3.9[292758]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 19:23:24 compute-0 sudo[292756]: pam_unix(sudo:session): session closed for user root
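The getent task is the module form of the NSS account lookup (with fail_key=True it fails if the key is missing), confirming the libvirt account exists ahead of the libvirt package and service steps that follow:

    getent passwd libvirt || echo 'libvirt user not present'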
Oct 02 19:23:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:25 compute-0 sudo[292932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmflfnbsnihxoveidrxddliywavuxfhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433004.5603154-310-149925526517470/AnsiballZ_setup.py'
Oct 02 19:23:25 compute-0 sudo[292932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:25 compute-0 ceph-mon[191910]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:25 compute-0 python3.9[292934]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:23:25 compute-0 sudo[292932]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:26 compute-0 sudo[293016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfpcimzkiwykkroakosicwcgjkzmvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433004.5603154-310-149925526517470/AnsiballZ_dnf.py'
Oct 02 19:23:26 compute-0 sudo[293016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:26 compute-0 python3.9[293018]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:23:27 compute-0 ceph-mon[191910]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:27 compute-0 sudo[293016]: pam_unix(sudo:session): session closed for user root
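The dnf task above (note the stray trailing spaces inside the first four package names, carried in verbatim from whatever variable supplied the list) amounts to roughly this one-off command:

    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram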
Oct 02 19:23:29 compute-0 sudo[293169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coiurdjfndpqbxkqcvdjfqifbbgpxhum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433008.1853979-322-245056659436717/AnsiballZ_systemd.py'
Oct 02 19:23:29 compute-0 sudo[293169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:29 compute-0 ceph-mon[191910]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:29 compute-0 python3.9[293171]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:23:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:29 compute-0 sudo[293169]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:29 compute-0 podman[157186]: time="2025-10-02T19:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:23:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:23:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Oct 02 19:23:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:30 compute-0 sudo[293358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmrkytbesretkpoejqzwufselkvomnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433009.698765-322-63709679920084/AnsiballZ_systemd.py'
Oct 02 19:23:30 compute-0 sudo[293358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:30 compute-0 podman[293299]: 2025-10-02 19:23:30.309223813 +0000 UTC m=+0.125701925 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:23:30 compute-0 podman[293300]: 2025-10-02 19:23:30.311886696 +0000 UTC m=+0.123110725 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:23:30 compute-0 podman[293298]: 2025-10-02 19:23:30.324707345 +0000 UTC m=+0.142693759 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm)
Oct 02 19:23:30 compute-0 python3.9[293380]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:23:30 compute-0 sudo[293358]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:31 compute-0 ceph-mon[191910]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:31 compute-0 sudo[293537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssckdaxbtpxcybmayzhqnqcfzmpxwwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433010.94426-322-89779024560648/AnsiballZ_systemd.py'
Oct 02 19:23:31 compute-0 sudo[293537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:31 compute-0 openstack_network_exporter[159337]: ERROR   19:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:23:31 compute-0 openstack_network_exporter[159337]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:31 compute-0 openstack_network_exporter[159337]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:31 compute-0 openstack_network_exporter[159337]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:23:31 compute-0 openstack_network_exporter[159337]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:23:31 compute-0 python3.9[293539]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:23:31 compute-0 sudo[293537]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:31 compute-0 podman[293541]: 2025-10-02 19:23:31.864409963 +0000 UTC m=+0.108040875 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible)
Oct 02 19:23:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:23:32.267 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:23:32.268 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:23:32.268 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:32 compute-0 sudo[293708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucqjbvzvconuqufqseayqkjhruvlynzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433012.0050428-322-157468842231950/AnsiballZ_systemd.py'
Oct 02 19:23:32 compute-0 sudo[293708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:32 compute-0 python3.9[293710]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:23:32 compute-0 sudo[293708]: pam_unix(sudo:session): session closed for user root
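Taken together, the systemd tasks from 19:23:29 onward perform the monolithic-to-modular libvirt switch: stop and mask libvirtd.service plus its TCP/TLS activation sockets, then enable the per-driver daemons, as the tasks that follow show. A condensed sketch of the same steps:

    systemctl disable --now libvirtd.service
    systemctl mask libvirtd.service libvirtd-tcp.socket libvirtd-tls.socket virtproxyd-tcp.socket
    for d in virtlogd virtnodedevd virtproxyd virtqemud virtsecretd; do
        systemctl enable "${d}.service"
    done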
Oct 02 19:23:33 compute-0 ceph-mon[191910]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:23:33 compute-0 sudo[293863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysptxvzytyhfpfqhbmlybymloykqhzqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433013.174331-351-215190436880565/AnsiballZ_systemd.py'
Oct 02 19:23:33 compute-0 sudo[293863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:34 compute-0 python3.9[293865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:34 compute-0 sudo[293863]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:34 compute-0 sudo[294018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edeoojyeygfqzafejiuhphniqfuhykih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433014.367174-351-190011723616549/AnsiballZ_systemd.py'
Oct 02 19:23:34 compute-0 sudo[294018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:35 compute-0 python3.9[294020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:35 compute-0 ceph-mon[191910]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:35 compute-0 sudo[294018]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:36 compute-0 sudo[294173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cryjcvkkfuwlcoclkuqoqpaalqmosntk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433015.5449104-351-130391377058372/AnsiballZ_systemd.py'
Oct 02 19:23:36 compute-0 sudo[294173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:36 compute-0 python3.9[294175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:36 compute-0 sudo[294173]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:37 compute-0 sudo[294328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwlxcjbkqpsaosijaodtxwaokejfkde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433016.6823382-351-6980541894189/AnsiballZ_systemd.py'
Oct 02 19:23:37 compute-0 sudo[294328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:37 compute-0 ceph-mon[191910]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:37 compute-0 python3.9[294330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:37 compute-0 sudo[294328]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:39 compute-0 ceph-mon[191910]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:39 compute-0 sudo[294483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cetkwgppdzbpdsdatwlsrokfhryxdotw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433017.885707-351-185403805571648/AnsiballZ_systemd.py'
Oct 02 19:23:39 compute-0 sudo[294483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:39 compute-0 python3.9[294485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:39 compute-0 sudo[294483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:41 compute-0 sudo[294638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbihydrbnmxfrumqvjgturyrrwjvohct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433020.0508978-387-211783112874873/AnsiballZ_systemd.py'
Oct 02 19:23:41 compute-0 sudo[294638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:41 compute-0 ceph-mon[191910]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:41 compute-0 python3.9[294640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:23:41 compute-0 sudo[294638]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:42 compute-0 sudo[294793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmojfqynigptecsruiusaxjjeyggqko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433021.8307788-395-20753470871169/AnsiballZ_systemd.py'
Oct 02 19:23:42 compute-0 sudo[294793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:42 compute-0 python3.9[294795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:42 compute-0 sudo[294793]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:43 compute-0 ceph-mon[191910]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:43 compute-0 sudo[294948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzqodubpqmqgxwodfwttwxxzgacgmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433023.062499-395-29459423297273/AnsiballZ_systemd.py'
Oct 02 19:23:43 compute-0 sudo[294948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:43 compute-0 python3.9[294950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:43 compute-0 sudo[294948]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:44 compute-0 sudo[295103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjxgoijooodzbqnebztrxmbqokvjapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433024.2297058-395-53567902552494/AnsiballZ_systemd.py'
Oct 02 19:23:44 compute-0 sudo[295103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:45 compute-0 python3.9[295105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:45 compute-0 sudo[295103]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:45 compute-0 ceph-mon[191910]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:45 compute-0 sudo[295258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjyivcfsonycbxladrrdyznpnlaknhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433025.4324598-395-60607815492238/AnsiballZ_systemd.py'
Oct 02 19:23:45 compute-0 sudo[295258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:46 compute-0 python3.9[295260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:46 compute-0 sudo[295258]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:46 compute-0 sudo[295413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlcgszydrejwphkujwwwpesvujoarydp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433026.5777717-395-267043365946812/AnsiballZ_systemd.py'
Oct 02 19:23:46 compute-0 sudo[295413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:47 compute-0 python3.9[295415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:47 compute-0 ceph-mon[191910]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:47 compute-0 sudo[295413]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:48 compute-0 sudo[295568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhkkqbebwhhskvtctuvgphgoziungkvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433027.6248977-395-57856791905180/AnsiballZ_systemd.py'
Oct 02 19:23:48 compute-0 sudo[295568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:48 compute-0 python3.9[295570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:48 compute-0 sudo[295568]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:49 compute-0 sudo[295723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkozmxogntflipxrjirvkpwpcyctwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433028.737438-395-102938259334161/AnsiballZ_systemd.py'
Oct 02 19:23:49 compute-0 sudo[295723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:49 compute-0 python3.9[295725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:49 compute-0 ceph-mon[191910]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:49 compute-0 sudo[295723]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:50 compute-0 sudo[295879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyjvegeyheuygagmvcxyufuitoeqfrua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433029.9942515-395-55527310946879/AnsiballZ_systemd.py'
Oct 02 19:23:50 compute-0 sudo[295879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:50 compute-0 python3.9[295881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:50 compute-0 sudo[295879]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:51 compute-0 ceph-mon[191910]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:52 compute-0 sudo[296074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njrvmykfjeyvkgcvziylmrbhlbpipsxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433031.7560523-395-30368779118379/AnsiballZ_systemd.py'
Oct 02 19:23:52 compute-0 podman[296008]: 2025-10-02 19:23:52.285903216 +0000 UTC m=+0.082390045 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Oct 02 19:23:52 compute-0 sudo[296074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:52 compute-0 podman[296009]: 2025-10-02 19:23:52.292854726 +0000 UTC m=+0.087436543 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:23:52 compute-0 python3.9[296077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:52 compute-0 ceph-mon[191910]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:52 compute-0 sudo[296074]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:54 compute-0 sudo[296209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:54 compute-0 sudo[296209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:54 compute-0 sudo[296288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqwkdpwitnubwfpccauhutgufvayxlte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433032.9479454-395-79772632297465/AnsiballZ_systemd.py'
Oct 02 19:23:54 compute-0 sudo[296209]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:54 compute-0 sudo[296288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:54 compute-0 podman[296206]: 2025-10-02 19:23:54.226968718 +0000 UTC m=+0.127765102 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:23:54 compute-0 podman[296207]: 2025-10-02 19:23:54.264288674 +0000 UTC m=+0.166163338 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
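Each podman health_status line above is the result of the container's configured healthcheck command (the 'healthcheck' key embedded in config_data) running on its timer and reporting healthy with a failing streak of 0. The latest state can also be read directly; a minimal sketch using one of the container names from the log:

    # Read the most recent health state of the ovn_controller container.
    import json, subprocess

    out = subprocess.run(["podman", "inspect", "ovn_controller"],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0]["State"]
    print(state.get("Health", {}).get("Status", "no healthcheck configured"))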
Oct 02 19:23:54 compute-0 sudo[296302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:23:54 compute-0 sudo[296302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:54 compute-0 sudo[296302]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:54 compute-0 sudo[296329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:54 compute-0 sudo[296329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:54 compute-0 sudo[296329]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:54 compute-0 sudo[296354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:23:54 compute-0 sudo[296354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:54 compute-0 python3.9[296300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:54 compute-0 sudo[296288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:54 compute-0 ceph-mon[191910]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:55 compute-0 podman[296525]: 2025-10-02 19:23:55.133017612 +0000 UTC m=+0.108920359 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:23:55 compute-0 podman[296525]: 2025-10-02 19:23:55.256152446 +0000 UTC m=+0.232055183 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:23:55 compute-0 sudo[296634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wneehvliwiflkqyubblntnrzygshpnlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433034.889219-395-108184607788329/AnsiballZ_systemd.py'
Oct 02 19:23:55 compute-0 sudo[296634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:55 compute-0 python3.9[296641]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:55 compute-0 sudo[296634]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:56 compute-0 sudo[296354]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:23:56 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:23:56 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:56 compute-0 sudo[296840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:56 compute-0 sudo[296840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:56 compute-0 sudo[296840]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:56 compute-0 sudo[296877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:23:56 compute-0 sudo[296877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:56 compute-0 sudo[296877]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:56 compute-0 sudo[296927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:56 compute-0 sudo[296927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:56 compute-0 sudo[296927]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:56 compute-0 sudo[296974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgdgpjmbvklizbwprpduyzioszsihhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433036.05473-395-131092803911754/AnsiballZ_systemd.py'
Oct 02 19:23:56 compute-0 sudo[296974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:56 compute-0 sudo[296978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:23:56 compute-0 sudo[296978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:56 compute-0 python3.9[296981]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:56 compute-0 sudo[296974]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:57 compute-0 sudo[296978]: pam_unix(sudo:session): session closed for user root
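The cephadm gather-facts call above (run over sudo as ceph-admin by the mgr's cephadm module) emits a JSON blob of host facts. A sketch of invoking the same copied cephadm binary and reading one field; the 'hostname' key is an assumption about the output schema, not confirmed by the log:

    # Run the bundled cephadm binary in gather-facts mode and parse its JSON.
    import json, subprocess

    CEPHADM = ("/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    facts = json.loads(subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True).stdout)
    print(facts.get("hostname"))  # key name assumed for illustration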
Oct 02 19:23:57 compute-0 ceph-mon[191910]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:57 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:57 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:57 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 060c10a6-2b46-40c1-8a3d-2b25ce13d386 does not exist
Oct 02 19:23:57 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a97e7825-c422-4b99-8551-7d8710254896 does not exist
Oct 02 19:23:57 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev cff40dbe-24d2-4d33-86da-b8396c46779d does not exist
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:23:57 compute-0 sudo[297113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:57 compute-0 sudo[297113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:57 compute-0 sudo[297113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:57 compute-0 sudo[297161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:23:57 compute-0 sudo[297161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:57 compute-0 sudo[297161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:57 compute-0 sudo[297210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:23:57 compute-0 sudo[297210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:23:57 compute-0 sudo[297210]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:57 compute-0 sudo[297262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqslqmxdfiwhubbhroaoxvrwypcfrfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433037.1756341-395-271697689380442/AnsiballZ_systemd.py'
Oct 02 19:23:57 compute-0 sudo[297262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:57 compute-0 sudo[297261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:23:57 compute-0 sudo[297261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
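The sudo COMMAND above is the cephadm module creating OSDs: the bundled cephadm wraps ceph-volume inside the pinned ceph image, tags the run with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, feeds a config over stdin (--config-json -), and batches three pre-built logical volumes. Stripped of the cephadm wrapper, the inner call reconstructed from the journal line is:

    # The ceph-volume invocation cephadm runs inside the ceph container
    # (--no-systemd because cephadm manages the units itself).
    argv = ["ceph-volume", "lvm", "batch", "--no-auto",
            "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
            "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"]
    print(" ".join(argv))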
Oct 02 19:23:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:57 compute-0 python3.9[297275]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:58 compute-0 sudo[297262]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.086020882 +0000 UTC m=+0.052843301 container create e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:23:58 compute-0 systemd[1]: Started libpod-conmon-e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c.scope.
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.058451781 +0000 UTC m=+0.025274220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:23:58 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.214121742 +0000 UTC m=+0.180944201 container init e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.223497517 +0000 UTC m=+0.190319936 container start e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.231031162 +0000 UTC m=+0.197853671 container attach e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:23:58 compute-0 laughing_lehmann[297369]: 167 167
Oct 02 19:23:58 compute-0 systemd[1]: libpod-e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c.scope: Deactivated successfully.
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.233628053 +0000 UTC m=+0.200450482 container died e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-206b3a1b7c50593bb7651da487745f55bb0e2f03dece06d2dfc260c6950b7d3a-merged.mount: Deactivated successfully.
Oct 02 19:23:58 compute-0 podman[297330]: 2025-10-02 19:23:58.294142242 +0000 UTC m=+0.260964661 container remove e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:23:58 compute-0 systemd[1]: libpod-conmon-e010486e711a1bb768f43b88b9e6d01852dcdaf70143499d17c2c8282d16957c.scope: Deactivated successfully.
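The create, init, start, attach, died, remove sequence for laughing_lehmann above is a one-shot helper container: cephadm launches it, captures a single line of output ("167 167", which looks like a uid/gid probe of the ceph user, though the exact command is not logged), and removes it. The journal pattern matches a plain --rm run; a sketch with an assumed probe command standing in for whatever cephadm actually ran:

    # Reproduce the one-shot container pattern seen above; the stat command
    # is an assumption, not taken from the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    subprocess.run(["podman", "run", "--rm", IMAGE,
                    "stat", "-c", "%u %g", "/var/lib/ceph"], check=True)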
Oct 02 19:23:58 compute-0 podman[297445]: 2025-10-02 19:23:58.53266994 +0000 UTC m=+0.083078274 container create d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:23:58 compute-0 podman[297445]: 2025-10-02 19:23:58.503553417 +0000 UTC m=+0.053961831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:23:58 compute-0 systemd[1]: Started libpod-conmon-d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21.scope.
Oct 02 19:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:23:58 compute-0 podman[297445]: 2025-10-02 19:23:58.670880296 +0000 UTC m=+0.221288630 container init d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:23:58 compute-0 podman[297445]: 2025-10-02 19:23:58.684542258 +0000 UTC m=+0.234950592 container start d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:23:58 compute-0 podman[297445]: 2025-10-02 19:23:58.688961568 +0000 UTC m=+0.239369902 container attach d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:23:58 compute-0 sudo[297538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhwlilxppixfeenojzgkvhwvderwrjnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433038.2648203-395-264124596036076/AnsiballZ_systemd.py'
Oct 02 19:23:58 compute-0 sudo[297538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:23:59 compute-0 python3.9[297540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:23:59 compute-0 sudo[297538]: pam_unix(sudo:session): session closed for user root
Oct 02 19:23:59 compute-0 ceph-mon[191910]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:23:59 compute-0 podman[157186]: time="2025-10-02T19:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:23:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 37437 "" "Go-http-client/1.1"
Oct 02 19:23:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7704 "" "Go-http-client/1.1"
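The two access-log style lines above are the libpod REST API being queried, consistent with prometheus-podman-exporter, whose config earlier in the log sets CONTAINER_HOST=unix:///run/podman/podman.sock. The same endpoint is reachable with nothing but the standard library; a minimal sketch, assuming root access to that socket:

    # Query podman's libpod API over its unix socket, as the exporter does.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")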
Oct 02 19:23:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:23:59 compute-0 eloquent_noether[297502]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:23:59 compute-0 eloquent_noether[297502]: --> relative data size: 1.0
Oct 02 19:23:59 compute-0 eloquent_noether[297502]: --> All data devices are unavailable
Oct 02 19:23:59 compute-0 systemd[1]: libpod-d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21.scope: Deactivated successfully.
Oct 02 19:23:59 compute-0 systemd[1]: libpod-d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21.scope: Consumed 1.166s CPU time.
Oct 02 19:23:59 compute-0 podman[297445]: 2025-10-02 19:23:59.941355508 +0000 UTC m=+1.491763842 container died d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c2df26012c299bf5951748f9e25332b9037106be306fb22e1ef3d68e517f4b-merged.mount: Deactivated successfully.
Oct 02 19:24:00 compute-0 podman[297445]: 2025-10-02 19:24:00.095100227 +0000 UTC m=+1.645508561 container remove d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:24:00 compute-0 systemd[1]: libpod-conmon-d30b357339ade1dfa2ca9640ecaca7bc13a5a3a51a163860d04507e703c02b21.scope: Deactivated successfully.
Oct 02 19:24:00 compute-0 sudo[297261]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:00 compute-0 sudo[297730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btaosmvuijitfajjzxbvwzfmtfybherf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433039.6911366-497-264541598702280/AnsiballZ_file.py'
Oct 02 19:24:00 compute-0 sudo[297730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:00 compute-0 sudo[297733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:24:00 compute-0 sudo[297733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:00 compute-0 sudo[297733]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:00 compute-0 python3.9[297732]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:00 compute-0 sudo[297730]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:00 compute-0 sudo[297758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:24:00 compute-0 sudo[297758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:00 compute-0 sudo[297758]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:00 compute-0 podman[297784]: 2025-10-02 19:24:00.497096389 +0000 UTC m=+0.073048561 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:24:00 compute-0 podman[297783]: 2025-10-02 19:24:00.499645428 +0000 UTC m=+0.084041020 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:24:00 compute-0 podman[297782]: 2025-10-02 19:24:00.507018909 +0000 UTC m=+0.088582704 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct 02 19:24:00 compute-0 sudo[297807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:24:00 compute-0 sudo[297807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:00 compute-0 sudo[297807]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:00 compute-0 sudo[297889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:24:00 compute-0 sudo[297889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:00 compute-0 sudo[298078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaeimdxasbixdabmytqufgaognljvcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433040.5865488-497-10262650799243/AnsiballZ_file.py'
Oct 02 19:24:00 compute-0 sudo[298078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.019868559 +0000 UTC m=+0.050200718 container create 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:24:01 compute-0 systemd[1]: Started libpod-conmon-58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585.scope.
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:00.997002886 +0000 UTC m=+0.027335065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:24:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.145145282 +0000 UTC m=+0.175477441 container init 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.153440488 +0000 UTC m=+0.183772647 container start 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.158449555 +0000 UTC m=+0.188781734 container attach 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:24:01 compute-0 keen_swirles[298097]: 167 167
Oct 02 19:24:01 compute-0 systemd[1]: libpod-58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585.scope: Deactivated successfully.
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.174187913 +0000 UTC m=+0.204520102 container died 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:24:01 compute-0 python3.9[298082]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7aad53e1f895f16e5884c71ba45583b254d5cfa8a13c33d8ee487d0e7d82f3-merged.mount: Deactivated successfully.
Oct 02 19:24:01 compute-0 sudo[298078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:01 compute-0 podman[298079]: 2025-10-02 19:24:01.255736325 +0000 UTC m=+0.286068474 container remove 58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:24:01 compute-0 systemd[1]: libpod-conmon-58c77ffca58aec5be1a59a59aefb47e5c353d89c14827c91640e907032f6c585.scope: Deactivated successfully.
Oct 02 19:24:01 compute-0 ceph-mon[191910]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:01 compute-0 openstack_network_exporter[159337]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:01 compute-0 openstack_network_exporter[159337]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:01 compute-0 openstack_network_exporter[159337]: ERROR   19:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:24:01 compute-0 openstack_network_exporter[159337]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:24:01 compute-0 openstack_network_exporter[159337]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:24:01 compute-0 podman[298145]: 2025-10-02 19:24:01.448670831 +0000 UTC m=+0.066438171 container create 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:24:01 compute-0 podman[298145]: 2025-10-02 19:24:01.420582016 +0000 UTC m=+0.038349376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:24:01 compute-0 systemd[1]: Started libpod-conmon-9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e.scope.
Oct 02 19:24:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440737a108482d0792dc6622e21be3e9a911d73e65080dcc33a8cdb2a5da25b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440737a108482d0792dc6622e21be3e9a911d73e65080dcc33a8cdb2a5da25b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440737a108482d0792dc6622e21be3e9a911d73e65080dcc33a8cdb2a5da25b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440737a108482d0792dc6622e21be3e9a911d73e65080dcc33a8cdb2a5da25b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:01 compute-0 podman[298145]: 2025-10-02 19:24:01.574627293 +0000 UTC m=+0.192394623 container init 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:24:01 compute-0 podman[298145]: 2025-10-02 19:24:01.590003632 +0000 UTC m=+0.207770932 container start 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:24:01 compute-0 podman[298145]: 2025-10-02 19:24:01.595543143 +0000 UTC m=+0.213310493 container attach 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:24:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:01 compute-0 sudo[298291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onpgjnihnlnkbgkungicqjpixtmygtgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433041.4646301-497-190488186890050/AnsiballZ_file.py'
Oct 02 19:24:01 compute-0 sudo[298291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:02 compute-0 podman[298293]: 2025-10-02 19:24:02.148203268 +0000 UTC m=+0.153340908 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Oct 02 19:24:02 compute-0 python3.9[298294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:02 compute-0 sudo[298291]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:02 compute-0 awesome_napier[298202]: {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     "0": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "devices": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "/dev/loop3"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             ],
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_name": "ceph_lv0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_size": "21470642176",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "name": "ceph_lv0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "tags": {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_name": "ceph",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.crush_device_class": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.encrypted": "0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_id": "0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.vdo": "0"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             },
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "vg_name": "ceph_vg0"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         }
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     ],
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     "1": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "devices": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "/dev/loop4"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             ],
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_name": "ceph_lv1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_size": "21470642176",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "name": "ceph_lv1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "tags": {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_name": "ceph",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.crush_device_class": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.encrypted": "0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_id": "1",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.vdo": "0"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             },
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "vg_name": "ceph_vg1"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         }
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     ],
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     "2": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "devices": [
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "/dev/loop5"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             ],
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_name": "ceph_lv2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_size": "21470642176",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "name": "ceph_lv2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "tags": {
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.cluster_name": "ceph",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.crush_device_class": "",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.encrypted": "0",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osd_id": "2",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:                 "ceph.vdo": "0"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             },
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "type": "block",
Oct 02 19:24:02 compute-0 awesome_napier[298202]:             "vg_name": "ceph_vg2"
Oct 02 19:24:02 compute-0 awesome_napier[298202]:         }
Oct 02 19:24:02 compute-0 awesome_napier[298202]:     ]
Oct 02 19:24:02 compute-0 awesome_napier[298202]: }
Oct 02 19:24:02 compute-0 systemd[1]: libpod-9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e.scope: Deactivated successfully.
Oct 02 19:24:02 compute-0 podman[298145]: 2025-10-02 19:24:02.37691633 +0000 UTC m=+0.994683650 container died 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-440737a108482d0792dc6622e21be3e9a911d73e65080dcc33a8cdb2a5da25b0-merged.mount: Deactivated successfully.
Oct 02 19:24:02 compute-0 podman[298145]: 2025-10-02 19:24:02.476135213 +0000 UTC m=+1.093902523 container remove 9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_napier, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:24:02 compute-0 systemd[1]: libpod-conmon-9b0cb246acb880338c6f076a841abb36f74f85ffa98d4442e92dc90b9085393e.scope: Deactivated successfully.
Oct 02 19:24:02 compute-0 sudo[297889]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:02 compute-0 sudo[298375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:24:02 compute-0 sudo[298375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:02 compute-0 sudo[298375]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:02 compute-0 sudo[298429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:24:02 compute-0 sudo[298429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:02 compute-0 sudo[298429]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:02 compute-0 sudo[298454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:24:02 compute-0 sudo[298454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:02 compute-0 sudo[298454]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:02 compute-0 sudo[298479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:24:02 compute-0 sudo[298479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.324776193 +0000 UTC m=+0.034576883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:24:03 compute-0 ceph-mon[191910]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.461017595 +0000 UTC m=+0.170818255 container create bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:24:03
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:03 compute-0 systemd[1]: Started libpod-conmon-bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1.scope.
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.696294155 +0000 UTC m=+0.406094835 container init bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.708040325 +0000 UTC m=+0.417841025 container start bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:24:03 compute-0 recursing_mclaren[298592]: 167 167
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.713684429 +0000 UTC m=+0.423485089 container attach bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:24:03 compute-0 systemd[1]: libpod-bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1.scope: Deactivated successfully.
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.718611663 +0000 UTC m=+0.428412353 container died bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-329eb666838bcec2af5c3e7b50b61c85c34517e43726057822ef5023866780f8-merged.mount: Deactivated successfully.
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:24:03 compute-0 podman[298541]: 2025-10-02 19:24:03.777698713 +0000 UTC m=+0.487499373 container remove bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:24:03 compute-0 systemd[1]: libpod-conmon-bc561db78324af9452641fc061b3c92aac77b69bf16181969828d2dd6183cfe1.scope: Deactivated successfully.
Oct 02 19:24:03 compute-0 sudo[298645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdkzapybjcvvxvrsdqtyiwydsnlzlyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433042.5212543-497-178872309909190/AnsiballZ_file.py'
Oct 02 19:24:03 compute-0 sudo[298645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:04 compute-0 python3.9[298649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:04 compute-0 sudo[298645]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:04 compute-0 podman[298655]: 2025-10-02 19:24:04.047249666 +0000 UTC m=+0.094281259 container create e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:24:04 compute-0 podman[298655]: 2025-10-02 19:24:04.009027825 +0000 UTC m=+0.056059518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:24:04 compute-0 systemd[1]: Started libpod-conmon-e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665.scope.
Oct 02 19:24:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b07dcec7d9479a3db507919d5686fd019ec1028b318e6b14434e3d9bad4223d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b07dcec7d9479a3db507919d5686fd019ec1028b318e6b14434e3d9bad4223d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b07dcec7d9479a3db507919d5686fd019ec1028b318e6b14434e3d9bad4223d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b07dcec7d9479a3db507919d5686fd019ec1028b318e6b14434e3d9bad4223d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:24:04 compute-0 podman[298655]: 2025-10-02 19:24:04.250908165 +0000 UTC m=+0.297939838 container init e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:24:04 compute-0 podman[298655]: 2025-10-02 19:24:04.269204993 +0000 UTC m=+0.316236586 container start e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:24:04 compute-0 podman[298655]: 2025-10-02 19:24:04.283742059 +0000 UTC m=+0.330773742 container attach e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:24:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:04 compute-0 sudo[298823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reosrksfkhckynqebjwbzmglzwmltsaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433044.2998223-497-16808807301405/AnsiballZ_file.py'
Oct 02 19:24:04 compute-0 sudo[298823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:05 compute-0 python3.9[298825]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:05 compute-0 sudo[298823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:05 compute-0 confident_banach[298681]: {
Oct 02 19:24:05 compute-0 confident_banach[298681]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_id": 1,
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "type": "bluestore"
Oct 02 19:24:05 compute-0 confident_banach[298681]:     },
Oct 02 19:24:05 compute-0 confident_banach[298681]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_id": 2,
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "type": "bluestore"
Oct 02 19:24:05 compute-0 confident_banach[298681]:     },
Oct 02 19:24:05 compute-0 confident_banach[298681]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_id": 0,
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:24:05 compute-0 confident_banach[298681]:         "type": "bluestore"
Oct 02 19:24:05 compute-0 confident_banach[298681]:     }
Oct 02 19:24:05 compute-0 confident_banach[298681]: }
Oct 02 19:24:05 compute-0 systemd[1]: libpod-e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665.scope: Deactivated successfully.
Oct 02 19:24:05 compute-0 systemd[1]: libpod-e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665.scope: Consumed 1.126s CPU time.
Oct 02 19:24:05 compute-0 ceph-mon[191910]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:05 compute-0 podman[298855]: 2025-10-02 19:24:05.470024827 +0000 UTC m=+0.046058586 container died e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b07dcec7d9479a3db507919d5686fd019ec1028b318e6b14434e3d9bad4223d-merged.mount: Deactivated successfully.
Oct 02 19:24:05 compute-0 podman[298855]: 2025-10-02 19:24:05.551471366 +0000 UTC m=+0.127505025 container remove e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:24:05 compute-0 systemd[1]: libpod-conmon-e652436c78577bc1d7262267fa44b97dd45b913e217f9e7af100e58631dfc665.scope: Deactivated successfully.
Oct 02 19:24:05 compute-0 sudo[298479]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:24:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:24:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:24:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:24:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9d4d3c29-0ed9-4465-ba27-3e22c75badb4 does not exist
Oct 02 19:24:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 21955c08-af5f-4150-bf01-fbf04232b151 does not exist
Oct 02 19:24:05 compute-0 sudo[298926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:24:05 compute-0 sudo[298926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:05 compute-0 sudo[298926]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:05 compute-0 sudo[298975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:24:05 compute-0 sudo[298975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:24:05 compute-0 sudo[298975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:06 compute-0 sudo[299065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clgppmhtvbyzvtdlvlhkilpnkbqdnumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433045.58949-497-22166069507621/AnsiballZ_file.py'
Oct 02 19:24:06 compute-0 sudo[299065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:06 compute-0 python3.9[299067]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:24:06 compute-0 sudo[299065]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:24:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:24:07 compute-0 sudo[299217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmsoxdlznvlqcylfypjtgehurwncusmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433046.6036377-540-70529270728818/AnsiballZ_stat.py'
Oct 02 19:24:07 compute-0 sudo[299217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:07 compute-0 python3.9[299219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:07 compute-0 sudo[299217]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:07 compute-0 ceph-mon[191910]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:07 compute-0 sudo[299295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiswwcfyxzaqheuuvfhjdrjwabutwojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433046.6036377-540-70529270728818/AnsiballZ_file.py'
Oct 02 19:24:07 compute-0 sudo[299295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:08 compute-0 python3.9[299297]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:08 compute-0 sudo[299295]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:08 compute-0 sudo[299447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxygrcfddmjhduuaarpfjbxmhcsbcki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433048.374819-540-109780759142939/AnsiballZ_stat.py'
Oct 02 19:24:08 compute-0 sudo[299447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:09 compute-0 python3.9[299449]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:09 compute-0 sudo[299447]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:09 compute-0 sudo[299525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgitzduoyhzmsrkzbyblubsujxrxvjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433048.374819-540-109780759142939/AnsiballZ_file.py'
Oct 02 19:24:09 compute-0 sudo[299525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:09 compute-0 ceph-mon[191910]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:09 compute-0 python3.9[299527]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:09 compute-0 sudo[299525]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:10 compute-0 sudo[299677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtgfizprkzlvyghwsbdnzmjzqiqkkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433050.0608616-540-73291834881244/AnsiballZ_stat.py'
Oct 02 19:24:10 compute-0 sudo[299677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:10 compute-0 ceph-mon[191910]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:10 compute-0 python3.9[299679]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:10 compute-0 sudo[299677]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:11 compute-0 sudo[299755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhadkbhqgvdmnyspvgtyxsfycwpcdkvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433050.0608616-540-73291834881244/AnsiballZ_file.py'
Oct 02 19:24:11 compute-0 sudo[299755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:11 compute-0 python3.9[299757]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:11 compute-0 sudo[299755]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:12 compute-0 sudo[299907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpbewbgqxuaybzxkogtsbkaxiazljusu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433051.7551754-540-171492377410250/AnsiballZ_stat.py'
Oct 02 19:24:12 compute-0 sudo[299907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:24:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
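The ten pg_autoscaler lines above each turn a pool's share of raw capacity into a suggested pg_num. The reported numbers are consistent with target = usage_ratio x bias x (target PGs per OSD x OSD count), rounded up to a power of two and floored by the pool's pg_num_min. A minimal sketch of that arithmetic, assuming the defaults this cluster appears to use (3 OSDs of the 60 GiB total, 100 target PGs per OSD, pg_num_min of 1 for .mgr, 16 for the cephfs metadata pool, and 32 otherwise); suggested_pg_num is a hypothetical helper, not the actual mgr module, which applies further conditions before changing anything:

    def suggested_pg_num(usage_ratio, bias, num_osds=3,
                         target_pg_per_osd=100, pg_num_min=32):
        # e.g. '.mgr': 7.185749983720779e-06 * 1.0 * 300 ~= 0.00216 (the logged pg target)
        raw_target = usage_ratio * bias * target_pg_per_osd * num_osds
        # round up to the next power of two, as in the logged "quantized to N"
        pow2 = 1
        while pow2 < raw_target:
            pow2 *= 2
        # never drop below the pool's configured minimum
        return max(pg_num_min, pow2)

    suggested_pg_num(7.185749983720779e-06, 1.0, pg_num_min=1)   # -> 1  ('.mgr')
    suggested_pg_num(5.087256625643029e-07, 4.0, pg_num_min=16)  # -> 16 (cephfs.cephfs.meta)
    suggested_pg_num(0.0, 1.0)                                   # -> 32 (the empty pools)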
Oct 02 19:24:12 compute-0 python3.9[299909]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:12 compute-0 sudo[299907]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:12 compute-0 sudo[299985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twubbeukcwuvmozhcvqzejqufmdcuhkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433051.7551754-540-171492377410250/AnsiballZ_file.py'
Oct 02 19:24:12 compute-0 sudo[299985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:12 compute-0 ceph-mon[191910]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:13 compute-0 python3.9[299987]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:13 compute-0 sudo[299985]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:13 compute-0 sudo[300137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loslitcebuilyxikehtjognzcouhgnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433053.474019-540-162561838584712/AnsiballZ_stat.py'
Oct 02 19:24:13 compute-0 sudo[300137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:14 compute-0 python3.9[300139]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:14 compute-0 sudo[300137]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:15 compute-0 ceph-mon[191910]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:15 compute-0 sudo[300215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolzguwplhwtdsvzlrebgrebdpsovcrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433053.474019-540-162561838584712/AnsiballZ_file.py'
Oct 02 19:24:15 compute-0 sudo[300215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:15 compute-0 python3.9[300217]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:15 compute-0 sudo[300215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:16 compute-0 sudo[300367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxaljleerjvgjixzgwqzldgcqdysrdfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433056.1252458-540-152802028095457/AnsiballZ_stat.py'
Oct 02 19:24:16 compute-0 sudo[300367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:16 compute-0 python3.9[300369]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:16 compute-0 sudo[300367]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:17 compute-0 ceph-mon[191910]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:17 compute-0 sudo[300445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swiefnbxwopahnmzhtsppgnnmjhnsjsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433056.1252458-540-152802028095457/AnsiballZ_file.py'
Oct 02 19:24:17 compute-0 sudo[300445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:18 compute-0 python3.9[300447]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:18 compute-0 sudo[300445]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:18 compute-0 sudo[300597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yapyoclqzkuhhkhofusndjdvlcdbteru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433058.387401-540-94621558170952/AnsiballZ_stat.py'
Oct 02 19:24:18 compute-0 sudo[300597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:19 compute-0 ceph-mon[191910]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:19 compute-0 python3.9[300599]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:19 compute-0 sudo[300597]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:19 compute-0 sudo[300676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueatrniprjkmtcoamprgmcvtkgndrxpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433058.387401-540-94621558170952/AnsiballZ_file.py'
Oct 02 19:24:19 compute-0 sudo[300676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:19 compute-0 python3.9[300678]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:19 compute-0 sudo[300676]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:20 compute-0 sudo[300828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zomgmkrkoxwadzxolecsnsdinrtskedg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433060.1064353-540-56294001638400/AnsiballZ_stat.py'
Oct 02 19:24:20 compute-0 sudo[300828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:20 compute-0 python3.9[300830]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:20 compute-0 sudo[300828]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:21 compute-0 ceph-mon[191910]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:21 compute-0 sudo[300906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swgtlenudysiajllkwskcjvvfmhigljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433060.1064353-540-56294001638400/AnsiballZ_file.py'
Oct 02 19:24:21 compute-0 sudo[300906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:21 compute-0 python3.9[300908]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:21 compute-0 sudo[300906]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:22 compute-0 sudo[301058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpphczgoeoabzqtfgvxeodrqdpahhgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433061.7459166-629-220310277297702/AnsiballZ_command.py'
Oct 02 19:24:22 compute-0 sudo[301058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:22 compute-0 podman[301061]: 2025-10-02 19:24:22.590665144 +0000 UTC m=+0.129451918 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:24:22 compute-0 podman[301060]: 2025-10-02 19:24:22.590700145 +0000 UTC m=+0.128680327 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Oct 02 19:24:22 compute-0 python3.9[301062]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 19:24:22 compute-0 sudo[301058]: pam_unix(sudo:session): session closed for user root
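For reference, the saslpasswd2 invocation logged two lines up can be reproduced outside Ansible. This hypothetical snippet feeds the same password over stdin exactly as the command module did; every flag and path below is taken from the log line itself:

    import subprocess

    subprocess.run(
        ["saslpasswd2",
         "-f", "/etc/libvirt/passwd.db",  # libvirt's SASL credential database
         "-p",                            # read the password from stdin
         "-a", "libvirt",                 # application the entry belongs to
         "-u", "openstack",               # realm
         "migration"],                    # user name -> migration@openstack
        input=b"12345678\n",              # stdin=12345678 in the log
        check=True,
    )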
Oct 02 19:24:23 compute-0 ceph-mon[191910]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:23 compute-0 sudo[301253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcibnsnhxlbcaamehqahqgboylnsudqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433063.0364332-638-79818984343216/AnsiballZ_file.py'
Oct 02 19:24:23 compute-0 sudo[301253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:23 compute-0 python3.9[301255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:23 compute-0 sudo[301253]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.438 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.439 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.440 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.453 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.455 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec65dade0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.456 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.456 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.458 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.459 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.460 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:24:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:24:24.461 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
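The polling cycle above is the ceilometer compute agent walking every enabled pollster after its local_instances discovery came back empty (discovery cache [{'local_instances': []}]), so each meter is skipped for this cycle rather than reported as an error. A minimal sketch of that skip-or-poll decision, with illustrative names (run_pollster, discovery_cache, get_samples and publish are ours, not ceilometer's real internals):

    # Sketch of the per-pollster decision recorded by the DEBUG lines above.
    def run_pollster(name, pollster, discovery_cache, publish):
        resources = discovery_cache.get("local_instances", [])
        if not resources:
            # -> "Skip pollster <name>, no resources found this cycle"
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        for sample in pollster.get_samples(resources):
            publish(sample)

    # With no local instances discovered, every pollster takes the skip branch:
    run_pollster("memory.usage", None, {"local_instances": []}, print)

This is why the "Finished processing pollster" lines arrive with no samples in between: the node simply hosts no guest VMs yet.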
Oct 02 19:24:24 compute-0 sudo[301438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwjpmyqkquqgfoibwwetjumgjkpvjhdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433064.096622-638-30330056035414/AnsiballZ_file.py'
Oct 02 19:24:24 compute-0 sudo[301438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:24 compute-0 podman[301380]: 2025-10-02 19:24:24.652112035 +0000 UTC m=+0.119917528 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Oct 02 19:24:24 compute-0 podman[301381]: 2025-10-02 19:24:24.679734118 +0000 UTC m=+0.138749901 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
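The two podman health_status records above show edpm-managed containers passing the healthcheck command baked into their config_data ('test': '/openstack/healthcheck ...'), hence health_status=healthy and health_failing_streak=0. The same probe can be triggered by hand with podman's healthcheck subcommand; a small sketch, using a container name taken from the log:

    import subprocess

    # Runs the container's configured healthcheck once and reports the result.
    # "openstack_network_exporter" is the container_name seen in the log above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "openstack_network_exporter"],
        capture_output=True, text=True,
    )
    # podman exits 0 when the check passes (i.e. health_status=healthy).
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")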
Oct 02 19:24:24 compute-0 python3.9[301447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:24 compute-0 sudo[301438]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:25 compute-0 ceph-mon[191910]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:25 compute-0 sudo[301603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtozxrrzyfxszrsaprqsfbxsdsdtuasv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433065.0733364-638-98831502075816/AnsiballZ_file.py'
Oct 02 19:24:25 compute-0 sudo[301603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:25 compute-0 python3.9[301605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:25 compute-0 sudo[301603]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:27 compute-0 ceph-mon[191910]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:27 compute-0 sudo[301755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifavrnslmrrpeosnhbuvynuidhnzeot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433066.126239-638-249249001435070/AnsiballZ_file.py'
Oct 02 19:24:27 compute-0 sudo[301755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:27 compute-0 python3.9[301757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:27 compute-0 sudo[301755]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:28 compute-0 sudo[301907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakhtlsiehmrcohsstqnimlbvoobcsww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433067.7743275-638-78601915525867/AnsiballZ_file.py'
Oct 02 19:24:28 compute-0 sudo[301907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:28 compute-0 python3.9[301909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:28 compute-0 sudo[301907]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:29 compute-0 ceph-mon[191910]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
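The interleaved ceph-mon/ceph-mgr lines are routine housekeeping: pgmap updates showing all 321 placement groups active+clean, plus the monitor periodically recomputing its internal cache split (_set_new_cache_sizes). The same pgmap summary can be pulled programmatically; a sketch using the ceph CLI's JSON output, assuming the ceph client and an admin keyring are available on the node:

    import json
    import subprocess

    # `ceph status --format json` returns the same pgmap summary the mon logs.
    status = json.loads(
        subprocess.check_output(["ceph", "status", "--format", "json"])
    )
    pgmap = status["pgmap"]
    print(pgmap["num_pgs"], "pgs,",
          pgmap["bytes_used"], "bytes used of", pgmap["bytes_total"])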
Oct 02 19:24:29 compute-0 sudo[302059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbfsmwdpxedbfxzovlyzyiutfdpemit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433069.2017777-638-275633045950463/AnsiballZ_file.py'
Oct 02 19:24:29 compute-0 sudo[302059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:29 compute-0 podman[157186]: time="2025-10-02T19:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:24:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:24:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7278 "" "Go-http-client/1.1"
Oct 02 19:24:29 compute-0 python3.9[302061]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:29 compute-0 sudo[302059]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:30 compute-0 sudo[302257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzyzcwplovpvsfbltwhooyckgxkrwrzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433070.2152748-638-95950268014495/AnsiballZ_file.py'
Oct 02 19:24:30 compute-0 sudo[302257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:30 compute-0 podman[302187]: 2025-10-02 19:24:30.6841797 +0000 UTC m=+0.099679197 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:24:30 compute-0 podman[302186]: 2025-10-02 19:24:30.699061035 +0000 UTC m=+0.118078908 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:24:30 compute-0 podman[302185]: 2025-10-02 19:24:30.709553251 +0000 UTC m=+0.132341127 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:24:30 compute-0 python3.9[302271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:30 compute-0 sudo[302257]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:31 compute-0 ceph-mon[191910]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:31 compute-0 openstack_network_exporter[159337]: ERROR   19:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:24:31 compute-0 openstack_network_exporter[159337]: ERROR   19:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:31 compute-0 openstack_network_exporter[159337]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:24:31 compute-0 openstack_network_exporter[159337]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
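These openstack_network_exporter errors are expected on a compute node in this topology: the exporter probes ovsdb-server, ovn-northd and the userspace (dpif-netdev) datapath through their ovs-appctl control sockets, but ovn-northd runs only on controller nodes and a kernel-datapath OVS has no PMD statistics, so the lookups fail. The probe amounts to finding a *.ctl socket in the daemon's run directory; a sketch of that check, assuming the conventional OVS/OVN runtime paths:

    from pathlib import Path

    # ovs-appctl targets daemons via <name>.<pid>.ctl sockets in the run dir.
    # /run/openvswitch and /run/ovn are the conventional locations; adjust if
    # the daemons were started with a different OVS_RUNDIR.
    def find_control_socket(daemon, rundirs=("/run/openvswitch", "/run/ovn")):
        for rundir in rundirs:
            matches = sorted(Path(rundir).glob(f"{daemon}.*.ctl"))
            if matches:
                return matches[0]
        return None  # -> "no control socket files found", as logged above

    print(find_control_socket("ovn-northd"))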
Oct 02 19:24:31 compute-0 sudo[302424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewozhfsieksncinprxvnkxgtbfshovxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433071.1248646-638-222765085986839/AnsiballZ_file.py'
Oct 02 19:24:31 compute-0 sudo[302424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:31 compute-0 python3.9[302426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:31 compute-0 sudo[302424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:24:32.267 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:24:32.268 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:24:32.268 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:24:32 compute-0 sudo[302592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvacaircurmyxgwalqmfhvxkyoqtuhzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433071.9429922-638-199892067781267/AnsiballZ_file.py'
Oct 02 19:24:32 compute-0 sudo[302592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:32 compute-0 podman[302550]: 2025-10-02 19:24:32.458242862 +0000 UTC m=+0.115979311 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler)
Oct 02 19:24:32 compute-0 python3.9[302596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:32 compute-0 sudo[302592]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:33 compute-0 ceph-mon[191910]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:33 compute-0 sudo[302746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnwgraszvrjaxzcfgmhudgkopueozvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433072.91828-638-232170335270722/AnsiballZ_file.py'
Oct 02 19:24:33 compute-0 sudo[302746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:33 compute-0 python3.9[302748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:33 compute-0 sudo[302746]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:24:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:34 compute-0 sudo[302898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqctoyfmbvzspfwgiqyldhhbsfqmjhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433073.8137026-638-188199007256783/AnsiballZ_file.py'
Oct 02 19:24:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:34 compute-0 sudo[302898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:34 compute-0 python3.9[302900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:34 compute-0 sudo[302898]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:35 compute-0 ceph-mon[191910]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:35 compute-0 sudo[303050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixhcdkxldkvsiqgnsjgxguoiqwzgplu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433074.8424213-638-218534463270258/AnsiballZ_file.py'
Oct 02 19:24:35 compute-0 sudo[303050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:35 compute-0 python3.9[303052]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:35 compute-0 sudo[303050]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:36 compute-0 sudo[303202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscviujtyjjtwvxgyssjistmajrrzwsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433075.727743-638-271982966660112/AnsiballZ_file.py'
Oct 02 19:24:36 compute-0 sudo[303202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:36 compute-0 python3.9[303204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:36 compute-0 sudo[303202]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:37 compute-0 ceph-mon[191910]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:37 compute-0 sudo[303354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwcoernlmaeyxnkbesaxmolpearltpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433076.7795992-638-73437257677432/AnsiballZ_file.py'
Oct 02 19:24:37 compute-0 sudo[303354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:37 compute-0 python3.9[303356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:37 compute-0 sudo[303354]: pam_unix(sudo:session): session closed for user root
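The long run of sudo/AnsiballZ_file.py pairs above is ansible's file module creating one systemd drop-in directory per libvirt socket unit (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd and their -ro/-admin variants), each root-owned with mode 0755. A sketch of the equivalent operation; the unit list is reconstructed from the log, and not every unit necessarily has every variant (this section shows no virtlogd-ro, for instance):

    import os

    UNITS = ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]
    SUFFIXES = ["", "-ro", "-admin"]

    # Mirrors the repeated ansible-ansible.builtin.file invocations:
    # state=directory, owner=root, group=root, mode=0755.
    for unit in UNITS:
        for suffix in SUFFIXES:
            path = f"/etc/systemd/system/{unit}{suffix}.socket.d"
            os.makedirs(path, mode=0o755, exist_ok=True)
            os.chown(path, 0, 0)  # root:root (requires running as root)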
Oct 02 19:24:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:39 compute-0 sudo[303506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjaylihtxlkgyctlyaoxbywxgwuxpwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433077.7691398-737-121959536862267/AnsiballZ_stat.py'
Oct 02 19:24:39 compute-0 sudo[303506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:39 compute-0 ceph-mon[191910]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:39 compute-0 python3.9[303508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:39 compute-0 sudo[303506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:39 compute-0 sudo[303584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvhnzajxseulbmjpkksmrhdclhtxrhxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433077.7691398-737-121959536862267/AnsiballZ_file.py'
Oct 02 19:24:39 compute-0 sudo[303584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:39 compute-0 python3.9[303586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:40 compute-0 sudo[303584]: pam_unix(sudo:session): session closed for user root
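Each drop-in's override.conf is then handled by a stat/file pair: ansible.legacy.stat fetches the file's sha1 checksum (get_checksum=True, checksum_algorithm=sha1) so the template task can decide whether the rendered libvirt-socket.unit.j2 changed, and ansible.legacy.file enforces root ownership and mode 0644 on the deployed file. A sketch of that check-then-enforce step; the path comes from the log, the helper name is ours:

    import hashlib
    import os

    def ensure_mode(path, mode=0o644):
        # stat step: sha1 checksum, matching checksum_algorithm=sha1 above
        with open(path, "rb") as f:
            checksum = hashlib.sha1(f.read()).hexdigest()
        # file step: enforce ownership and permissions without rewriting content
        os.chown(path, 0, 0)  # root:root (requires running as root)
        os.chmod(path, mode)
        return checksum

    print(ensure_mode("/etc/systemd/system/virtlogd.socket.d/override.conf"))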
Oct 02 19:24:40 compute-0 sudo[303736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgqyjzxltyddqufwnimxnwuwtgqjdptm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433080.2647035-737-197107461330434/AnsiballZ_stat.py'
Oct 02 19:24:40 compute-0 sudo[303736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:40 compute-0 python3.9[303738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:40 compute-0 sudo[303736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:41 compute-0 ceph-mon[191910]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:41 compute-0 sudo[303814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqmuwbevsovkoilxbnbhmpedrhouuihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433080.2647035-737-197107461330434/AnsiballZ_file.py'
Oct 02 19:24:41 compute-0 sudo[303814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:41 compute-0 python3.9[303816]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:42 compute-0 sudo[303814]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:42 compute-0 sudo[303966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napzrmbynsyqkxjakzzaebdmceoqhnjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433082.287068-737-32014741914855/AnsiballZ_stat.py'
Oct 02 19:24:42 compute-0 sudo[303966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:43 compute-0 python3.9[303968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:43 compute-0 sudo[303966]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:43 compute-0 ceph-mon[191910]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:43 compute-0 sudo[304044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlplhsftcndftnvnosayzlbxskufvqbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433082.287068-737-32014741914855/AnsiballZ_file.py'
Oct 02 19:24:43 compute-0 sudo[304044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:43 compute-0 python3.9[304046]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:43 compute-0 sudo[304044]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:44 compute-0 sudo[304196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyxeculjzsfiqwmydbqrcgmhmirwotvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433083.979851-737-44524983338195/AnsiballZ_stat.py'
Oct 02 19:24:44 compute-0 sudo[304196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:44 compute-0 python3.9[304198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:44 compute-0 sudo[304196]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:45 compute-0 sudo[304274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvuvxrcdokroffoibdgfnpywqxrjjjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433083.979851-737-44524983338195/AnsiballZ_file.py'
Oct 02 19:24:45 compute-0 sudo[304274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:45 compute-0 ceph-mon[191910]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:45 compute-0 python3.9[304276]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:45 compute-0 sudo[304274]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:46 compute-0 sudo[304426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjjgrlplbqoawctthswsmebaatyvwxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433085.5447893-737-226219907829760/AnsiballZ_stat.py'
Oct 02 19:24:46 compute-0 sudo[304426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:46 compute-0 python3.9[304428]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:46 compute-0 sudo[304426]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:46 compute-0 sudo[304504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexcgfhfvqrosflrhncvdbbiivikcbvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433085.5447893-737-226219907829760/AnsiballZ_file.py'
Oct 02 19:24:46 compute-0 sudo[304504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:46 compute-0 python3.9[304506]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:47 compute-0 sudo[304504]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:47 compute-0 ceph-mon[191910]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:47 compute-0 sudo[304656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgwodscmfbqlbtfeshakuknweqkoyyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433087.2349226-737-228735546184650/AnsiballZ_stat.py'
Oct 02 19:24:47 compute-0 sudo[304656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:47 compute-0 python3.9[304658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:48 compute-0 sudo[304656]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:48 compute-0 sudo[304734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azsvjohditrluhcxnrytxrkkjpcbeahh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433087.2349226-737-228735546184650/AnsiballZ_file.py'
Oct 02 19:24:48 compute-0 sudo[304734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:48 compute-0 python3.9[304736]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:48 compute-0 sudo[304734]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:49 compute-0 ceph-mon[191910]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:49 compute-0 sudo[304886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peflhmkewsrelpmhwvuhkporuzegstqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433088.871141-737-111632357243930/AnsiballZ_stat.py'
Oct 02 19:24:49 compute-0 sudo[304886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:49 compute-0 python3.9[304888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:49 compute-0 sudo[304886]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:50 compute-0 sudo[304965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyyyzowxtkxzensunmagjlpvmvajafxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433088.871141-737-111632357243930/AnsiballZ_file.py'
Oct 02 19:24:50 compute-0 sudo[304965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:50 compute-0 python3.9[304967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:50 compute-0 sudo[304965]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:51 compute-0 sudo[305117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vygivpyhjmvsigxzendohgkpxaggbtaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433090.5292687-737-18967694995589/AnsiballZ_stat.py'
Oct 02 19:24:51 compute-0 sudo[305117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:51 compute-0 ceph-mon[191910]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:51 compute-0 python3.9[305119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:51 compute-0 sudo[305117]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:51 compute-0 sudo[305195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfbhgfdysiydevhnibebwxpfbfebdrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433090.5292687-737-18967694995589/AnsiballZ_file.py'
Oct 02 19:24:51 compute-0 sudo[305195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:51 compute-0 python3.9[305197]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:51 compute-0 sudo[305195]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:53 compute-0 podman[305322]: 2025-10-02 19:24:53.168263865 +0000 UTC m=+0.093425277 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:24:53 compute-0 podman[305321]: 2025-10-02 19:24:53.17542783 +0000 UTC m=+0.103412859 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:24:53 compute-0 sudo[305378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udkcfpyyljovfhvtfhnlidmscwtooftc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433092.6377215-737-82418414321028/AnsiballZ_stat.py'
Oct 02 19:24:53 compute-0 sudo[305378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:53 compute-0 ceph-mon[191910]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:53 compute-0 python3.9[305390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:53 compute-0 sudo[305378]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:53 compute-0 sudo[305466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzgmhwngvwqcpkvdxzvgfcnyngqyxhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433092.6377215-737-82418414321028/AnsiballZ_file.py'
Oct 02 19:24:53 compute-0 sudo[305466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:54 compute-0 python3.9[305468]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:54 compute-0 sudo[305466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:55 compute-0 ceph-mon[191910]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:55 compute-0 podman[305574]: 2025-10-02 19:24:55.710237116 +0000 UTC m=+0.127449553 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal)
Oct 02 19:24:55 compute-0 podman[305581]: 2025-10-02 19:24:55.740615334 +0000 UTC m=+0.162911549 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:24:55 compute-0 sudo[305662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbczpeqhibjlvppekemlscylazyzneg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433095.2155845-737-54616742236177/AnsiballZ_stat.py'
Oct 02 19:24:55 compute-0 sudo[305662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:55 compute-0 python3.9[305664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:55 compute-0 sudo[305662]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:56 compute-0 sudo[305740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewwzpowbjievmekcyqmsemorbisaayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433095.2155845-737-54616742236177/AnsiballZ_file.py'
Oct 02 19:24:56 compute-0 sudo[305740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:56 compute-0 python3.9[305742]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:56 compute-0 sudo[305740]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:57 compute-0 ceph-mon[191910]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:57 compute-0 sudo[305892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjueuhpdtehtshzbrtygdkbosgigzqny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433096.943759-737-118305285659343/AnsiballZ_stat.py'
Oct 02 19:24:57 compute-0 sudo[305892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:57 compute-0 python3.9[305894]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:57 compute-0 sudo[305892]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:58 compute-0 sudo[305970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufkhzjddqvggavnerhyvlzivlolptbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433096.943759-737-118305285659343/AnsiballZ_file.py'
Oct 02 19:24:58 compute-0 sudo[305970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:58 compute-0 python3.9[305972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:24:58 compute-0 sudo[305970]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:59 compute-0 sudo[306122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcefarfdpzrwfbpgyolebtrngyokiqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433098.6354775-737-21960912889077/AnsiballZ_stat.py'
Oct 02 19:24:59 compute-0 sudo[306122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:59 compute-0 ceph-mon[191910]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:24:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:24:59 compute-0 python3.9[306124]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:24:59 compute-0 sudo[306122]: pam_unix(sudo:session): session closed for user root
Oct 02 19:24:59 compute-0 podman[157186]: time="2025-10-02T19:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:24:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:24:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7286 "" "Go-http-client/1.1"
Oct 02 19:24:59 compute-0 sudo[306200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tataryicxqxikjdebgpqdlbtxptucrif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433098.6354775-737-21960912889077/AnsiballZ_file.py'
Oct 02 19:24:59 compute-0 sudo[306200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:24:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:00 compute-0 python3.9[306202]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:00 compute-0 sudo[306200]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:00 compute-0 sudo[306387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejjnxaidscrnfltkfnglibfkaxjaoao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433100.3274465-737-93353772174886/AnsiballZ_stat.py'
Oct 02 19:25:00 compute-0 sudo[306387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:00 compute-0 podman[306328]: 2025-10-02 19:25:00.875857566 +0000 UTC m=+0.098177466 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:25:00 compute-0 podman[306327]: 2025-10-02 19:25:00.895208333 +0000 UTC m=+0.126179968 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:25:00 compute-0 podman[306326]: 2025-10-02 19:25:00.914210131 +0000 UTC m=+0.149943726 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct 02 19:25:01 compute-0 python3.9[306408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:01 compute-0 sudo[306387]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:01 compute-0 ceph-mon[191910]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:01 compute-0 openstack_network_exporter[159337]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:01 compute-0 openstack_network_exporter[159337]: ERROR   19:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:25:01 compute-0 openstack_network_exporter[159337]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:01 compute-0 openstack_network_exporter[159337]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:25:01 compute-0 openstack_network_exporter[159337]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:25:01 compute-0 sudo[306490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkndmizkyuyejviqqjyqeohsmdpqjydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433100.3274465-737-93353772174886/AnsiballZ_file.py'
Oct 02 19:25:01 compute-0 sudo[306490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:01 compute-0 python3.9[306492]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:01 compute-0 sudo[306490]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:02 compute-0 sudo[306642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcaqigykpazjhhrfhvhxbmasequmxqtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433102.0265114-737-261244208786688/AnsiballZ_stat.py'
Oct 02 19:25:02 compute-0 sudo[306642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:02 compute-0 podman[306644]: 2025-10-02 19:25:02.668682667 +0000 UTC m=+0.114506351 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:25:02 compute-0 python3.9[306645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:02 compute-0 sudo[306642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:03 compute-0 sudo[306741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jricnvuucutikakntmfsbmjdhhxrbagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433102.0265114-737-261244208786688/AnsiballZ_file.py'
Oct 02 19:25:03 compute-0 sudo[306741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:03 compute-0 ceph-mon[191910]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:03 compute-0 python3.9[306743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:03 compute-0 sudo[306741]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:25:03
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'vms', '.rgw.root']
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:25:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:05 compute-0 python3.9[306893]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:25:05 compute-0 ceph-mon[191910]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:05 compute-0 sudo[306973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:05 compute-0 sudo[306973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:05 compute-0 sudo[306973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:06 compute-0 sudo[306998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:25:06 compute-0 sudo[306998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:06 compute-0 sudo[306998]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:06 compute-0 sudo[307023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:06 compute-0 sudo[307023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:06 compute-0 sudo[307023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:06 compute-0 sudo[307048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:25:06 compute-0 sudo[307048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:07 compute-0 sudo[307048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 83fcf603-5933-4714-9ff3-9d3ce043a345 does not exist
Oct 02 19:25:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d8333c4b-6815-498c-a665-92ea523f63ff does not exist
Oct 02 19:25:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fa7b140e-b85f-445a-8d64-ae837aff39ef does not exist
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:25:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:25:07 compute-0 sudo[307109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:07 compute-0 sudo[307109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:07 compute-0 sudo[307109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:07 compute-0 sudo[307156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:25:07 compute-0 sudo[307156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:07 compute-0 sudo[307156]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:07 compute-0 ceph-mon[191910]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:25:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:25:07 compute-0 sudo[307249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qicwbdltzoptbxlvvuphdvpytdnqcguz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433105.677052-901-279884039889646/AnsiballZ_seboolean.py'
Oct 02 19:25:07 compute-0 sudo[307203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:07 compute-0 sudo[307249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:07 compute-0 sudo[307203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:07 compute-0 sudo[307203]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:07 compute-0 sudo[307254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:25:07 compute-0 sudo[307254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:07 compute-0 python3.9[307252]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 19:25:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.132082583 +0000 UTC m=+0.096320170 container create 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.082141502 +0000 UTC m=+0.046379099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:08 compute-0 systemd[1]: Started libpod-conmon-9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb.scope.
Oct 02 19:25:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.285909834 +0000 UTC m=+0.250147421 container init 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.299492014 +0000 UTC m=+0.263729581 container start 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:25:08 compute-0 kind_merkle[307332]: 167 167
Oct 02 19:25:08 compute-0 systemd[1]: libpod-9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb.scope: Deactivated successfully.
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.325134543 +0000 UTC m=+0.289372110 container attach 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.325679837 +0000 UTC m=+0.289917404 container died 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8d06e08512549c35e50a5f57e1d3cca465cc6c816637c149d1fa04e7b8021a-merged.mount: Deactivated successfully.
Oct 02 19:25:08 compute-0 sudo[307249]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:08 compute-0 podman[307316]: 2025-10-02 19:25:08.423425884 +0000 UTC m=+0.387663481 container remove 9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:25:08 compute-0 systemd[1]: libpod-conmon-9126cbbab2ac1554fd667111a6e8d94773a2dc4c376dd6dcaba98d23761031cb.scope: Deactivated successfully.
Oct 02 19:25:08 compute-0 podman[307379]: 2025-10-02 19:25:08.676624245 +0000 UTC m=+0.087511727 container create 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:25:08 compute-0 podman[307379]: 2025-10-02 19:25:08.639470391 +0000 UTC m=+0.050357963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:08 compute-0 systemd[1]: Started libpod-conmon-1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0.scope.
Oct 02 19:25:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:08 compute-0 podman[307379]: 2025-10-02 19:25:08.81853493 +0000 UTC m=+0.229422492 container init 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:25:08 compute-0 podman[307379]: 2025-10-02 19:25:08.833081745 +0000 UTC m=+0.243969217 container start 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:25:08 compute-0 podman[307379]: 2025-10-02 19:25:08.837938644 +0000 UTC m=+0.248826206 container attach 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:25:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:09 compute-0 ceph-mon[191910]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:09 compute-0 sudo[307523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzrplpjxcxrqafypezabszokcjloogp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433108.7147162-909-76402638209278/AnsiballZ_copy.py'
Oct 02 19:25:09 compute-0 sudo[307523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:09 compute-0 python3.9[307525]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:09 compute-0 sudo[307523]: pam_unix(sudo:session): session closed for user root
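The ansible-ansible.legacy.copy line records the full module argument set: with remote_src=True the certificate is copied from a path already on the host and given the requested owner and mode in place. A rough Python equivalent of that task (a sketch only; the real module also handles checksums, backups, and SELinux attributes):

```python
import os
import shutil

def install_cert(src: str, dest: str, mode: int,
                 owner: str = "root", group: str = "root") -> None:
    # Approximates ansible.legacy.copy with remote_src=True: the source file
    # already exists on the host, so this is a local copy plus ownership/mode.
    shutil.copyfile(src, dest)
    os.chmod(dest, mode)
    shutil.chown(dest, user=owner, group=group)

install_cert("/var/lib/openstack/certs/libvirt/default/tls.crt",
             "/etc/pki/libvirt/servercert.pem", 0o644)
```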
Oct 02 19:25:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:10 compute-0 unruffled_wing[307421]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:25:10 compute-0 unruffled_wing[307421]: --> relative data size: 1.0
Oct 02 19:25:10 compute-0 unruffled_wing[307421]: --> All data devices are unavailable
Oct 02 19:25:10 compute-0 podman[307379]: 2025-10-02 19:25:10.138257865 +0000 UTC m=+1.549145347 container died 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:25:10 compute-0 systemd[1]: libpod-1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0.scope: Deactivated successfully.
Oct 02 19:25:10 compute-0 systemd[1]: libpod-1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0.scope: Consumed 1.224s CPU time.
Oct 02 19:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-51dc7df21bae30248247dcaf73e89a9fd64dbf386a5f508e9506fae6448401af-merged.mount: Deactivated successfully.
Oct 02 19:25:10 compute-0 podman[307379]: 2025-10-02 19:25:10.247999259 +0000 UTC m=+1.658886741 container remove 1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:25:10 compute-0 systemd[1]: libpod-conmon-1b6b8a8d7133c0ad1f569851ca6ce2b368694b5f513f4a2738b4f66e2a4756a0.scope: Deactivated successfully.
Oct 02 19:25:10 compute-0 sudo[307254]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:10 compute-0 sudo[307686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:10 compute-0 sudo[307686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:10 compute-0 sudo[307686]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:10 compute-0 sudo[307736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjruccvvsjqwxxfbykadaeckkqyjenx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433109.9098964-909-220451815157831/AnsiballZ_copy.py'
Oct 02 19:25:10 compute-0 sudo[307736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:10 compute-0 sudo[307739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:25:10 compute-0 sudo[307739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:10 compute-0 sudo[307739]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:10 compute-0 python3.9[307743]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:10 compute-0 sudo[307736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:10 compute-0 sudo[307765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:10 compute-0 sudo[307765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:10 compute-0 sudo[307765]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:10 compute-0 sudo[307791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:25:10 compute-0 sudo[307791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
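The COMMAND field above shows the pattern cephadm uses for every probe in this section: the fsid-local copy of the cephadm script runs ceph-volume inside a one-shot container of the pinned ceph image, with `--` separating cephadm's own options from the ceph-volume arguments. A hypothetical wrapper for the same call (assuming a `cephadm` binary on PATH; the log invokes the copied script via sudo instead):

```python
import json
import subprocess

FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"  # cluster fsid, as logged above

def ceph_volume_lvm_list(fsid: str = FSID) -> dict:
    # Mirrors the logged invocation: cephadm ceph-volume --fsid <fsid> --
    # lvm list --format json. Requires root, like the sudo call in the log.
    proc = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid,
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(proc.stdout)
```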
Oct 02 19:25:11 compute-0 ceph-mon[191910]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.42693669 +0000 UTC m=+0.076909437 container create a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:25:11 compute-0 systemd[1]: Started libpod-conmon-a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024.scope.
Oct 02 19:25:11 compute-0 sudo[308018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piutkwezlsgljggsjovbtptwcavrccmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433110.968058-909-1570064041096/AnsiballZ_copy.py'
Oct 02 19:25:11 compute-0 sudo[308018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.397687386 +0000 UTC m=+0.047660133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.537766633 +0000 UTC m=+0.187739390 container init a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.550223632 +0000 UTC m=+0.200196359 container start a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.556494378 +0000 UTC m=+0.206467135 container attach a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:25:11 compute-0 priceless_cohen[308020]: 167 167
Oct 02 19:25:11 compute-0 systemd[1]: libpod-a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024.scope: Deactivated successfully.
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.562936189 +0000 UTC m=+0.212908946 container died a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-197f7d8fc8a1f8da9bfe5f3522ede26fb48e6392967b2b6da24109cd81c24155-merged.mount: Deactivated successfully.
Oct 02 19:25:11 compute-0 podman[307972]: 2025-10-02 19:25:11.62230079 +0000 UTC m=+0.272273507 container remove a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:25:11 compute-0 systemd[1]: libpod-conmon-a18282023e57074204c364a4afeecd17cc745f59e595ada38d6fe8e741b3f024.scope: Deactivated successfully.
Oct 02 19:25:11 compute-0 python3.9[308022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:11 compute-0 sudo[308018]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:11 compute-0 podman[308045]: 2025-10-02 19:25:11.859206849 +0000 UTC m=+0.086493359 container create 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:25:11 compute-0 podman[308045]: 2025-10-02 19:25:11.832312518 +0000 UTC m=+0.059599028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:11 compute-0 systemd[1]: Started libpod-conmon-2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729.scope.
Oct 02 19:25:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff07d290e4109f0538befd3d6ceb4c55bd13155455b9d718017ec759ce251307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff07d290e4109f0538befd3d6ceb4c55bd13155455b9d718017ec759ce251307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff07d290e4109f0538befd3d6ceb4c55bd13155455b9d718017ec759ce251307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff07d290e4109f0538befd3d6ceb4c55bd13155455b9d718017ec759ce251307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:12 compute-0 podman[308045]: 2025-10-02 19:25:12.044913514 +0000 UTC m=+0.272200014 container init 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:25:12 compute-0 podman[308045]: 2025-10-02 19:25:12.063746853 +0000 UTC m=+0.291033363 container start 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:25:12 compute-0 podman[308045]: 2025-10-02 19:25:12.07462598 +0000 UTC m=+0.301912460 container attach 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:25:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
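The autoscaler lines follow a single rule that the logged numbers confirm: each pool's raw pg target is its share of used space times its bias times the root pg target, then quantized to a power of two (with floors, which is why near-empty pools still sit at 32). Assuming a root pg target of 300, i.e. 3 OSDs times the default mon_target_pg_per_osd of 100, reproduces the logged figures exactly (a sketch, not mgr code):

```python
def pg_target(usage_ratio: float, bias: float, root_pg_target: float = 300.0) -> float:
    # Assumed root_pg_target = num_osds * mon_target_pg_per_osd = 3 * 100.
    return usage_ratio * bias * root_pg_target

print(pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557249951162337
print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635
```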
Oct 02 19:25:12 compute-0 sudo[308215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzyqxcnhwafhjbbrlvccwzhwmrxclwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433112.0029135-909-219461344019826/AnsiballZ_copy.py'
Oct 02 19:25:12 compute-0 sudo[308215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:12 compute-0 python3.9[308217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:12 compute-0 sudo[308215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]: {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     "0": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "devices": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "/dev/loop3"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             ],
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_name": "ceph_lv0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_size": "21470642176",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "name": "ceph_lv0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "tags": {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.crush_device_class": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.encrypted": "0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_id": "0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.vdo": "0"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             },
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "vg_name": "ceph_vg0"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         }
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     ],
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     "1": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "devices": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "/dev/loop4"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             ],
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_name": "ceph_lv1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_size": "21470642176",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "name": "ceph_lv1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "tags": {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.crush_device_class": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.encrypted": "0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_id": "1",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.vdo": "0"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             },
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "vg_name": "ceph_vg1"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         }
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     ],
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     "2": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "devices": [
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "/dev/loop5"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             ],
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_name": "ceph_lv2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_size": "21470642176",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "name": "ceph_lv2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "tags": {
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.cluster_name": "ceph",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.crush_device_class": "",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.encrypted": "0",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osd_id": "2",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:                 "ceph.vdo": "0"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             },
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "type": "block",
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:             "vg_name": "ceph_vg2"
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:         }
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]:     ]
Oct 02 19:25:12 compute-0 unruffled_neumann[308093]: }
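The JSON above is the complete `ceph-volume lvm list` inventory for this host: three bluestore OSDs (ids 0-2), each a "block" LV on its own VG backed by a loop device. A small helper that reduces it to an osd_id-to-LV-path map (sketch; field names exactly as logged):

```python
import json

def osd_block_devices(lvm_list_json: str) -> dict[int, str]:
    # Keys of the lvm-list JSON are OSD ids as strings, each mapping to a
    # list of LVs; pick the "block" entry for each, as in the output above.
    devices = {}
    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            if lv.get("type") == "block":
                devices[int(osd_id)] = lv["lv_path"]
    return devices

# For the output above:
# {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1', 2: '/dev/ceph_vg2/ceph_lv2'}
```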
Oct 02 19:25:12 compute-0 podman[308045]: 2025-10-02 19:25:12.951541937 +0000 UTC m=+1.178828447 container died 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:25:12 compute-0 systemd[1]: libpod-2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729.scope: Deactivated successfully.
Oct 02 19:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff07d290e4109f0538befd3d6ceb4c55bd13155455b9d718017ec759ce251307-merged.mount: Deactivated successfully.
Oct 02 19:25:13 compute-0 podman[308045]: 2025-10-02 19:25:13.06502974 +0000 UTC m=+1.292316220 container remove 2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:25:13 compute-0 systemd[1]: libpod-conmon-2f295c0dd148dc3075fa983a68dfae549a9816e53df89b3b029524e519ff4729.scope: Deactivated successfully.
Oct 02 19:25:13 compute-0 sudo[307791]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:13 compute-0 sudo[308283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:13 compute-0 sudo[308283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:13 compute-0 sudo[308283]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:13 compute-0 sudo[308336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:25:13 compute-0 sudo[308336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:13 compute-0 sudo[308336]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:13 compute-0 sudo[308384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:13 compute-0 sudo[308384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:13 compute-0 sudo[308384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:13 compute-0 ceph-mon[191910]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:13 compute-0 sudo[308430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:25:13 compute-0 sudo[308430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:13 compute-0 sudo[308484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyzlagfwiphqslmefuivkzebnpwlrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433113.1063797-909-12042457433523/AnsiballZ_copy.py'
Oct 02 19:25:13 compute-0 sudo[308484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:13 compute-0 python3.9[308486]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:13 compute-0 sudo[308484]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.122158995 +0000 UTC m=+0.085057513 container create 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.08947927 +0000 UTC m=+0.052377858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:14 compute-0 systemd[1]: Started libpod-conmon-461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9.scope.
Oct 02 19:25:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.24426861 +0000 UTC m=+0.207167218 container init 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.256469833 +0000 UTC m=+0.219368361 container start 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:25:14 compute-0 modest_morse[308588]: 167 167
Oct 02 19:25:14 compute-0 systemd[1]: libpod-461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9.scope: Deactivated successfully.
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.26279974 +0000 UTC m=+0.225698348 container attach 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.265765179 +0000 UTC m=+0.228663787 container died 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e13836251f9ee90542aad06a685ffffec2a2bc0e814f484cce85b214fd97a338-merged.mount: Deactivated successfully.
Oct 02 19:25:14 compute-0 podman[308547]: 2025-10-02 19:25:14.32695157 +0000 UTC m=+0.289850078 container remove 461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:25:14 compute-0 systemd[1]: libpod-conmon-461eafb6a90d45b906fde0aa08e0dda9adec1a8bbf28f41a8d655d7d9eddafc9.scope: Deactivated successfully.
Oct 02 19:25:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:14 compute-0 podman[308667]: 2025-10-02 19:25:14.589167064 +0000 UTC m=+0.087105297 container create 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:25:14 compute-0 podman[308667]: 2025-10-02 19:25:14.55651255 +0000 UTC m=+0.054450763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:25:14 compute-0 systemd[1]: Started libpod-conmon-6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965.scope.
Oct 02 19:25:14 compute-0 sudo[308728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrpqlwabuqmjhckakbxlrcqtpqfmyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433114.1735115-945-133068891202851/AnsiballZ_copy.py'
Oct 02 19:25:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2267578a026809fba717e1786361924ede9bf49141d241c523403e2351a9142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2267578a026809fba717e1786361924ede9bf49141d241c523403e2351a9142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2267578a026809fba717e1786361924ede9bf49141d241c523403e2351a9142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2267578a026809fba717e1786361924ede9bf49141d241c523403e2351a9142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:25:14 compute-0 sudo[308728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:14 compute-0 podman[308667]: 2025-10-02 19:25:14.762885566 +0000 UTC m=+0.260823839 container init 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:25:14 compute-0 podman[308667]: 2025-10-02 19:25:14.783633995 +0000 UTC m=+0.281572228 container start 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:25:14 compute-0 podman[308667]: 2025-10-02 19:25:14.790946229 +0000 UTC m=+0.288884462 container attach 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:25:14 compute-0 python3.9[308733]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:14 compute-0 sudo[308728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:15 compute-0 ceph-mon[191910]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:15 compute-0 sudo[308905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzyexokbuvwxfmxhfaeczsrlwtgghzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433115.2055912-945-22753117341403/AnsiballZ_copy.py'
Oct 02 19:25:15 compute-0 sudo[308905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]: {
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_id": 1,
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "type": "bluestore"
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     },
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_id": 2,
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "type": "bluestore"
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     },
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_id": 0,
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:         "type": "bluestore"
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]:     }
Oct 02 19:25:15 compute-0 quizzical_williamson[308729]: }
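The JSON block above is the OSD listing emitted by the short-lived ceph container (quizzical_williamson): it maps each OSD UUID to its cluster fsid, LVM device, osd_id, and bluestore type. As a minimal sketch of consuming that output, assuming it has been captured to a file named osd_inventory.json (a hypothetical name, not in the log):

    import json

    # Load the per-OSD inventory captured from the container's stdout.
    # "osd_inventory.json" is a hypothetical capture file; the log above
    # shows the raw JSON as it was printed.
    with open("osd_inventory.json") as fh:
        inventory = json.load(fh)

    # Each key is an osd_uuid; each value carries ceph_fsid, device,
    # osd_id, and type (bluestore here).
    for osd_uuid, meta in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']}) uuid={osd_uuid}")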
Oct 02 19:25:15 compute-0 systemd[1]: libpod-6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965.scope: Deactivated successfully.
Oct 02 19:25:15 compute-0 podman[308667]: 2025-10-02 19:25:15.889832674 +0000 UTC m=+1.387770877 container died 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:25:15 compute-0 systemd[1]: libpod-6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965.scope: Consumed 1.103s CPU time.
Oct 02 19:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2267578a026809fba717e1786361924ede9bf49141d241c523403e2351a9142-merged.mount: Deactivated successfully.
Oct 02 19:25:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:15 compute-0 python3.9[308910]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:15 compute-0 podman[308667]: 2025-10-02 19:25:15.999019846 +0000 UTC m=+1.496958049 container remove 6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:25:16 compute-0 sudo[308905]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:16 compute-0 systemd[1]: libpod-conmon-6821bda265802480b767647e499eab3b398d450b4297636a43796da566a15965.scope: Deactivated successfully.
Oct 02 19:25:16 compute-0 sudo[308430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 89b3d254-ab42-4fbf-8755-e4f50a8a81d4 does not exist
Oct 02 19:25:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6a2fc2bf-7170-4fcc-b12f-e6be752285ac does not exist
Oct 02 19:25:16 compute-0 sudo[308927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:25:16 compute-0 sudo[308927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:16 compute-0 sudo[308927]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:16 compute-0 sudo[308952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:25:16 compute-0 sudo[308952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:25:16 compute-0 sudo[308952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:17 compute-0 ceph-mon[191910]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:25:17 compute-0 sudo[309126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgeprvwrwxmnqnswfwimkprbfiptzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433116.609045-945-130442667578447/AnsiballZ_copy.py'
Oct 02 19:25:17 compute-0 sudo[309126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:17 compute-0 python3.9[309128]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:17 compute-0 sudo[309126]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:18 compute-0 sudo[309278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbqstggopqzwskornkbmyfxqfmvdccbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433117.6110184-945-255805369924562/AnsiballZ_copy.py'
Oct 02 19:25:18 compute-0 sudo[309278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:19 compute-0 python3.9[309280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:19 compute-0 sudo[309278]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:19 compute-0 ceph-mon[191910]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:19 compute-0 sudo[309431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhjdfuhryaikxrffleomfmamnacuyfgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433119.2803261-945-37428425148268/AnsiballZ_copy.py'
Oct 02 19:25:19 compute-0 sudo[309431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:20 compute-0 python3.9[309433]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:20 compute-0 sudo[309431]: pam_unix(sudo:session): session closed for user root
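The four ansible-ansible.legacy.copy tasks above install the libvirt TLS material (server cert/key, client cert/key, CA cert) under /etc/pki/qemu, owned root:qemu with mode 0640. A hedged Python equivalent of the last copy, using only the values visible in the log (shutil is an illustrative choice here, not how the Ansible copy module works internally):

    import os
    import shutil

    # Mirror the logged ca-cert copy: source, destination, owner, group,
    # and mode are taken from the log line; the approach is illustrative.
    src = "/var/lib/openstack/certs/libvirt/default/ca.crt"
    dst = "/etc/pki/qemu/ca-cert.pem"
    shutil.copyfile(src, dst)
    shutil.chown(dst, user="root", group="qemu")
    os.chmod(dst, 0o640)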
Oct 02 19:25:21 compute-0 sudo[309583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcrnkieodoxnklmgfdkiqduimhwxbid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433120.5738287-982-173925843943287/AnsiballZ_file.py'
Oct 02 19:25:21 compute-0 sudo[309583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:21 compute-0 ceph-mon[191910]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:21 compute-0 python3.9[309585]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:21 compute-0 sudo[309583]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:22 compute-0 sudo[309735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfacbuccywwowtvzokbmltxxnhdcjij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433121.5858877-990-160174971863801/AnsiballZ_find.py'
Oct 02 19:25:22 compute-0 sudo[309735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:22 compute-0 python3.9[309737]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:25:22 compute-0 sudo[309735]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:23 compute-0 ceph-mon[191910]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:23 compute-0 sudo[309887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbteyvoxviguacueyojojhfalysqpons ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433122.6309283-998-38298909363505/AnsiballZ_command.py'
Oct 02 19:25:23 compute-0 sudo[309887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:23 compute-0 podman[309890]: 2025-10-02 19:25:23.309169968 +0000 UTC m=+0.076076586 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:25:23 compute-0 podman[309889]: 2025-10-02 19:25:23.32171759 +0000 UTC m=+0.086697687 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:25:23 compute-0 python3.9[309891]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:25:23 compute-0 sudo[309887]: pam_unix(sudo:session): session closed for user root
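The shell snippet logged above prints the cluster name ("ceph") and then extracts the fsid from ceph.conf: awk splits each line matching /fsid/ on '=', prints the right-hand side, and xargs trims the surrounding whitespace. A minimal Python sketch of the same extraction, assuming the usual INI-style ceph.conf at the path shown in the log:

    # Re-implementation of the logged awk|xargs step: print the trimmed
    # value of the first "fsid = <uuid>" line in ceph.conf.
    path = "/var/lib/openstack/config/ceph/ceph.conf"
    with open(path) as fh:
        for line in fh:
            if "fsid" in line and "=" in line:
                print(line.split("=", 1)[1].strip())
                break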
Oct 02 19:25:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:24 compute-0 python3.9[310082]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:25:25 compute-0 ceph-mon[191910]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:26 compute-0 python3.9[310232]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:26 compute-0 podman[310327]: 2025-10-02 19:25:26.694514311 +0000 UTC m=+0.160917583 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:25:26 compute-0 podman[310328]: 2025-10-02 19:25:26.712289151 +0000 UTC m=+0.170927718 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:25:26 compute-0 python3.9[310382]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433125.0741882-1017-114942151430236/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c46f4ecd70a03b399a985f41ba50c8a57f021cf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:27 compute-0 ceph-mon[191910]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:27 compute-0 sudo[310548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubeaovvaimnprltrkzsqimbupocfsgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433127.0879567-1032-119989455807477/AnsiballZ_command.py'
Oct 02 19:25:27 compute-0 sudo[310548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:27 compute-0 python3.9[310550]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 6019f664-a1c2-5955-8391-692cb79a59f9
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:25:27 compute-0 polkitd[6325]: Registered Authentication Agent for unix-process:310552:429944 (system bus name :1.4060 [/usr/bin/pkttyagent --process 310552 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 19:25:27 compute-0 polkitd[6325]: Unregistered Authentication Agent for unix-process:310552:429944 (system bus name :1.4060, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 19:25:27 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:25:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:27 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:25:28 compute-0 polkitd[6325]: Registered Authentication Agent for unix-process:310551:429943 (system bus name :1.4061 [/usr/bin/pkttyagent --process 310551 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 19:25:28 compute-0 polkitd[6325]: Unregistered Authentication Agent for unix-process:310551:429943 (system bus name :1.4061, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 19:25:28 compute-0 sudo[310548]: pam_unix(sudo:session): session closed for user root
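The command above undefines any stale libvirt secret for the cluster fsid and registers a fresh one from /tmp/secret.xml (rendered earlier from secret.xml.j2; the later task at 19:25:32 passes FSID and a base64 KEY, presumably to set the secret's value). A hedged sketch of the same two virsh calls; the XML body is an assumption modeled on the standard libvirt ceph-usage secret format, since the actual template is not shown in the log:

    import subprocess

    # FSID matches the ceph_fsid seen throughout this log; the XML layout
    # is assumed (standard libvirt <secret> with a ceph usage), not taken
    # from the real secret.xml.j2.
    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    """

    with open("/tmp/secret.xml", "w") as fh:
        fh.write(SECRET_XML)

    # Ignore failure on undefine (the secret may not exist yet), then define.
    subprocess.run(["virsh", "secret-undefine", FSID], check=False)
    subprocess.run(["virsh", "secret-define", "--file", "/tmp/secret.xml"], check=True)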
Oct 02 19:25:29 compute-0 ceph-mon[191910]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:29 compute-0 python3.9[310729]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:29 compute-0 podman[157186]: time="2025-10-02T19:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:25:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:25:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7287 "" "Go-http-client/1.1"
Oct 02 19:25:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:31 compute-0 sudo[310907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eelzyjxycbcwdlupzhcrhgnzaxgugysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433129.79441-1048-79683217831/AnsiballZ_command.py'
Oct 02 19:25:31 compute-0 sudo[310907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:31 compute-0 podman[310858]: 2025-10-02 19:25:31.11281911 +0000 UTC m=+0.112236744 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:25:31 compute-0 podman[310861]: 2025-10-02 19:25:31.114772852 +0000 UTC m=+0.111262468 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:25:31 compute-0 podman[310854]: 2025-10-02 19:25:31.141198472 +0000 UTC m=+0.140584275 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:25:31 compute-0 ceph-mon[191910]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:31 compute-0 sudo[310907]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:31 compute-0 openstack_network_exporter[159337]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:31 compute-0 openstack_network_exporter[159337]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:31 compute-0 openstack_network_exporter[159337]: ERROR   19:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:25:31 compute-0 openstack_network_exporter[159337]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:25:31 compute-0 openstack_network_exporter[159337]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:25:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:32 compute-0 sudo[311095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsvovzzlpzdzvnyvixdyhmawpvuhyoeg ; FSID=6019f664-a1c2-5955-8391-692cb79a59f9 KEY=AQBszd5oAAAAABAAYG5TvDrd17sIUtWIgmz5JA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433131.676416-1056-6122729539791/AnsiballZ_command.py'
Oct 02 19:25:32 compute-0 sudo[311095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:25:32.269 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:25:32.269 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:25:32.270 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:32 compute-0 polkitd[6325]: Registered Authentication Agent for unix-process:311098:430405 (system bus name :1.4064 [/usr/bin/pkttyagent --process 311098 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 19:25:32 compute-0 polkitd[6325]: Unregistered Authentication Agent for unix-process:311098:430405 (system bus name :1.4064, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 19:25:32 compute-0 sudo[311095]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:33 compute-0 ceph-mon[191910]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:33 compute-0 sudo[311268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbgqiqesjlrqgyzrojgiirzhqurxviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433132.9210565-1064-250025756170191/AnsiballZ_copy.py'
Oct 02 19:25:33 compute-0 sudo[311268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:33 compute-0 podman[311227]: 2025-10-02 19:25:33.478932338 +0000 UTC m=+0.116480606 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, name=ubi9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, architecture=x86_64)
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:25:33 compute-0 python3.9[311273]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:33 compute-0 sudo[311268]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:34 compute-0 sudo[311423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjbxcsfnjabshgqvmgytjrczahdxnyoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433133.9507005-1072-114964464749376/AnsiballZ_stat.py'
Oct 02 19:25:34 compute-0 sudo[311423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:34 compute-0 python3.9[311425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:34 compute-0 sudo[311423]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:35 compute-0 sudo[311501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqzmqyvojejxypqscdjitdxzllpbivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433133.9507005-1072-114964464749376/AnsiballZ_file.py'
Oct 02 19:25:35 compute-0 ceph-mon[191910]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:35 compute-0 sudo[311501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:35 compute-0 python3.9[311503]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:35 compute-0 sudo[311501]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:36 compute-0 sudo[311653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbygmdkjsxxszdjjqgrsribpiswtbpwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433135.8399603-1085-124424463904266/AnsiballZ_file.py'
Oct 02 19:25:36 compute-0 sudo[311653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:36 compute-0 python3.9[311655]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:36 compute-0 sudo[311653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:37 compute-0 ceph-mon[191910]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:37 compute-0 sudo[311805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnczcfakbcraqbevreearcvrydulryjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433136.8741856-1093-150356484398864/AnsiballZ_stat.py'
Oct 02 19:25:37 compute-0 sudo[311805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:37 compute-0 python3.9[311807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:37 compute-0 sudo[311805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:37 compute-0 sudo[311883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrrwukhbbpuoafpcswyctryixyeunnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433136.8741856-1093-150356484398864/AnsiballZ_file.py'
Oct 02 19:25:37 compute-0 sudo[311883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:38 compute-0 python3.9[311885]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:38 compute-0 sudo[311883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:39 compute-0 sudo[312035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqczjpiykzqujjjoylvatqzzexkbkpko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433138.4912565-1105-66613310777684/AnsiballZ_stat.py'
Oct 02 19:25:39 compute-0 sudo[312035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:39 compute-0 ceph-mon[191910]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:39 compute-0 python3.9[312037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:39 compute-0 sudo[312035]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:40 compute-0 sudo[312113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixloxgztnjzfpvaxdxdxefflfwtksurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433138.4912565-1105-66613310777684/AnsiballZ_file.py'
Oct 02 19:25:40 compute-0 sudo[312113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:40 compute-0 python3.9[312115]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5rz4stip recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:40 compute-0 sudo[312113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:41 compute-0 ceph-mon[191910]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:41 compute-0 sudo[312265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfgpxisfwryfhoyiiwbzxqvuutdplkpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433141.0894535-1117-90731410376574/AnsiballZ_stat.py'
Oct 02 19:25:41 compute-0 sudo[312265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:41 compute-0 python3.9[312267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:41 compute-0 sudo[312265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:43 compute-0 sudo[312343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxfidwxevyczmvjfkanbqqedhaxoyjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433141.0894535-1117-90731410376574/AnsiballZ_file.py'
Oct 02 19:25:43 compute-0 sudo[312343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:43 compute-0 ceph-mon[191910]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:43 compute-0 python3.9[312345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:43 compute-0 sudo[312343]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:44 compute-0 sudo[312495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovwqiybiyrfgektmhkbxyxehiqohjpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433143.7877495-1130-275231741462917/AnsiballZ_command.py'
Oct 02 19:25:44 compute-0 sudo[312495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:44 compute-0 python3.9[312497]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:25:44 compute-0 sudo[312495]: pam_unix(sudo:session): session closed for user root
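nft -j list ruleset dumps the active ruleset as JSON, which the subsequent edpm_nftables_from_files step can reconcile against the YAML rule files under /var/lib/edpm-config/firewall. A small sketch of inspecting that JSON, assuming the documented top-level {"nftables": [...]} layout:

    import json
    import subprocess

    # Run the same command as the logged task and group chain names by
    # (family, table) as a quick structural overview of the ruleset.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout

    chains = {}
    for item in json.loads(out).get("nftables", []):
        if "chain" in item:
            c = item["chain"]
            chains.setdefault((c["family"], c["table"]), []).append(c["name"])

    for (family, table), names in sorted(chains.items()):
        print(f"{family} {table}: {', '.join(names)}")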
Oct 02 19:25:45 compute-0 ceph-mon[191910]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:45 compute-0 sudo[312648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahmcblltterymxtovmrnglecvedkowhk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433144.9929175-1138-263531641475817/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:25:45 compute-0 sudo[312648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:45 compute-0 python3[312650]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:25:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:45 compute-0 sudo[312648]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:46 compute-0 sudo[312800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkgmlnnrfxgtgtydcpkznjhzocwciqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433146.2378352-1146-16948966884069/AnsiballZ_stat.py'
Oct 02 19:25:46 compute-0 sudo[312800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:47 compute-0 python3.9[312802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:47 compute-0 sudo[312800]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:47 compute-0 ceph-mon[191910]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:47 compute-0 sudo[312878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvtfqacypzoaqpzunbkrerkpoukxuqug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433146.2378352-1146-16948966884069/AnsiballZ_file.py'
Oct 02 19:25:47 compute-0 sudo[312878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:47 compute-0 python3.9[312880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:47 compute-0 sudo[312878]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:48 compute-0 sudo[313030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prasavpuguafblfvpfeincybmftuimul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433147.983463-1158-170984906919849/AnsiballZ_stat.py'
Oct 02 19:25:48 compute-0 sudo[313030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:48 compute-0 python3.9[313032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:48 compute-0 sudo[313030]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:49 compute-0 sudo[313108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjunlbfgdtwfieipijdqkwrtolqpzcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433147.983463-1158-170984906919849/AnsiballZ_file.py'
Oct 02 19:25:49 compute-0 sudo[313108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:49 compute-0 ceph-mon[191910]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:49 compute-0 python3.9[313110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:49 compute-0 sudo[313108]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:50 compute-0 sudo[313261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qswsdbbadkjrytctuedtlsferotnwkpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433149.6961446-1170-60402623675999/AnsiballZ_stat.py'
Oct 02 19:25:50 compute-0 sudo[313261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:50 compute-0 python3.9[313263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:50 compute-0 sudo[313261]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:50 compute-0 sudo[313339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttdapqphuipxlukgfojooqmntmcedng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433149.6961446-1170-60402623675999/AnsiballZ_file.py'
Oct 02 19:25:50 compute-0 sudo[313339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:51 compute-0 python3.9[313341]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:51 compute-0 sudo[313339]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:51 compute-0 ceph-mon[191910]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:52 compute-0 sudo[313491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdqdrhjjwoaeaklxhsizqfyswtylppbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433151.548196-1182-26854345105774/AnsiballZ_stat.py'
Oct 02 19:25:52 compute-0 sudo[313491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:52 compute-0 python3.9[313493]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:52 compute-0 sudo[313491]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:53 compute-0 ceph-mon[191910]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:53 compute-0 sudo[313599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufropvzwzuzewzzfwgmejgxwnbkquyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433151.548196-1182-26854345105774/AnsiballZ_file.py'
Oct 02 19:25:53 compute-0 podman[313544]: 2025-10-02 19:25:53.637866493 +0000 UTC m=+0.106994134 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:25:53 compute-0 sudo[313599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:53 compute-0 podman[313543]: 2025-10-02 19:25:53.658103459 +0000 UTC m=+0.138374746 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930)
Oct 02 19:25:53 compute-0 python3.9[313612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:53 compute-0 sudo[313599]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:54 compute-0 sudo[313764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzwamieotrplcvnzmysyfzolckesvjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433154.1411965-1194-122933575205509/AnsiballZ_stat.py'
Oct 02 19:25:54 compute-0 sudo[313764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:54 compute-0 python3.9[313766]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:25:55 compute-0 sudo[313764]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:55 compute-0 ceph-mon[191910]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:55 compute-0 sudo[313842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgeilgzfbihnztoonswvhaixkmowhsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433154.1411965-1194-122933575205509/AnsiballZ_file.py'
Oct 02 19:25:55 compute-0 sudo[313842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:56 compute-0 python3.9[313844]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:25:56 compute-0 sudo[313842]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:57 compute-0 sudo[314027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyimqjhugywckmmjplpifczfljwoisi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433156.5742395-1207-192248937121878/AnsiballZ_command.py'
Oct 02 19:25:57 compute-0 sudo[314027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:57 compute-0 podman[313968]: 2025-10-02 19:25:57.205654279 +0000 UTC m=+0.151846302 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible)
Oct 02 19:25:57 compute-0 podman[313969]: 2025-10-02 19:25:57.209643845 +0000 UTC m=+0.155321355 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:25:57 compute-0 python3.9[314036]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
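This command dry-runs the assembled EDPM ruleset: the five fragments are concatenated in load order (chains first, so the later flushes, rules, and jumps can reference them) and piped to nft -c -f -, which parses and checks everything without touching the kernel. A sketch of the same check, with the file list taken from the logged command and the rest illustrative:

    import subprocess

    # Load order copied from the logged pipeline.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def ruleset_is_valid(paths=FRAGMENTS) -> bool:
        """Concatenate the fragments and let nft check them (-c = dry run)."""
        blob = b"".join(open(p, "rb").read() for p in paths)
        return subprocess.run(["nft", "-c", "-f", "-"], input=blob).returncode == 0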
Oct 02 19:25:57 compute-0 ceph-mon[191910]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:57 compute-0 sudo[314027]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:58 compute-0 sudo[314193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odlmdlpzclhocoglwahhwxgysxrorumv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433157.740358-1215-36688447727822/AnsiballZ_blockinfile.py'
Oct 02 19:25:58 compute-0 sudo[314193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:58 compute-0 python3.9[314195]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
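Given the marker settings in those arguments (marker=# {mark} ANSIBLE MANAGED BLOCK with BEGIN/END), the task maintains a block along these lines in /etc/sysconfig/nftables.conf so the EDPM includes load on every boot; validate=nft -c -f %s means the candidate file must pass an nft dry run before it replaces the original. Reconstructed from the module arguments, surrounding file content not shown:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK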
Oct 02 19:25:58 compute-0 sudo[314193]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:25:59 compute-0 ceph-mon[191910]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:25:59 compute-0 sudo[314345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzznqgxtzqiiwpszhowxyrydmiaqwfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433159.1040916-1224-201402956009954/AnsiballZ_command.py'
Oct 02 19:25:59 compute-0 sudo[314345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:25:59 compute-0 podman[157186]: time="2025-10-02T19:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:25:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:25:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7278 "" "Go-http-client/1.1"
Oct 02 19:25:59 compute-0 python3.9[314347]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:25:59 compute-0 sudo[314345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:25:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:00 compute-0 sudo[314498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplxokrilodfytdlffguynfpuhyjhsip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433160.1408377-1232-212122977195739/AnsiballZ_stat.py'
Oct 02 19:26:00 compute-0 sudo[314498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:00 compute-0 python3.9[314500]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:26:00 compute-0 sudo[314498]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:01 compute-0 openstack_network_exporter[159337]: ERROR   19:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:26:01 compute-0 openstack_network_exporter[159337]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:01 compute-0 openstack_network_exporter[159337]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:01 compute-0 openstack_network_exporter[159337]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:26:01 compute-0 openstack_network_exporter[159337]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:26:01 compute-0 ceph-mon[191910]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:01 compute-0 podman[314624]: 2025-10-02 19:26:01.679217934 +0000 UTC m=+0.110187910 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:26:01 compute-0 podman[314626]: 2025-10-02 19:26:01.683776554 +0000 UTC m=+0.092172712 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:26:01 compute-0 sudo[314704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlclqhsbcwdjvrvghjxtuarsevaggfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433161.137733-1241-82984925286129/AnsiballZ_file.py'
Oct 02 19:26:01 compute-0 podman[314625]: 2025-10-02 19:26:01.70211429 +0000 UTC m=+0.117184395 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 19:26:01 compute-0 sudo[314704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:01 compute-0 anacron[143512]: Job `cron.daily' started
Oct 02 19:26:01 compute-0 anacron[143512]: Job `cron.daily' terminated
Oct 02 19:26:01 compute-0 python3.9[314711]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
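The .changed file stat'ed at 19:26:00 and deleted here looks like a marker-file handshake: a task that rewrites the ruleset touches /etc/nftables/edpm-rules.nft.changed, a later task stats it to decide whether the new rules still need applying, and this file task clears it once consumed. A hypothetical helper showing the consume-and-clear step:

    from pathlib import Path

    def consume_change_marker(
        marker: str = "/etc/nftables/edpm-rules.nft.changed",
    ) -> bool:
        """Report whether the ruleset was marked changed, clearing the marker."""
        flag = Path(marker)
        if flag.exists():
            flag.unlink()  # mirrors the ansible.builtin.file state=absent task
            return True
        return False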
Oct 02 19:26:01 compute-0 sudo[314704]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:02 compute-0 sudo[314863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klngqfhteofjapgmhjwtegyjmafmrruh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433162.2246342-1249-19213610566965/AnsiballZ_stat.py'
Oct 02 19:26:02 compute-0 sudo[314863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:02 compute-0 python3.9[314865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:02 compute-0 sudo[314863]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:03 compute-0 sudo[314941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnageynniexlvcaaowrqhtigfdrfnyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433162.2246342-1249-19213610566965/AnsiballZ_file.py'
Oct 02 19:26:03 compute-0 sudo[314941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:03 compute-0 ceph-mon[191910]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:26:03
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta']
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:03 compute-0 podman[314944]: 2025-10-02 19:26:03.694229192 +0000 UTC m=+0.119554708 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, maintainer=Red Hat, Inc.)
Oct 02 19:26:03 compute-0 python3.9[314943]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:03 compute-0 sudo[314941]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:26:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:04 compute-0 sudo[315113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilxtpaoeuvqlbpipvyqumciyvytbjue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433163.976466-1261-100246687776849/AnsiballZ_stat.py'
Oct 02 19:26:04 compute-0 sudo[315113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:04 compute-0 python3.9[315115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:04 compute-0 sudo[315113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:05 compute-0 sudo[315191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpgaaipmfyentncesspnujqhkxqbnvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433163.976466-1261-100246687776849/AnsiballZ_file.py'
Oct 02 19:26:05 compute-0 sudo[315191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:05 compute-0 python3.9[315193]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:05 compute-0 sudo[315191]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:05 compute-0 ceph-mon[191910]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:07 compute-0 sudo[315343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvnjoqvvxwxmrpbitaznlrhqoxvthbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433165.692316-1273-182340471749063/AnsiballZ_stat.py'
Oct 02 19:26:07 compute-0 sudo[315343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:07 compute-0 python3.9[315345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:07 compute-0 sudo[315343]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:07 compute-0 ceph-mon[191910]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:07 compute-0 sudo[315421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vthcgaafctjrdsrbkobzsxpmbzudrfzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433165.692316-1273-182340471749063/AnsiballZ_file.py'
Oct 02 19:26:07 compute-0 sudo[315421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:08 compute-0 python3.9[315423]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:08 compute-0 sudo[315421]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:08 compute-0 sshd-session[285955]: Connection closed by 192.168.122.30 port 36364
Oct 02 19:26:08 compute-0 sshd-session[285952]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:26:08 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 02 19:26:08 compute-0 systemd[1]: session-55.scope: Consumed 2min 48.385s CPU time.
Oct 02 19:26:08 compute-0 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Oct 02 19:26:08 compute-0 systemd-logind[793]: Removed session 55.
Oct 02 19:26:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:09 compute-0 ceph-mon[191910]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:11 compute-0 ceph-mon[191910]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:26:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
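The autoscaler arithmetic is readable straight off these rows: pg target = used-ratio × bias × PG budget, then quantized to a power of two (the empty data pools sit at the 32-PG floor). All the non-zero rows are consistent with a 300-PG budget, which would match three OSDs at the default mon_target_pg_per_osd of 100; that budget figure is an inference from the numbers, not logged directly:

    # Worked check of the '.mgr' and 'cephfs.cephfs.meta' rows (values from the log).
    pg_budget = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

    mgr = 7.185749983720779e-06 * 1.0 * pg_budget
    meta = 5.087256625643029e-07 * 4.0 * pg_budget

    print(mgr)   # 0.0021557249951162337 -> quantized to 1, as logged
    print(meta)  # 0.0006104707950771635 -> quantized to 16, as logged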
Oct 02 19:26:13 compute-0 ceph-mon[191910]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:13 compute-0 sshd-session[315448]: Accepted publickey for zuul from 192.168.122.30 port 45648 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:26:13 compute-0 systemd-logind[793]: New session 56 of user zuul.
Oct 02 19:26:13 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 02 19:26:13 compute-0 sshd-session[315448]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:26:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:15 compute-0 python3.9[315601]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:26:15 compute-0 ceph-mon[191910]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:16 compute-0 sudo[315682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:16 compute-0 sudo[315682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:16 compute-0 sudo[315682]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:16 compute-0 sudo[315707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:26:16 compute-0 sudo[315707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:16 compute-0 sudo[315707]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:16 compute-0 sudo[315737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:16 compute-0 sudo[315737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:16 compute-0 sudo[315737]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:16 compute-0 sudo[315782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:26:16 compute-0 sudo[315782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:16 compute-0 sudo[315855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmieeuikivtpkjgwkpehiycsjjqwgar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433176.2545636-34-148921949727802/AnsiballZ_file.py'
Oct 02 19:26:16 compute-0 sudo[315855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:17 compute-0 python3.9[315857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:17 compute-0 sudo[315855]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:17 compute-0 sudo[315782]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev bb3f1353-5a6c-4cd0-a973-95b5c0ce36fe does not exist
Oct 02 19:26:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 988a904a-0e53-4c85-9e19-384b11663d0e does not exist
Oct 02 19:26:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7726fca5-6d20-4272-9645-13ee414fd3c9 does not exist
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:26:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:26:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:26:17 compute-0 sudo[315961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:17 compute-0 sudo[315961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:17 compute-0 sudo[315961]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:17 compute-0 sudo[316013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:26:17 compute-0 sudo[316013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:17 compute-0 sudo[316013]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:17 compute-0 sudo[316058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:17 compute-0 sudo[316058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:17 compute-0 sudo[316058]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:17 compute-0 sudo[316115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcasmwdecutrmitqocxinqibnuacwhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433177.4644868-34-21788769609413/AnsiballZ_file.py'
Oct 02 19:26:17 compute-0 sudo[316115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:18 compute-0 sudo[316116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
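Here cephadm wraps ceph-volume in a short-lived ceph container (the podman create/init/start/died/remove lines that follow are that container's lifecycle) to turn three pre-created logical volumes into OSDs: lvm batch takes the devices exactly as listed (--no-auto), answers prompts itself (--yes), and skips systemd unit creation (--no-systemd) because cephadm manages its own units. The inner command, reduced to its essentials as a sketch; the real call runs inside the container with a config piped via --config-json -:

    import subprocess

    LVS = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]

    # ceph-volume prepares one OSD per listed LV; flags taken from the logged command.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd"],
        check=True,
    )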
Oct 02 19:26:18 compute-0 sudo[316116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:18 compute-0 python3.9[316120]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:18 compute-0 sudo[316115]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:26:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:26:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.597831489 +0000 UTC m=+0.091501864 container create 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.564159927 +0000 UTC m=+0.057830372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:18 compute-0 systemd[1]: Started libpod-conmon-7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372.scope.
Oct 02 19:26:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.718219958 +0000 UTC m=+0.211890313 container init 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.735155136 +0000 UTC m=+0.228825511 container start 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:26:18 compute-0 admiring_carson[316296]: 167 167
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.742436399 +0000 UTC m=+0.236106734 container attach 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 19:26:18 compute-0 systemd[1]: libpod-7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372.scope: Deactivated successfully.
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.743255601 +0000 UTC m=+0.236925936 container died 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b66616163caba9719d3870409881b89a60ec83a5fbaf4e5736564ec3e1389db5-merged.mount: Deactivated successfully.
Oct 02 19:26:18 compute-0 podman[316235]: 2025-10-02 19:26:18.804279437 +0000 UTC m=+0.297949782 container remove 7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_carson, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:26:18 compute-0 systemd[1]: libpod-conmon-7091a7235d36db0b46968b736f9ac7ab8f1e5490950af844dd67a3466c5c3372.scope: Deactivated successfully.
Oct 02 19:26:18 compute-0 sudo[316364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivuddilsjkhynijrmgbjnfzizxiakbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433178.4351692-34-142150610027352/AnsiballZ_file.py'
Oct 02 19:26:18 compute-0 sudo[316364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:19 compute-0 podman[316372]: 2025-10-02 19:26:19.026133643 +0000 UTC m=+0.066892163 container create 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:26:19 compute-0 systemd[1]: Started libpod-conmon-96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd.scope.
Oct 02 19:26:19 compute-0 podman[316372]: 2025-10-02 19:26:18.995512972 +0000 UTC m=+0.036271532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:19 compute-0 python3.9[316367]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:19 compute-0 podman[316372]: 2025-10-02 19:26:19.139842525 +0000 UTC m=+0.180601315 container init 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:26:19 compute-0 podman[316372]: 2025-10-02 19:26:19.154458292 +0000 UTC m=+0.195216802 container start 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:26:19 compute-0 sudo[316364]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:19 compute-0 podman[316372]: 2025-10-02 19:26:19.161163389 +0000 UTC m=+0.201921969 container attach 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:26:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.423439) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179423462, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1845, "num_deletes": 252, "total_data_size": 3130220, "memory_usage": 3178776, "flush_reason": "Manual Compaction"}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179433053, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1769129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11739, "largest_seqno": 13583, "table_properties": {"data_size": 1763104, "index_size": 3036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14953, "raw_average_key_size": 20, "raw_value_size": 1749848, "raw_average_value_size": 2351, "num_data_blocks": 141, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432970, "oldest_key_time": 1759432970, "file_creation_time": 1759433179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9679 microseconds, and 4418 cpu microseconds.
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.433111) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1769129 bytes OK
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.433135) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.436492) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.436515) EVENT_LOG_v1 {"time_micros": 1759433179436508, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.436538) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3122429, prev total WAL file size 3122429, number of live WAL files 2.
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.438666) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1727KB)], [29(7651KB)]
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179438735, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9604158, "oldest_snapshot_seqno": -1}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4017 keys, 7579103 bytes, temperature: kUnknown
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179500849, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7579103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7550490, "index_size": 17495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95490, "raw_average_key_size": 23, "raw_value_size": 7476288, "raw_average_value_size": 1861, "num_data_blocks": 762, "num_entries": 4017, "num_filter_entries": 4017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.501119) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7579103 bytes
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.503467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.4 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4434, records dropped: 417 output_compression: NoCompression
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.503502) EVENT_LOG_v1 {"time_micros": 1759433179503484, "job": 12, "event": "compaction_finished", "compaction_time_micros": 62196, "compaction_time_cpu_micros": 33254, "output_level": 6, "num_output_files": 1, "total_output_size": 7579103, "num_input_records": 4434, "num_output_records": 4017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179504178, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433179506982, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.438494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.507112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.507118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.507120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.507122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:26:19 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:26:19.507124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
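[Note] The rocksdb EVENT_LOG_v1 entries above carry their payload as a JSON object on the same line, so flush and compaction activity can be pulled out of a journal dump mechanically. A minimal sketch of that extraction (illustrative only, not part of the captured run; it assumes the journal text is piped on stdin):

    import json
    import re
    import sys

    # Match the JSON payload of RocksDB EVENT_LOG_v1 lines, including the
    # variants prefixed with "(Original Log Time ...)" as seen above.
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        # e.g. "flush_started", "table_file_creation", "compaction_finished"
        print(ev["time_micros"], ev["event"], ev.get("job"))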
Oct 02 19:26:19 compute-0 ceph-mon[191910]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:19 compute-0 sudo[316549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwkxugmmwcwkfgeainxyeixpmlpsfoqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433179.3844998-34-232200154525155/AnsiballZ_file.py'
Oct 02 19:26:19 compute-0 sudo[316549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:20 compute-0 python3.9[316553]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:26:20 compute-0 sudo[316549]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:20 compute-0 jovial_hodgkin[316388]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:26:20 compute-0 jovial_hodgkin[316388]: --> relative data size: 1.0
Oct 02 19:26:20 compute-0 jovial_hodgkin[316388]: --> All data devices are unavailable
Oct 02 19:26:20 compute-0 systemd[1]: libpod-96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd.scope: Deactivated successfully.
Oct 02 19:26:20 compute-0 systemd[1]: libpod-96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd.scope: Consumed 1.113s CPU time.
Oct 02 19:26:20 compute-0 podman[316570]: 2025-10-02 19:26:20.379496928 +0000 UTC m=+0.032546623 container died 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a479099c346897ad032398e82c0409c7a53f55a57693fd72ba67ae2963078a54-merged.mount: Deactivated successfully.
Oct 02 19:26:20 compute-0 podman[316570]: 2025-10-02 19:26:20.470890448 +0000 UTC m=+0.123940143 container remove 96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:26:20 compute-0 systemd[1]: libpod-conmon-96451c20c3f70323dd6f90c71e46b3caccacce1cb8eac8c3f4eb8e98294e68fd.scope: Deactivated successfully.
Oct 02 19:26:20 compute-0 sudo[316116]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:20 compute-0 sudo[316585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:20 compute-0 sudo[316585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:20 compute-0 sudo[316585]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:20 compute-0 sudo[316610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:26:20 compute-0 sudo[316610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:20 compute-0 sudo[316610]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:20 compute-0 sudo[316659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:20 compute-0 sudo[316659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:20 compute-0 sudo[316659]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:20 compute-0 sudo[316702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:26:20 compute-0 sudo[316702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:21 compute-0 sudo[316872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djjyvuuvlmwxaniqkttvnqhnbmtdnmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433180.9052265-34-94446457662953/AnsiballZ_file.py'
Oct 02 19:26:21 compute-0 sudo[316872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.386988121 +0000 UTC m=+0.056020195 container create 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:26:21 compute-0 systemd[1]: Started libpod-conmon-9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0.scope.
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.366910079 +0000 UTC m=+0.035942173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.491492659 +0000 UTC m=+0.160524753 container init 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.508513049 +0000 UTC m=+0.177545123 container start 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:26:21 compute-0 lucid_ardinghelli[316891]: 167 167
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.519177132 +0000 UTC m=+0.188209236 container attach 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:26:21 compute-0 systemd[1]: libpod-9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0.scope: Deactivated successfully.
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.520441435 +0000 UTC m=+0.189473539 container died 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:26:21 compute-0 python3.9[316876]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:21 compute-0 sudo[316872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cca12992875ce264083bba98d4802d73582874617edf48c5c2a050372ac7e69-merged.mount: Deactivated successfully.
Oct 02 19:26:21 compute-0 ceph-mon[191910]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:21 compute-0 podman[316874]: 2025-10-02 19:26:21.604560963 +0000 UTC m=+0.273593037 container remove 9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:26:21 compute-0 systemd[1]: libpod-conmon-9d180273dd74c6488758faecd89e348b358221aeb36a5b1ae6e03f123774f2f0.scope: Deactivated successfully.
Oct 02 19:26:21 compute-0 podman[316939]: 2025-10-02 19:26:21.857859722 +0000 UTC m=+0.108836304 container create 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 19:26:21 compute-0 podman[316939]: 2025-10-02 19:26:21.797959896 +0000 UTC m=+0.048936508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:21 compute-0 systemd[1]: Started libpod-conmon-2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9.scope.
Oct 02 19:26:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676d3cf4ac31e5d17746dc000b80022bd7fbd7f2b938a1dec5c22c50651cdd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676d3cf4ac31e5d17746dc000b80022bd7fbd7f2b938a1dec5c22c50651cdd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676d3cf4ac31e5d17746dc000b80022bd7fbd7f2b938a1dec5c22c50651cdd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676d3cf4ac31e5d17746dc000b80022bd7fbd7f2b938a1dec5c22c50651cdd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:22 compute-0 podman[316939]: 2025-10-02 19:26:22.062981835 +0000 UTC m=+0.313958497 container init 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:26:22 compute-0 podman[316939]: 2025-10-02 19:26:22.074745326 +0000 UTC m=+0.325721928 container start 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:26:22 compute-0 podman[316939]: 2025-10-02 19:26:22.082208204 +0000 UTC m=+0.333184806 container attach 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:26:22 compute-0 sudo[317087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aptbskllgjozsrjjakrbkptkjskpwudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433181.799955-70-253797429576952/AnsiballZ_stat.py'
Oct 02 19:26:22 compute-0 sudo[317087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:22 compute-0 elastic_neumann[317005]: {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     "0": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "devices": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "/dev/loop3"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             ],
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_name": "ceph_lv0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_size": "21470642176",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "name": "ceph_lv0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "tags": {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_name": "ceph",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.crush_device_class": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.encrypted": "0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_id": "0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.vdo": "0"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             },
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "vg_name": "ceph_vg0"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         }
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     ],
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     "1": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "devices": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "/dev/loop4"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             ],
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_name": "ceph_lv1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_size": "21470642176",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "name": "ceph_lv1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "tags": {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_name": "ceph",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.crush_device_class": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.encrypted": "0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_id": "1",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.vdo": "0"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             },
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "vg_name": "ceph_vg1"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         }
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     ],
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     "2": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "devices": [
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "/dev/loop5"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             ],
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_name": "ceph_lv2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_size": "21470642176",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "name": "ceph_lv2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "tags": {
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.cluster_name": "ceph",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.crush_device_class": "",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.encrypted": "0",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osd_id": "2",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:                 "ceph.vdo": "0"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             },
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "type": "block",
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:             "vg_name": "ceph_vg2"
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:         }
Oct 02 19:26:22 compute-0 elastic_neumann[317005]:     ]
Oct 02 19:26:22 compute-0 elastic_neumann[317005]: }
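[Note] The listing printed above by `ceph-volume lvm list --format json` maps OSD IDs ("0", "1", "2") to lists of logical volumes, with the ceph.* LVM tags repeated in a parsed "tags" dict. A minimal sketch of consuming that structure (illustrative only; `lvm_list.json` is a hypothetical capture of the container output, not a file from this run):

    import json

    # Parse a captured copy of `ceph-volume lvm list --format json`.
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    # Top-level keys are OSD IDs as strings; each value is a list of
    # logical volumes backing that OSD.
    for osd_id, volumes in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"on {','.join(vol['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")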
Oct 02 19:26:22 compute-0 systemd[1]: libpod-2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9.scope: Deactivated successfully.
Oct 02 19:26:22 compute-0 conmon[317005]: conmon 2c27324f4e166cd7ff05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9.scope/container/memory.events
Oct 02 19:26:22 compute-0 podman[316939]: 2025-10-02 19:26:22.960197578 +0000 UTC m=+1.211174160 container died 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d676d3cf4ac31e5d17746dc000b80022bd7fbd7f2b938a1dec5c22c50651cdd8-merged.mount: Deactivated successfully.
Oct 02 19:26:23 compute-0 podman[316939]: 2025-10-02 19:26:23.089788411 +0000 UTC m=+1.340764973 container remove 2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:26:23 compute-0 systemd[1]: libpod-conmon-2c27324f4e166cd7ff05c77ee184472dfd6b97b26b4c5b3281768ad20e16c2c9.scope: Deactivated successfully.
Oct 02 19:26:23 compute-0 sudo[316702]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:23 compute-0 python3.9[317089]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:26:23 compute-0 sudo[317087]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:23 compute-0 sudo[317101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:23 compute-0 sudo[317101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:23 compute-0 sudo[317101]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:23 compute-0 sudo[317129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:26:23 compute-0 sudo[317129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:23 compute-0 sudo[317129]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:23 compute-0 sudo[317177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:23 compute-0 sudo[317177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:23 compute-0 sudo[317177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:23 compute-0 sudo[317225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:26:23 compute-0 sudo[317225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:23 compute-0 ceph-mon[191910]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.099982355 +0000 UTC m=+0.063204154 container create a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:26:24 compute-0 systemd[1]: Started libpod-conmon-a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6.scope.
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.077765537 +0000 UTC m=+0.040987376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.20474242 +0000 UTC m=+0.167964229 container init a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.22136342 +0000 UTC m=+0.184585249 container start a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.227513933 +0000 UTC m=+0.190735752 container attach a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:26:24 compute-0 recursing_hodgkin[317357]: 167 167
Oct 02 19:26:24 compute-0 systemd[1]: libpod-a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6.scope: Deactivated successfully.
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.229245219 +0000 UTC m=+0.192467048 container died a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:26:24 compute-0 podman[317355]: 2025-10-02 19:26:24.246436474 +0000 UTC m=+0.091072203 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 02 19:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef5332dff8f4d0fad0428f78d5ec6cb28f8b621b657ba7a109f54eecfbea1b25-merged.mount: Deactivated successfully.
Oct 02 19:26:24 compute-0 podman[317356]: 2025-10-02 19:26:24.278800441 +0000 UTC m=+0.117234816 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
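The two health_status events above are emitted by podman's healthcheck timers: the embedded config_data shows each container's test command (for example '/openstack/healthcheck compute') bind-mounted at /openstack inside the container. A minimal sketch of re-running such a check by hand, assuming only that the podman CLI is on PATH and that the container name matches the log; the helper itself is hypothetical:

import subprocess
import sys

def healthcheck(container: str) -> bool:
    # `podman healthcheck run` executes the container's configured test
    # command (here "/openstack/healthcheck compute") and exits 0 when the
    # check passes -- the same result the journal records as
    # health_status=healthy.
    result = subprocess.run(["podman", "healthcheck", "run", container])
    return result.returncode == 0

if __name__ == "__main__":
    name = sys.argv[1] if len(sys.argv) > 1 else "ceilometer_agent_compute"
    print(f"{name}: {'healthy' if healthcheck(name) else 'unhealthy'}")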
Oct 02 19:26:24 compute-0 podman[317329]: 2025-10-02 19:26:24.293256864 +0000 UTC m=+0.256478653 container remove a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:26:24 compute-0 systemd[1]: libpod-conmon-a91c410ac8cf6615483a1d5e6deb2e57e208b915f46580d1d9884ec42454b4f6.scope: Deactivated successfully.
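The recursing_hodgkin container above is created, started, prints a single line ("167 167"), and is reaped within roughly 200 ms. That is the throwaway-container probe pattern cephadm-style tooling uses to read facts out of the ceph image; 167:167 is the ceph UID:GID in Red Hat-based ceph images. A sketch of the same probe, assuming podman is available; the stat invocation is an assumption, since the journal records only the container's output, not its command line:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Run a throwaway container from the same image the log shows and read one
# line of output, mirroring recursing_hodgkin's "167 167".
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()

uid, gid = (int(x) for x in out.split())
print(f"ceph runs as {uid}:{gid}")  # expected 167:167 on this image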
Oct 02 19:26:24 compute-0 sudo[317464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlvsaotrzimjmcvenivbjofkdmpbkzae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433183.5036178-78-8930657682264/AnsiballZ_systemd.py'
Oct 02 19:26:24 compute-0 sudo[317464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.439 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.439 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.439 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.440 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.442 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.443 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.444 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.444 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.445 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3611370>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.452 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.453 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.454 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:26:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:26:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
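The DEBUG cycle above (manager.py lines 253-321) traces the polling manager's flow: it warns when pollsters outnumber worker threads, registers every pollster against one shared ThreadPoolExecutor together with shared cache, pollster-history, and discovery-cache dicts, runs the [local_instances] discovery once per cycle, and skips each pollster because discovery returns no resources (no instances are running on this compute node yet). A minimal sketch of that dispatch pattern, using hypothetical names rather than ceilometer's actual classes:

from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # On this node discovery finds nothing, hence every
    # "Skip pollster ..., no resources found this cycle" line.
    return []

def run_pollster(name, discovery, history, discovery_cache):
    # Discovery results are cached per cycle and shared by all pollsters,
    # matching the discovery cache [{'local_instances': []}] in the log.
    if "local_instances" not in discovery_cache:
        discovery_cache["local_instances"] = discovery()
    resources = discovery_cache["local_instances"]
    history[name] = list(resources)  # pollster history, as logged
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return
    # ... sample the discovered resources here ...

pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
             "network.incoming.bytes", "network.outgoing.packets"]
history, discovery_cache = {}, {}

# A single worker thread for many pollsters is what triggers the
# "bigger than the number of worker threads" warning at the top of the cycle.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(run_pollster, name, discover_local_instances,
                               history, discovery_cache)
               for name in pollsters]
    for future in futures:
        future.result()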
Oct 02 19:26:24 compute-0 podman[317472]: 2025-10-02 19:26:24.562849064 +0000 UTC m=+0.106356197 container create 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 19:26:24 compute-0 podman[317472]: 2025-10-02 19:26:24.517553295 +0000 UTC m=+0.061060498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:26:24 compute-0 systemd[1]: Started libpod-conmon-71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4.scope.
Oct 02 19:26:24 compute-0 python3.9[317466]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
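The AnsiballZ_systemd payload executed under sudo a few lines earlier decodes to the ansible.builtin.systemd invocation above: stop iscsid.socket and disable it. A rough imperative equivalent, sketched with subprocess; the real module additionally checks the current unit state for idempotence and honors the daemon_reload/masked options shown in the log:

import subprocess

UNIT = "iscsid.socket"

# state=stopped, then enabled=False, as in the module arguments above.
# Needs root, just like the sudo'd ansible run recorded in the journal.
for cmd in (["systemctl", "stop", UNIT],
            ["systemctl", "disable", UNIT]):
    subprocess.run(cmd, check=True)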
Oct 02 19:26:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135e4dedac17269b70cecf7a0d3d5034a41e4681a47e14a0fe6d9e81bed4ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135e4dedac17269b70cecf7a0d3d5034a41e4681a47e14a0fe6d9e81bed4ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135e4dedac17269b70cecf7a0d3d5034a41e4681a47e14a0fe6d9e81bed4ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77135e4dedac17269b70cecf7a0d3d5034a41e4681a47e14a0fe6d9e81bed4ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
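The four xfs notices above indicate these overlay-backed mounts sit on XFS formatted without the bigtime feature, so their inode timestamps cap out at the 32-bit signed time_t limit the kernel prints as 0x7fffffff. Converting that limit to a date shows where the 2038 in the message comes from:

from datetime import datetime, timezone

limit = 0x7FFFFFFF  # 2147483647, the 32-bit signed time_t ceiling
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00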
Oct 02 19:26:24 compute-0 podman[317472]: 2025-10-02 19:26:24.7193656 +0000 UTC m=+0.262872803 container init 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:26:24 compute-0 podman[317472]: 2025-10-02 19:26:24.742837992 +0000 UTC m=+0.286345145 container start 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:26:24 compute-0 podman[317472]: 2025-10-02 19:26:24.749345674 +0000 UTC m=+0.292852897 container attach 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:26:24 compute-0 systemd[1]: Reloading.
Oct 02 19:26:25 compute-0 systemd-sysv-generator[317526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Oct 02 19:26:25 compute-0 systemd-rc-local-generator[317521]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:26:25 compute-0 sudo[317464]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:25 compute-0 ceph-mon[191910]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]: {
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_id": 1,
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "type": "bluestore"
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     },
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_id": 2,
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "type": "bluestore"
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     },
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_id": 0,
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:         "type": "bluestore"
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]:     }
Oct 02 19:26:25 compute-0 fervent_lumiere[317487]: }
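The JSON block printed by the fervent_lumiere container maps each OSD uuid to its ceph_fsid, backing LV device, osd_id, and objectstore type, covering the three bluestore OSDs on ceph_vg0-ceph_vg2. A minimal sketch for parsing it, assuming the container's stdout was captured to a file (osd_inventory.json is a hypothetical name):

    import json

    # Hypothetical capture of the JSON the fervent_lumiere container printed.
    osds = json.load(open("osd_inventory.json"))
    for uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} uuid={uuid} "
              f"dev={info['device']} type={info['type']}")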
Oct 02 19:26:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:26 compute-0 systemd[1]: libpod-71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4.scope: Deactivated successfully.
Oct 02 19:26:26 compute-0 systemd[1]: libpod-71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4.scope: Consumed 1.261s CPU time.
Oct 02 19:26:26 compute-0 podman[317472]: 2025-10-02 19:26:26.023191892 +0000 UTC m=+1.566699025 container died 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-77135e4dedac17269b70cecf7a0d3d5034a41e4681a47e14a0fe6d9e81bed4ca-merged.mount: Deactivated successfully.
Oct 02 19:26:26 compute-0 podman[317472]: 2025-10-02 19:26:26.109589761 +0000 UTC m=+1.653096894 container remove 71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:26:26 compute-0 systemd[1]: libpod-conmon-71ce0220d1ec3c1b236d08d444eb6369e0b11f5d2f1e8af2bae42458771254a4.scope: Deactivated successfully.
Oct 02 19:26:26 compute-0 sudo[317225]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:26:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:26:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 67976ddd-b4ed-4014-881f-afbf0faa99a7 does not exist
Oct 02 19:26:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2593be90-d610-4dd0-809e-62ad609734ad does not exist
Oct 02 19:26:26 compute-0 sudo[317734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihzyezpktmesmfczxqlcvdaptowiwej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433185.6138527-86-36034685813561/AnsiballZ_service_facts.py'
Oct 02 19:26:26 compute-0 sudo[317734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:26 compute-0 sudo[317715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:26:26 compute-0 sudo[317715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:26 compute-0 sudo[317715]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:26 compute-0 sudo[317749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:26:26 compute-0 sudo[317749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:26:26 compute-0 sudo[317749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:26 compute-0 python3.9[317746]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:26:26 compute-0 network[317790]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:26:26 compute-0 network[317791]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:26:26 compute-0 network[317792]: It is advised to switch to 'NetworkManager' instead for network management.
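The three network[...] lines are the deprecation banner from the SysV 'network' initscript that systemd-sysv-generator wrapped into a compatibility unit earlier in this section. To see exactly what unit systemd generated for it, something like the following works (a sketch; `systemctl cat` prints the unit file systemd is actually using):

    import subprocess

    # Show the compatibility unit systemd-sysv-generator produced for
    # /etc/rc.d/init.d/network (see the generator messages above).
    out = subprocess.run(["systemctl", "cat", "network.service"],
                         capture_output=True, text=True)
    print(out.stdout)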
Oct 02 19:26:27 compute-0 ceph-mon[191910]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:26:27 compute-0 podman[317798]: 2025-10-02 19:26:27.629816555 +0000 UTC m=+0.140460341 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct 02 19:26:27 compute-0 podman[317800]: 2025-10-02 19:26:27.640176159 +0000 UTC m=+0.151139714 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
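The health_status=healthy events above come from podman's built-in healthchecks: each container's config_data carries a 'healthcheck' test that podman runs on a timer, and the outcome is what these journal entries report. The same check can be driven by hand; `podman healthcheck run` exits 0 when the test passes (a minimal sketch):

    import subprocess

    # Manually exercise the healthchecks podman runs on a timer for the
    # two containers logged above.
    for name in ("openstack_network_exporter", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")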
Oct 02 19:26:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:29 compute-0 ceph-mon[191910]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:29 compute-0 podman[157186]: time="2025-10-02T19:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:26:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:26:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7289 "" "Go-http-client/1.1"
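Those two GET lines are podman's REST service answering a scraper over its unix socket (/run/podman/podman.sock, per the podman_exporter config later in this log). The same libpod endpoint can be queried directly; a minimal sketch, assuming the socket path and API version seen here:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])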
Oct 02 19:26:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:31 compute-0 ceph-mon[191910]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:31 compute-0 openstack_network_exporter[159337]: ERROR   19:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:26:31 compute-0 openstack_network_exporter[159337]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:31 compute-0 openstack_network_exporter[159337]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:31 compute-0 openstack_network_exporter[159337]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:26:31 compute-0 openstack_network_exporter[159337]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:26:31 compute-0 sudo[317734]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:26:32.270 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:26:32.271 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:26:32.271 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:26:32 compute-0 sudo[318147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlbveohuoglqjjywjpbxcwjphbyubrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433191.7903013-94-101913000640465/AnsiballZ_systemd.py'
Oct 02 19:26:32 compute-0 podman[318085]: 2025-10-02 19:26:32.306746855 +0000 UTC m=+0.088646849 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:26:32 compute-0 sudo[318147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:32 compute-0 podman[318086]: 2025-10-02 19:26:32.317775457 +0000 UTC m=+0.093665461 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:26:32 compute-0 podman[318084]: 2025-10-02 19:26:32.318836575 +0000 UTC m=+0.107330813 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:26:32 compute-0 python3.9[318165]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:26:32 compute-0 systemd[1]: Reloading.
Oct 02 19:26:32 compute-0 systemd-rc-local-generator[318197]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:26:32 compute-0 systemd-sysv-generator[318202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:26:33 compute-0 sudo[318147]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:33 compute-0 ceph-mon[191910]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:26:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:34 compute-0 podman[318331]: 2025-10-02 19:26:34.046246817 +0000 UTC m=+0.104222881 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:26:34 compute-0 python3.9[318374]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:26:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:35 compute-0 ceph-mon[191910]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:36 compute-0 sudo[318526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmahkwslbzbzstxqvplpfuggtzteypjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433194.5725427-111-130295502122136/AnsiballZ_podman_container.py'
Oct 02 19:26:36 compute-0 sudo[318526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:36 compute-0 python3.9[318528]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:26:37 compute-0 ceph-mon[191910]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:37 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:26:37 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:26:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:38 compute-0 podman[318541]: 2025-10-02 19:26:38.512695982 +0000 UTC m=+1.984651285 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:26:38 compute-0 podman[318595]: 2025-10-02 19:26:38.749528245 +0000 UTC m=+0.085847335 container create 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.7917] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 02 19:26:38 compute-0 podman[318595]: 2025-10-02 19:26:38.705498219 +0000 UTC m=+0.041817369 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:26:38 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 19:26:38 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:26:38 compute-0 kernel: veth0: entered allmulticast mode
Oct 02 19:26:38 compute-0 kernel: veth0: entered promiscuous mode
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.8199] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 02 19:26:38 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 19:26:38 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.8229] device (veth0): carrier: link connected
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.8239] device (podman0): carrier: link connected
Oct 02 19:26:38 compute-0 systemd-udevd[318624]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:26:38 compute-0 systemd-udevd[318622]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.8895] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.8954] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9004] device (podman0): Activation: starting connection 'podman0' (db3eeb4e-7266-45ec-9aab-62095daf3083)
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9010] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9045] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9068] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9091] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 19:26:38 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9439] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9457] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 19:26:38 compute-0 NetworkManager[44968]: <info>  [1759433198.9515] device (podman0): Activation: successful, device activated.
Oct 02 19:26:38 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 02 19:26:39 compute-0 ceph-mon[191910]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:39 compute-0 systemd[1]: Started libpod-conmon-84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6.scope.
Oct 02 19:26:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:26:39 compute-0 podman[318595]: 2025-10-02 19:26:39.456249663 +0000 UTC m=+0.792568743 container init 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:26:39 compute-0 podman[318595]: 2025-10-02 19:26:39.476089758 +0000 UTC m=+0.812408808 container start 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:26:39 compute-0 podman[318595]: 2025-10-02 19:26:39.48031324 +0000 UTC m=+0.816632320 container attach 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:26:39 compute-0 iscsid_config[318751]: iqn.1994-05.com.redhat:30b5d3c3591
Oct 02 19:26:39 compute-0 systemd[1]: libpod-84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6.scope: Deactivated successfully.
Oct 02 19:26:39 compute-0 conmon[318751]: conmon 84ee6c454ec62543de80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6.scope/container/memory.events
Oct 02 19:26:39 compute-0 podman[318595]: 2025-10-02 19:26:39.488255101 +0000 UTC m=+0.824574161 container died 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:26:39 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:26:39 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 02 19:26:39 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 02 19:26:39 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:26:39 compute-0 NetworkManager[44968]: <info>  [1759433199.5701] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:26:39 compute-0 systemd[1]: run-netns-netns\x2d6aea4b7e\x2d3ae5\x2d760c\x2d0afd\x2dbafff7ca5a10.mount: Deactivated successfully.
Oct 02 19:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6-userdata-shm.mount: Deactivated successfully.
Oct 02 19:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e6c79e20c3560be8dca96644a4960b307ad07e1e2015206c0fe88548a51388-merged.mount: Deactivated successfully.
Oct 02 19:26:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:40 compute-0 podman[318595]: 2025-10-02 19:26:40.029571518 +0000 UTC m=+1.365890608 container remove 84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:26:40 compute-0 systemd[1]: libpod-conmon-84ee6c454ec62543de80e3ce87cc115391d56939c8f97ac7a5a336ed99b41ec6.scope: Deactivated successfully.
Oct 02 19:26:40 compute-0 python3.9[318528]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 02 19:26:40 compute-0 python3.9[318528]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 02 19:26:40 compute-0 sudo[318526]: pam_unix(sudo:session): session closed for user root
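The PODMAN-CONTAINER-DEBUG lines show what the podman_container module actually ran: a one-shot, self-removing container that prints a fresh initiator IQN (iqn.1994-05.com.redhat:30b5d3c3591 above). The "Error generating systemd" that follows looks expected here: with rm=True the container is gone by the time the module asks podman for a unit, hence "no such container". The equivalent run, reduced to a sketch:

    import subprocess

    # Same one-shot container the Ansible module logged; --rm removes it
    # as soon as /usr/sbin/iscsi-iname has printed the IQN.
    cmd = [
        "podman", "run", "--name", "iscsid_config", "--rm", "--tty",
        "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified",
        "/usr/sbin/iscsi-iname",
    ]
    iqn = subprocess.run(cmd, capture_output=True, text=True,
                         check=True).stdout.strip()
    print(iqn)  # e.g. iqn.1994-05.com.redhat:30b5d3c3591 (from the log)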
Oct 02 19:26:41 compute-0 sudo[318985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nokjdgevkwmswxyuxxipxarvlnvtwsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433200.5279915-119-75961771760716/AnsiballZ_stat.py'
Oct 02 19:26:41 compute-0 sudo[318985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:41 compute-0 ceph-mon[191910]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:41 compute-0 python3.9[318987]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:41 compute-0 sudo[318985]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:42 compute-0 sudo[319108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hibibzujyexzjlvzbbdaoujvfluuwctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433200.5279915-119-75961771760716/AnsiballZ_copy.py'
Oct 02 19:26:42 compute-0 sudo[319108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:42 compute-0 python3.9[319110]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433200.5279915-119-75961771760716/.source.iscsi _original_basename=.j6ymhreo follow=False checksum=9b0a4af00114a3de213b16c58f7a9e163dd95c38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:42 compute-0 sudo[319108]: pam_unix(sudo:session): session closed for user root
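The copy task above records checksum=9b0a4af0... for the deployed /etc/iscsi/initiatorname.iscsi; that is the SHA-1 Ansible uses to decide whether the file changed. Verifying it locally is a few lines of hashlib (a sketch):

    import hashlib

    # Recompute the SHA-1 the copy task logged for initiatorname.iscsi.
    data = open("/etc/iscsi/initiatorname.iscsi", "rb").read()
    print(hashlib.sha1(data).hexdigest()
          == "9b0a4af00114a3de213b16c58f7a9e163dd95c38")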
Oct 02 19:26:43 compute-0 ceph-mon[191910]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:43 compute-0 sudo[319260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euepspifsqceimwbdfakqpkiwbteomkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433202.7820678-134-244082587173623/AnsiballZ_file.py'
Oct 02 19:26:43 compute-0 sudo[319260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:43 compute-0 python3.9[319262]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:43 compute-0 sudo[319260]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:44 compute-0 python3.9[319412]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:26:45 compute-0 ceph-mon[191910]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:45 compute-0 sudo[319564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kylwkfqggkanxisqmwitaydkuljuyugb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433204.8486674-151-201646917417090/AnsiballZ_lineinfile.py'
Oct 02 19:26:45 compute-0 sudo[319564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:45 compute-0 python3.9[319566]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:45 compute-0 sudo[319564]: pam_unix(sudo:session): session closed for user root
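The lineinfile task above pins the CHAP digest list in iscsid.conf: if a line matching ^node.session.auth.chap_algs exists it is replaced, otherwise the new line goes in after the commented template (insertafter=^#node.session.auth.chap.algs). A rough sketch of that logic, without Ansible's backup and validation extras:

    import re

    # Approximation of the lineinfile semantics logged above.
    path = "/etc/iscsi/iscsid.conf"
    want = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    lines = open(path).read().splitlines()
    for i, l in enumerate(lines):
        if re.match(r"node\.session\.auth\.chap_algs", l):
            lines[i] = want                 # regexp matched: replace in place
            break
    else:
        for i, l in enumerate(lines):
            if re.match(r"#node\.session\.auth\.chap\.algs", l):
                lines.insert(i + 1, want)   # insertafter the template line
                break
        else:
            lines.append(want)
    open(path, "w").write("\n".join(lines) + "\n")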
Oct 02 19:26:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:47 compute-0 sudo[319716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bifbvnzldyoderozjglingaqpmqstrow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433206.7545187-160-78141614274763/AnsiballZ_file.py'
Oct 02 19:26:47 compute-0 sudo[319716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:47 compute-0 ceph-mon[191910]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:47 compute-0 python3.9[319718]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:47 compute-0 sudo[319716]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:48 compute-0 sudo[319868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buoabuitonxfudhtgvedintbctfvpzyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433207.8102093-168-71247921789992/AnsiballZ_stat.py'
Oct 02 19:26:48 compute-0 sudo[319868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:48 compute-0 python3.9[319870]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:48 compute-0 sudo[319868]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:49 compute-0 ceph-mon[191910]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:49 compute-0 sudo[319946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpvkksetfeevecmhdtoakidduqdykmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433207.8102093-168-71247921789992/AnsiballZ_file.py'
Oct 02 19:26:49 compute-0 sudo[319946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 19:26:49 compute-0 python3.9[319948]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:49 compute-0 sudo[319946]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:50 compute-0 sudo[320099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgyjqxuzlpotzdlourugfvblvnqbjart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433209.8942645-168-276380147290254/AnsiballZ_stat.py'
Oct 02 19:26:50 compute-0 sudo[320099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:50 compute-0 python3.9[320101]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:50 compute-0 sudo[320099]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:50 compute-0 sudo[320177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlxbyvzxjwwohipbohicjmnbqqtgfkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433209.8942645-168-276380147290254/AnsiballZ_file.py'
Oct 02 19:26:50 compute-0 sudo[320177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:51 compute-0 python3.9[320179]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:26:51 compute-0 sudo[320177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:51 compute-0 ceph-mon[191910]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:52 compute-0 sudo[320329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galkoxqbwkbtrtxzbkcvutirlzxsqsyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433211.4973488-191-150056730459746/AnsiballZ_file.py'
Oct 02 19:26:52 compute-0 sudo[320329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:52 compute-0 python3.9[320331]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:52 compute-0 sudo[320329]: pam_unix(sudo:session): session closed for user root
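One detail worth noting in the file task above: mode=420 is the decimal form of octal 0644, the usual symptom of an unquoted mode in YAML being parsed as a plain integer, so the applied permission bits are still the familiar rw-r--r-- pattern. Quick check:

    # 420 (decimal) == 0o644 (octal): same bits, two spellings.
    print(oct(420))     # 0o644
    assert 420 == 0o644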
Oct 02 19:26:52 compute-0 sudo[320481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhurcynztdzztrmtlcgkgrrlfyujtjms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433212.4892147-199-8515549065229/AnsiballZ_stat.py'
Oct 02 19:26:52 compute-0 sudo[320481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:53 compute-0 python3.9[320483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:53 compute-0 sudo[320481]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:53 compute-0 ceph-mon[191910]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:53 compute-0 sudo[320559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnmjbqquhqaqgavdgtezjuygfuncorn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433212.4892147-199-8515549065229/AnsiballZ_file.py'
Oct 02 19:26:53 compute-0 sudo[320559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:53 compute-0 python3.9[320561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:53 compute-0 sudo[320559]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:54 compute-0 podman[320669]: 2025-10-02 19:26:54.706553212 +0000 UTC m=+0.120047650 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:26:54 compute-0 podman[320664]: 2025-10-02 19:26:54.707369424 +0000 UTC m=+0.120226085 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:26:54 compute-0 sudo[320751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrasydgldcryteecxwjvacxkwsmkpmvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433214.2498739-211-28473229267721/AnsiballZ_stat.py'
Oct 02 19:26:54 compute-0 sudo[320751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:54 compute-0 python3.9[320753]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:54 compute-0 sudo[320751]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:55 compute-0 sudo[320829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxzlrjpfzqexilcaidnjqsmsjmbsadra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433214.2498739-211-28473229267721/AnsiballZ_file.py'
Oct 02 19:26:55 compute-0 sudo[320829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:55 compute-0 ceph-mon[191910]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:55 compute-0 python3.9[320831]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:55 compute-0 sudo[320829]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:56 compute-0 sudo[320981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcjpbmxyzjsljncmwldjhblqricpphcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433215.8644779-223-44669690088016/AnsiballZ_systemd.py'
Oct 02 19:26:56 compute-0 sudo[320981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:56 compute-0 python3.9[320983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:26:56 compute-0 systemd[1]: Reloading.
Oct 02 19:26:56 compute-0 systemd-sysv-generator[321013]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:26:56 compute-0 systemd-rc-local-generator[321009]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:26:57 compute-0 ceph-mon[191910]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:57 compute-0 sudo[320981]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:58 compute-0 sudo[321200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnawthqcwfixfgnelcjzxytpyijzroqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433217.6949978-231-46546321527148/AnsiballZ_stat.py'
Oct 02 19:26:58 compute-0 sudo[321200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:58 compute-0 podman[321144]: 2025-10-02 19:26:58.271012438 +0000 UTC m=+0.127896598 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, config_id=edpm, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:26:58 compute-0 podman[321145]: 2025-10-02 19:26:58.302151523 +0000 UTC m=+0.146966494 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct 02 19:26:58 compute-0 python3.9[321208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:26:58 compute-0 sudo[321200]: pam_unix(sudo:session): session closed for user root
Oct 02 19:26:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:26:59 compute-0 ceph-mon[191910]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:26:59 compute-0 sudo[321290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oegquqiapsupxcfqzdhedshywrvbgrrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433217.6949978-231-46546321527148/AnsiballZ_file.py'
Oct 02 19:26:59 compute-0 sudo[321290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:26:59 compute-0 podman[157186]: time="2025-10-02T19:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:26:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35730 "" "Go-http-client/1.1"
Oct 02 19:26:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7277 "" "Go-http-client/1.1"
Oct 02 19:26:59 compute-0 python3.9[321292]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:26:59 compute-0 sudo[321290]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:00 compute-0 sudo[321442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjgxtnesassancpurfefjhoadovcfje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433220.1936886-243-17402660236491/AnsiballZ_stat.py'
Oct 02 19:27:00 compute-0 sudo[321442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:00 compute-0 python3.9[321444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:27:00 compute-0 sudo[321442]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:01 compute-0 openstack_network_exporter[159337]: ERROR   19:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:27:01 compute-0 openstack_network_exporter[159337]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:01 compute-0 openstack_network_exporter[159337]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:01 compute-0 openstack_network_exporter[159337]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:27:01 compute-0 openstack_network_exporter[159337]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:27:01 compute-0 ceph-mon[191910]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:01 compute-0 sudo[321520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eriyqwexmgwouhofdaptdyykekrniewa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433220.1936886-243-17402660236491/AnsiballZ_file.py'
Oct 02 19:27:01 compute-0 sudo[321520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:02 compute-0 python3.9[321522]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:02 compute-0 sudo[321520]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:02 compute-0 podman[321601]: 2025-10-02 19:27:02.662477558 +0000 UTC m=+0.083743899 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:27:02 compute-0 podman[321599]: 2025-10-02 19:27:02.69162496 +0000 UTC m=+0.119131576 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:27:02 compute-0 podman[321600]: 2025-10-02 19:27:02.698515603 +0000 UTC m=+0.110867448 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:27:02 compute-0 sudo[321734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfwopdzdmuwdyakfikjsqgfqdhlahovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433222.4662967-255-223330126772131/AnsiballZ_systemd.py'
Oct 02 19:27:02 compute-0 sudo[321734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:03 compute-0 python3.9[321736]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:27:03 compute-0 systemd[1]: Reloading.
Oct 02 19:27:03 compute-0 systemd-rc-local-generator[321767]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:27:03 compute-0 systemd-sysv-generator[321770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:27:03 compute-0 ceph-mon[191910]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:27:03
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:27:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:27:03 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:27:03 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:27:03 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:27:03 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:27:03 compute-0 sudo[321734]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:04 compute-0 sudo[321946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnimcpgperclnbvyclduokogbdvmthlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433224.221542-265-82776324860641/AnsiballZ_file.py'
Oct 02 19:27:04 compute-0 sudo[321946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:04 compute-0 podman[321902]: 2025-10-02 19:27:04.75919867 +0000 UTC m=+0.177274246 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, version=9.4, name=ubi9, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Oct 02 19:27:04 compute-0 python3.9[321948]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:27:04 compute-0 sudo[321946]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:05 compute-0 ceph-mon[191910]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:05 compute-0 sudo[322101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojthieaidughfgsnjexbvbfkcmirtmrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433225.1906285-273-78328293080614/AnsiballZ_stat.py'
Oct 02 19:27:05 compute-0 sudo[322101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:06 compute-0 python3.9[322103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:27:06 compute-0 sudo[322101]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:06 compute-0 sudo[322224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsauurmzkbmknbthoeoupjufmfnlatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433225.1906285-273-78328293080614/AnsiballZ_copy.py'
Oct 02 19:27:06 compute-0 sudo[322224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:06 compute-0 python3.9[322226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433225.1906285-273-78328293080614/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:27:06 compute-0 sudo[322224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:07 compute-0 ceph-mon[191910]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:07 compute-0 sudo[322376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umyopvmeabvaugdlhdjqurucghhzycih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433227.3451495-290-24228932440871/AnsiballZ_file.py'
Oct 02 19:27:07 compute-0 sudo[322376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:08 compute-0 python3.9[322378]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:27:08 compute-0 sudo[322376]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:08 compute-0 sudo[322528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjezqrljhhtknttoooszvxgumfmtqrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433228.403793-298-33058187234789/AnsiballZ_stat.py'
Oct 02 19:27:08 compute-0 sudo[322528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:09 compute-0 python3.9[322530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:27:09 compute-0 sudo[322528]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:09 compute-0 ceph-mon[191910]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:09 compute-0 sudo[322651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbnbgxrzkzldifborxymafaftbtflymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433228.403793-298-33058187234789/AnsiballZ_copy.py'
Oct 02 19:27:09 compute-0 sudo[322651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:09 compute-0 python3.9[322653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433228.403793-298-33058187234789/.source.json _original_basename=.d8xiji8b follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:09 compute-0 sudo[322651]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:10 compute-0 sudo[322803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abpanuylqmtpowiqgdyrvrwthlwjjgtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433230.2561681-313-31226908532754/AnsiballZ_file.py'
Oct 02 19:27:10 compute-0 sudo[322803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:11 compute-0 python3.9[322805]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:11 compute-0 sudo[322803]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:11 compute-0 ceph-mon[191910]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:27:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:27:12 compute-0 sudo[322955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjyywuqgwiyxobxdkqslkuohbiouejc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433231.9771764-321-24603578977245/AnsiballZ_stat.py'
Oct 02 19:27:12 compute-0 sudo[322955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:12 compute-0 sudo[322955]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:13 compute-0 sudo[323078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbkfsnyyomftufzidegdvqdnchinzvoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433231.9771764-321-24603578977245/AnsiballZ_copy.py'
Oct 02 19:27:13 compute-0 sudo[323078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:13 compute-0 ceph-mon[191910]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:13 compute-0 sudo[323078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:15 compute-0 sudo[323230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deylywchzyxzxykixjkmbjlqujazfpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433234.8231647-338-174658304322879/AnsiballZ_container_config_data.py'
Oct 02 19:27:15 compute-0 sudo[323230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:15 compute-0 ceph-mon[191910]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:15 compute-0 python3.9[323232]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 02 19:27:15 compute-0 sudo[323230]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:16 compute-0 sudo[323382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgjcjuhaxcagxzdnqjyxnqwmdppjjnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433236.066639-347-143228485023718/AnsiballZ_container_config_hash.py'
Oct 02 19:27:16 compute-0 sudo[323382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:16 compute-0 python3.9[323384]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:27:17 compute-0 sudo[323382]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:17 compute-0 ceph-mon[191910]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:18 compute-0 sudo[323534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfjjuycduuxzvucspnvbppffvxtkdqld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433237.3196187-356-192365529572531/AnsiballZ_podman_container_info.py'
Oct 02 19:27:18 compute-0 sudo[323534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:18 compute-0 python3.9[323536]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:27:18 compute-0 sudo[323534]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:19 compute-0 ceph-mon[191910]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:20 compute-0 sudo[323712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugrbihlicjgmbirdbkiteswlznelfbuz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433239.6671653-369-201856842508380/AnsiballZ_edpm_container_manage.py'
Oct 02 19:27:20 compute-0 sudo[323712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:20 compute-0 python3[323714]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:27:21 compute-0 podman[323748]: 2025-10-02 19:27:21.043679553 +0000 UTC m=+0.103833551 container create a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 02 19:27:21 compute-0 podman[323748]: 2025-10-02 19:27:20.995121537 +0000 UTC m=+0.055275585 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:27:21 compute-0 python3[323714]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:27:21 compute-0 sudo[323712]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:21 compute-0 ceph-mon[191910]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:22 compute-0 sudo[323935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlaizxqzsatumoipimthqjomvomadikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433241.5157228-377-101510363842403/AnsiballZ_stat.py'
Oct 02 19:27:22 compute-0 sudo[323935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:22 compute-0 python3.9[323937]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:27:22 compute-0 sudo[323935]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:23 compute-0 sudo[324089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfhjmdvjtjkxxrzndemyckvopxobauq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433242.6408384-386-63698984987993/AnsiballZ_file.py'
Oct 02 19:27:23 compute-0 sudo[324089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:23 compute-0 python3.9[324091]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:23 compute-0 sudo[324089]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:23 compute-0 ceph-mon[191910]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:23 compute-0 sudo[324165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evllmyuhnjrarxslygtssyjsmrqmtwyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433242.6408384-386-63698984987993/AnsiballZ_stat.py'
Oct 02 19:27:23 compute-0 sudo[324165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:23 compute-0 python3.9[324167]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:27:24 compute-0 sudo[324165]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:24 compute-0 podman[324192]: 2025-10-02 19:27:24.898430697 +0000 UTC m=+0.075239884 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:27:24 compute-0 podman[324191]: 2025-10-02 19:27:24.914716748 +0000 UTC m=+0.091259008 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930)
Oct 02 19:27:25 compute-0 ceph-mon[191910]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:25 compute-0 sudo[324358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdyqztqdyujlfkuyceesvebizddclxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433244.1276958-386-263905708209675/AnsiballZ_copy.py'
Oct 02 19:27:25 compute-0 sudo[324358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:26 compute-0 python3.9[324360]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433244.1276958-386-263905708209675/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:26 compute-0 sudo[324358]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:26 compute-0 sudo[324397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:26 compute-0 sudo[324397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:26 compute-0 sudo[324397]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:26 compute-0 sudo[324480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvvrjocxsqhpwpmhftvqbinrsmmxpae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433244.1276958-386-263905708209675/AnsiballZ_systemd.py'
Oct 02 19:27:26 compute-0 sudo[324480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:26 compute-0 sudo[324443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:27:26 compute-0 sudo[324443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:26 compute-0 sudo[324443]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:26 compute-0 sudo[324487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:26 compute-0 sudo[324487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:26 compute-0 sudo[324487]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:26 compute-0 sudo[324512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:27:26 compute-0 sudo[324512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:26 compute-0 python3.9[324484]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:27:26 compute-0 systemd[1]: Reloading.
Oct 02 19:27:27 compute-0 systemd-sysv-generator[324577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:27:27 compute-0 systemd-rc-local-generator[324573]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:27:27 compute-0 sudo[324480]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:27 compute-0 sudo[324512]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:27:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fefc90fe-cd6c-4664-a494-547a54c36854 does not exist
Oct 02 19:27:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f820c679-6284-414a-82fe-dd57b370f121 does not exist
Oct 02 19:27:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4b83792e-627e-427b-a3f3-97751dc77f7f does not exist
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:27:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:27:27 compute-0 sudo[324600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:27 compute-0 sudo[324600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:27 compute-0 sudo[324600]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:27 compute-0 ceph-mon[191910]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:27:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:27:27 compute-0 sudo[324625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:27:27 compute-0 sudo[324625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:27 compute-0 sudo[324625]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:27 compute-0 sudo[324673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:27 compute-0 sudo[324673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:27 compute-0 sudo[324673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:27 compute-0 sudo[324722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:27:27 compute-0 sudo[324722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:27 compute-0 sudo[324773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxnsauudhuscobeashtfvynjbzapkxpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433244.1276958-386-263905708209675/AnsiballZ_systemd.py'
Oct 02 19:27:28 compute-0 sudo[324773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:28 compute-0 python3.9[324775]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:27:28 compute-0 systemd[1]: Reloading.
Oct 02 19:27:28 compute-0 podman[324811]: 2025-10-02 19:27:28.505156961 +0000 UTC m=+0.109963642 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.529083645 +0000 UTC m=+0.089412288 container create 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:27:28 compute-0 podman[324815]: 2025-10-02 19:27:28.545447759 +0000 UTC m=+0.142322360 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.50697822 +0000 UTC m=+0.067306863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:28 compute-0 systemd-sysv-generator[324905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:27:28 compute-0 systemd-rc-local-generator[324900]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:27:28 compute-0 systemd[1]: Started libpod-conmon-91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291.scope.
Oct 02 19:27:28 compute-0 systemd[1]: Starting iscsid container...
Oct 02 19:27:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.941022156 +0000 UTC m=+0.501350789 container init 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.954478052 +0000 UTC m=+0.514806675 container start 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.959938797 +0000 UTC m=+0.520267440 container attach 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:27:28 compute-0 vigilant_chaplygin[324913]: 167 167
Oct 02 19:27:28 compute-0 systemd[1]: libpod-91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291.scope: Deactivated successfully.
Oct 02 19:27:28 compute-0 podman[324831]: 2025-10-02 19:27:28.967502987 +0000 UTC m=+0.527831650 container died 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e533729668ad987a6fd26becb0939c6c7443946428403f012d15cfcbad55047f-merged.mount: Deactivated successfully.
Oct 02 19:27:29 compute-0 podman[324831]: 2025-10-02 19:27:29.039090383 +0000 UTC m=+0.599419016 container remove 91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:29 compute-0 systemd[1]: libpod-conmon-91fb725c0bd45eae1963a49df9227b85ee00e81027d95b95dc70ae59cd826291.scope: Deactivated successfully.
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcd02cba70966bcdfe0eeb52e59f25e9d63f458c0f18533c284fa3ab9dbfb61/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcd02cba70966bcdfe0eeb52e59f25e9d63f458c0f18533c284fa3ab9dbfb61/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fcd02cba70966bcdfe0eeb52e59f25e9d63f458c0f18533c284fa3ab9dbfb61/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.
Oct 02 19:27:29 compute-0 podman[324914]: 2025-10-02 19:27:29.148008078 +0000 UTC m=+0.241524848 container init a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 02 19:27:29 compute-0 iscsid[324944]: + sudo -E kolla_set_configs
Oct 02 19:27:29 compute-0 sudo[324956]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:27:29 compute-0 podman[324914]: 2025-10-02 19:27:29.201604587 +0000 UTC m=+0.295121317 container start a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:27:29 compute-0 podman[324914]: iscsid
Oct 02 19:27:29 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 19:27:29 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 19:27:29 compute-0 systemd[1]: Started iscsid container.
Oct 02 19:27:29 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 19:27:29 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 19:27:29 compute-0 sudo[324773]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:29 compute-0 podman[324959]: 2025-10-02 19:27:29.281546615 +0000 UTC m=+0.083385790 container create 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:29 compute-0 systemd[324982]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 19:27:29 compute-0 systemd[1]: Started libpod-conmon-2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4.scope.
Oct 02 19:27:29 compute-0 podman[324958]: 2025-10-02 19:27:29.339558091 +0000 UTC m=+0.119398133 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:27:29 compute-0 podman[324959]: 2025-10-02 19:27:29.256726637 +0000 UTC m=+0.058565882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:29 compute-0 systemd[1]: a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92-4fb579332f6c27e0.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:27:29 compute-0 systemd[1]: a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92-4fb579332f6c27e0.service: Failed with result 'exit-code'.
Oct 02 19:27:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:29 compute-0 podman[324959]: 2025-10-02 19:27:29.404267295 +0000 UTC m=+0.206106480 container init 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:27:29 compute-0 podman[324959]: 2025-10-02 19:27:29.42107539 +0000 UTC m=+0.222914565 container start 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:27:29 compute-0 podman[324959]: 2025-10-02 19:27:29.425719453 +0000 UTC m=+0.227558668 container attach 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:27:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:29 compute-0 systemd[324982]: Queued start job for default target Main User Target.
Oct 02 19:27:29 compute-0 systemd[324982]: Created slice User Application Slice.
Oct 02 19:27:29 compute-0 systemd[324982]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 19:27:29 compute-0 systemd[324982]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 19:27:29 compute-0 systemd[324982]: Reached target Paths.
Oct 02 19:27:29 compute-0 systemd[324982]: Reached target Timers.
Oct 02 19:27:29 compute-0 systemd[324982]: Starting D-Bus User Message Bus Socket...
Oct 02 19:27:29 compute-0 systemd[324982]: Starting Create User's Volatile Files and Directories...
Oct 02 19:27:29 compute-0 systemd[324982]: Listening on D-Bus User Message Bus Socket.
Oct 02 19:27:29 compute-0 systemd[324982]: Reached target Sockets.
Oct 02 19:27:29 compute-0 systemd[324982]: Finished Create User's Volatile Files and Directories.
Oct 02 19:27:29 compute-0 systemd[324982]: Reached target Basic System.
Oct 02 19:27:29 compute-0 systemd[324982]: Reached target Main User Target.
Oct 02 19:27:29 compute-0 systemd[324982]: Startup finished in 193ms.
Oct 02 19:27:29 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 19:27:29 compute-0 systemd[1]: Started Session c3 of User root.
Oct 02 19:27:29 compute-0 sudo[324956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:27:29 compute-0 iscsid[324944]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:27:29 compute-0 iscsid[324944]: INFO:__main__:Validating config file
Oct 02 19:27:29 compute-0 iscsid[324944]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:27:29 compute-0 iscsid[324944]: INFO:__main__:Writing out command to execute
Oct 02 19:27:29 compute-0 sudo[324956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:29 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 02 19:27:29 compute-0 iscsid[324944]: ++ cat /run_command
Oct 02 19:27:29 compute-0 iscsid[324944]: + CMD='/usr/sbin/iscsid -f'
Oct 02 19:27:29 compute-0 iscsid[324944]: + ARGS=
Oct 02 19:27:29 compute-0 iscsid[324944]: + sudo kolla_copy_cacerts
Oct 02 19:27:29 compute-0 sudo[325072]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:27:29 compute-0 systemd[1]: Started Session c4 of User root.
Oct 02 19:27:29 compute-0 sudo[325072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:27:29 compute-0 sudo[325072]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:29 compute-0 ceph-mon[191910]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:29 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 02 19:27:29 compute-0 iscsid[324944]: + [[ ! -n '' ]]
Oct 02 19:27:29 compute-0 iscsid[324944]: + . kolla_extend_start
Oct 02 19:27:29 compute-0 iscsid[324944]: Running command: '/usr/sbin/iscsid -f'
Oct 02 19:27:29 compute-0 iscsid[324944]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 02 19:27:29 compute-0 iscsid[324944]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 02 19:27:29 compute-0 iscsid[324944]: + umask 0022
Oct 02 19:27:29 compute-0 iscsid[324944]: + exec /usr/sbin/iscsid -f
Oct 02 19:27:29 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 02 19:27:29 compute-0 podman[157186]: time="2025-10-02T19:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:27:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 39906 "" "Go-http-client/1.1"
Oct 02 19:27:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8114 "" "Go-http-client/1.1"
Oct 02 19:27:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:30 compute-0 python3.9[325178]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:27:30 compute-0 nice_varahamihira[325008]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:27:30 compute-0 nice_varahamihira[325008]: --> relative data size: 1.0
Oct 02 19:27:30 compute-0 nice_varahamihira[325008]: --> All data devices are unavailable
Oct 02 19:27:30 compute-0 systemd[1]: libpod-2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4.scope: Deactivated successfully.
Oct 02 19:27:30 compute-0 systemd[1]: libpod-2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4.scope: Consumed 1.145s CPU time.
Oct 02 19:27:30 compute-0 podman[324959]: 2025-10-02 19:27:30.642957252 +0000 UTC m=+1.444796427 container died 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f49800fedcaa7dd6b248aae501b93d44a526b761d6d76eb07513bcd743d10303-merged.mount: Deactivated successfully.
Oct 02 19:27:30 compute-0 podman[324959]: 2025-10-02 19:27:30.731677782 +0000 UTC m=+1.533516957 container remove 2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_varahamihira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:27:30 compute-0 systemd[1]: libpod-conmon-2412a0a0ab40b82992e1395a58a14c3f03de5f20e25db7c012954ef41907f5b4.scope: Deactivated successfully.
Oct 02 19:27:30 compute-0 sudo[324722]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:30 compute-0 sudo[325313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:30 compute-0 sudo[325313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:30 compute-0 sudo[325313]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:31 compute-0 sudo[325361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:27:31 compute-0 sudo[325361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:31 compute-0 sudo[325361]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:31 compute-0 sudo[325415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhlyavtkncvxpozvkivszgqmkjxdocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433250.5665681-423-59696047676924/AnsiballZ_file.py'
Oct 02 19:27:31 compute-0 sudo[325415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:31 compute-0 sudo[325413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:31 compute-0 sudo[325413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:31 compute-0 sudo[325413]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:31 compute-0 sudo[325441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:27:31 compute-0 sudo[325441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:31 compute-0 python3.9[325432]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:31 compute-0 sudo[325415]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:31 compute-0 openstack_network_exporter[159337]: ERROR   19:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:27:31 compute-0 openstack_network_exporter[159337]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:31 compute-0 openstack_network_exporter[159337]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:31 compute-0 openstack_network_exporter[159337]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:27:31 compute-0 openstack_network_exporter[159337]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:27:31 compute-0 ceph-mon[191910]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:31 compute-0 podman[325547]: 2025-10-02 19:27:31.859615836 +0000 UTC m=+0.074167505 container create 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:27:31 compute-0 systemd[1]: Started libpod-conmon-9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4.scope.
Oct 02 19:27:31 compute-0 podman[325547]: 2025-10-02 19:27:31.830853024 +0000 UTC m=+0.045404683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:31 compute-0 podman[325547]: 2025-10-02 19:27:31.971117619 +0000 UTC m=+0.185669268 container init 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:27:31 compute-0 podman[325547]: 2025-10-02 19:27:31.987723639 +0000 UTC m=+0.202275288 container start 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:27:31 compute-0 podman[325547]: 2025-10-02 19:27:31.993333108 +0000 UTC m=+0.207884817 container attach 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:27:31 compute-0 tender_sanderson[325591]: 167 167
Oct 02 19:27:32 compute-0 systemd[1]: libpod-9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4.scope: Deactivated successfully.
Oct 02 19:27:32 compute-0 podman[325547]: 2025-10-02 19:27:32.000615631 +0000 UTC m=+0.215167270 container died 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:27:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-da4c4399657b8fe09a6e71b4d48773e21134780fb9b2c9d768fd19d7652b004d-merged.mount: Deactivated successfully.
Oct 02 19:27:32 compute-0 podman[325547]: 2025-10-02 19:27:32.068488288 +0000 UTC m=+0.283039937 container remove 9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:32 compute-0 systemd[1]: libpod-conmon-9f40129b1df43841809a44f23c861f227c16357160e2f8fa8e66ec443d1d67f4.scope: Deactivated successfully.
Oct 02 19:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:27:32.271 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:27:32.273 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:27:32.273 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:27:32 compute-0 sudo[325703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcycogfxcdiwojlvsibzlzgyrpsngjek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433251.8028367-434-241650107822112/AnsiballZ_service_facts.py'
Oct 02 19:27:32 compute-0 podman[325671]: 2025-10-02 19:27:32.321715704 +0000 UTC m=+0.074064052 container create 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:27:32 compute-0 sudo[325703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:32 compute-0 systemd[1]: Started libpod-conmon-46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c.scope.
Oct 02 19:27:32 compute-0 podman[325671]: 2025-10-02 19:27:32.296077955 +0000 UTC m=+0.048426333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d28189e4cc4cee262e56977f38cbe30668332254ddeca06aa49bbbaee2ead2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d28189e4cc4cee262e56977f38cbe30668332254ddeca06aa49bbbaee2ead2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d28189e4cc4cee262e56977f38cbe30668332254ddeca06aa49bbbaee2ead2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d28189e4cc4cee262e56977f38cbe30668332254ddeca06aa49bbbaee2ead2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:32 compute-0 podman[325671]: 2025-10-02 19:27:32.476553005 +0000 UTC m=+0.228901423 container init 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:27:32 compute-0 podman[325671]: 2025-10-02 19:27:32.495833106 +0000 UTC m=+0.248181474 container start 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:27:32 compute-0 podman[325671]: 2025-10-02 19:27:32.501921707 +0000 UTC m=+0.254270065 container attach 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:27:32 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:27:32 compute-0 python3.9[325708]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:27:32 compute-0 network[325733]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:27:32 compute-0 network[325734]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:27:32 compute-0 network[325735]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:27:32 compute-0 podman[325742]: 2025-10-02 19:27:32.855137872 +0000 UTC m=+0.104459977 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 19:27:32 compute-0 podman[325740]: 2025-10-02 19:27:32.884245333 +0000 UTC m=+0.138142910 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:27:32 compute-0 podman[325741]: 2025-10-02 19:27:32.888056204 +0000 UTC m=+0.142740221 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
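[annotation] The health_status events above come from podman's healthcheck timers; the config_data label carries each container's edpm_ansible definition, including the 'healthcheck' test command and its /openstack mount. A hedged sketch of watching these transitions programmatically, assuming `podman events --format json` emits one JSON object per line with the Name and health fields seen here (field names vary across podman versions):

    #!/usr/bin/env python3
    # Stream podman events and print container health transitions.
    # A sketch, not a stable API: field names are assumed from this log.
    import json, subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=health_status"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        name = ev.get("Name", "?")
        health = ev.get("HealthStatus") or \
                 ev.get("Attributes", {}).get("health_status")
        print(f"{name}: {health}")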
Oct 02 19:27:33 compute-0 silly_pascal[325711]: {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     "0": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "devices": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "/dev/loop3"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             ],
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_name": "ceph_lv0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_size": "21470642176",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "name": "ceph_lv0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "tags": {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_name": "ceph",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.crush_device_class": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.encrypted": "0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_id": "0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.vdo": "0"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             },
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "vg_name": "ceph_vg0"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         }
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     ],
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     "1": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "devices": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "/dev/loop4"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             ],
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_name": "ceph_lv1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_size": "21470642176",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "name": "ceph_lv1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "tags": {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_name": "ceph",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.crush_device_class": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.encrypted": "0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_id": "1",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.vdo": "0"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             },
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "vg_name": "ceph_vg1"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         }
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     ],
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     "2": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "devices": [
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "/dev/loop5"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             ],
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_name": "ceph_lv2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_size": "21470642176",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "name": "ceph_lv2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "tags": {
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.cluster_name": "ceph",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.crush_device_class": "",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.encrypted": "0",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osd_id": "2",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:                 "ceph.vdo": "0"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             },
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "type": "block",
Oct 02 19:27:33 compute-0 silly_pascal[325711]:             "vg_name": "ceph_vg2"
Oct 02 19:27:33 compute-0 silly_pascal[325711]:         }
Oct 02 19:27:33 compute-0 silly_pascal[325711]:     ]
Oct 02 19:27:33 compute-0 silly_pascal[325711]: }
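[annotation] The JSON block printed by the silly_pascal container is `ceph-volume lvm list --format json` output: keyed by OSD id, with the same metadata carried twice, once as the flat lv_tags string and once as the parsed tags map. A minimal sketch of reducing it to an osd_id -> backing-device table (field names taken from the log above):

    #!/usr/bin/env python3
    # Condense `ceph-volume lvm list --format json` into osd_id -> device info.
    import json, sys

    def osd_devices(report: dict) -> dict:
        out = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                # "tags" is the parsed form of the flat "lv_tags" string.
                tags = lv.get("tags", {})
                out[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                }
        return out

    if __name__ == "__main__":
        print(json.dumps(osd_devices(json.load(sys.stdin)), indent=2))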
Oct 02 19:27:33 compute-0 podman[325671]: 2025-10-02 19:27:33.354900869 +0000 UTC m=+1.107249197 container died 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:27:33 compute-0 ceph-mon[191910]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:33 compute-0 systemd[1]: libpod-46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c.scope: Deactivated successfully.
Oct 02 19:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d28189e4cc4cee262e56977f38cbe30668332254ddeca06aa49bbbaee2ead2-merged.mount: Deactivated successfully.
Oct 02 19:27:33 compute-0 podman[325671]: 2025-10-02 19:27:33.84012568 +0000 UTC m=+1.592474038 container remove 46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:27:33 compute-0 systemd[1]: libpod-conmon-46e964a8502dfb6632bad6e3701f8df2dea430fe097998dd4f13eabfdebe1e4c.scope: Deactivated successfully.
Oct 02 19:27:33 compute-0 sudo[325441]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:34 compute-0 sudo[325822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:34 compute-0 sudo[325822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:34 compute-0 sudo[325822]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:34 compute-0 sudo[325848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:27:34 compute-0 sudo[325848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:34 compute-0 sudo[325848]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:34 compute-0 sudo[325873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:34 compute-0 sudo[325873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:34 compute-0 sudo[325873]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:34 compute-0 sudo[325898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:27:34 compute-0 sudo[325898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
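[annotation] The sudo line above shows how cephadm shells out: the mgr drops a versioned cephadm binary under /var/lib/ceph/<fsid>/ and runs `ceph-volume ... raw list --format json` inside the pinned ceph container image. The same inventory call can be reproduced by hand; a sketch, assuming a cephadm binary on PATH (the fsid and image digest are copied from the log, the --timeout argument is dropped):

    #!/usr/bin/env python3
    # Re-issue the inventory call cephadm ran above, by hand.
    import json, subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"   # from the log
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    cmd = ["cephadm", "--image", IMAGE,
           "ceph-volume", "--fsid", FSID, "--",
           "raw", "list", "--format", "json"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(json.dumps(json.loads(result.stdout), indent=2))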
Oct 02 19:27:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.004106679 +0000 UTC m=+0.088535906 container create 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:34.97206492 +0000 UTC m=+0.056494217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:35 compute-0 systemd[1]: Started libpod-conmon-2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e.scope.
Oct 02 19:27:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.173259699 +0000 UTC m=+0.257688946 container init 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.185512854 +0000 UTC m=+0.269942071 container start 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.193104005 +0000 UTC m=+0.277533212 container attach 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:35 compute-0 nervous_ramanujan[325993]: 167 167
Oct 02 19:27:35 compute-0 podman[325978]: 2025-10-02 19:27:35.19821984 +0000 UTC m=+0.157008629 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm)
Oct 02 19:27:35 compute-0 systemd[1]: libpod-2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e.scope: Deactivated successfully.
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.200721016 +0000 UTC m=+0.285150253 container died 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:27:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3c51cfd7d3345d909372c8f72486cde4ce288de6de8a5d2f87cfca557f2080-merged.mount: Deactivated successfully.
Oct 02 19:27:35 compute-0 podman[325965]: 2025-10-02 19:27:35.269139098 +0000 UTC m=+0.353568305 container remove 2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:27:35 compute-0 systemd[1]: libpod-conmon-2fe113164c0589ff078c24c253190104c9ac81ab83d5590e97a03841be39442e.scope: Deactivated successfully.
Oct 02 19:27:35 compute-0 podman[326035]: 2025-10-02 19:27:35.481595206 +0000 UTC m=+0.065836405 container create e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:27:35 compute-0 systemd[1]: Started libpod-conmon-e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f.scope.
Oct 02 19:27:35 compute-0 podman[326035]: 2025-10-02 19:27:35.458707029 +0000 UTC m=+0.042948248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:27:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf60c3b20b569a071c61824c422ac7d0e71aeafab6d14aae1eb105efbe768a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf60c3b20b569a071c61824c422ac7d0e71aeafab6d14aae1eb105efbe768a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf60c3b20b569a071c61824c422ac7d0e71aeafab6d14aae1eb105efbe768a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf60c3b20b569a071c61824c422ac7d0e71aeafab6d14aae1eb105efbe768a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:27:35 compute-0 podman[326035]: 2025-10-02 19:27:35.653245482 +0000 UTC m=+0.237486691 container init e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:27:35 compute-0 podman[326035]: 2025-10-02 19:27:35.672808689 +0000 UTC m=+0.257049878 container start e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:27:35 compute-0 podman[326035]: 2025-10-02 19:27:35.682236229 +0000 UTC m=+0.266477428 container attach e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:35 compute-0 ceph-mon[191910]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:36 compute-0 agitated_germain[326055]: {
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_id": 1,
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "type": "bluestore"
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     },
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_id": 2,
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "type": "bluestore"
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     },
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_id": 0,
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:27:36 compute-0 agitated_germain[326055]:         "type": "bluestore"
Oct 02 19:27:36 compute-0 agitated_germain[326055]:     }
Oct 02 19:27:36 compute-0 agitated_germain[326055]: }
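[annotation] Unlike the lvm listing earlier, the `raw list` report above is keyed by osd_uuid and gives the device-mapper path rather than the LV path. Both views describe the same three bluestore OSDs; a sketch that cross-checks them by osd_fsid/osd_uuid, assuming the two JSON documents were saved locally as lvm.json and raw.json:

    #!/usr/bin/env python3
    # Cross-check `ceph-volume lvm list` against `ceph-volume raw list`.
    import json

    lvm = json.load(open("lvm.json"))   # keyed by osd_id
    raw = json.load(open("raw.json"))   # keyed by osd_uuid

    lvm_by_uuid = {
        lv["tags"]["ceph.osd_fsid"]: (osd_id, lv["lv_path"])
        for osd_id, lvs in lvm.items() for lv in lvs
    }
    for uuid, entry in raw.items():
        osd_id, lv_path = lvm_by_uuid.get(uuid, ("?", "?"))
        ok = str(entry["osd_id"]) == str(osd_id)
        print(f"osd.{entry['osd_id']} {uuid}: raw={entry['device']} "
              f"lvm={lv_path} {'OK' if ok else 'MISMATCH'}")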
Oct 02 19:27:36 compute-0 systemd[1]: libpod-e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f.scope: Deactivated successfully.
Oct 02 19:27:36 compute-0 systemd[1]: libpod-e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f.scope: Consumed 1.060s CPU time.
Oct 02 19:27:36 compute-0 podman[326127]: 2025-10-02 19:27:36.796759257 +0000 UTC m=+0.029906063 container died e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6cf60c3b20b569a071c61824c422ac7d0e71aeafab6d14aae1eb105efbe768a-merged.mount: Deactivated successfully.
Oct 02 19:27:36 compute-0 podman[326127]: 2025-10-02 19:27:36.872603516 +0000 UTC m=+0.105750302 container remove e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:27:36 compute-0 systemd[1]: libpod-conmon-e925180ed8ba80d8e73b4e8deeada60b66483720e3d57757dfe74f65fffe3b2f.scope: Deactivated successfully.
Oct 02 19:27:36 compute-0 sudo[325898]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:27:36 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:27:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:27:36 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
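[annotation] The two mon_command audit lines show the cephadm mgr module persisting the host's refreshed device inventory into the monitor's config-key store. The stored values can be read back with the keys named in the log; a sketch, assuming the values are the JSON blobs the cephadm module normally writes under these keys:

    #!/usr/bin/env python3
    # Read back the inventory cephadm just stored in the mon KV store.
    # Key names are copied from the audit lines above.
    import json, subprocess

    for key in ("mgr/cephadm/host.compute-0.devices.0",
                "mgr/cephadm/host.compute-0"):
        out = subprocess.run(["ceph", "config-key", "get", key],
                             capture_output=True, text=True, check=True)
        blob = json.loads(out.stdout)   # cephadm stores JSON here
        print(key, "->", type(blob).__name__)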
Oct 02 19:27:36 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b94cc443-b69f-4d0a-b7e2-9b8273ddc398 does not exist
Oct 02 19:27:36 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3822f5dc-4cd3-47e5-96d9-18f8673d104b does not exist
Oct 02 19:27:37 compute-0 sudo[326148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:27:37 compute-0 sudo[326148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:37 compute-0 sudo[326148]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:37 compute-0 sudo[326177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:27:37 compute-0 sudo[326177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:27:37 compute-0 sudo[326177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:37 compute-0 ceph-mon[191910]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:27:37 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:27:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:38 compute-0 sudo[325703]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:39 compute-0 sudo[326395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qifffcblspewjrrwbloibtpuigieceru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433258.8179405-444-220410961234470/AnsiballZ_file.py'
Oct 02 19:27:39 compute-0 sudo[326395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:39 compute-0 python3.9[326397]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:27:39 compute-0 sudo[326395]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:39 compute-0 ceph-mon[191910]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:39 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 19:27:39 compute-0 systemd[324982]: Activating special unit Exit the Session...
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped target Main User Target.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped target Basic System.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped target Paths.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped target Sockets.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped target Timers.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 19:27:39 compute-0 systemd[324982]: Closed D-Bus User Message Bus Socket.
Oct 02 19:27:39 compute-0 systemd[324982]: Stopped Create User's Volatile Files and Directories.
Oct 02 19:27:39 compute-0 systemd[324982]: Removed slice User Application Slice.
Oct 02 19:27:39 compute-0 systemd[324982]: Reached target Shutdown.
Oct 02 19:27:39 compute-0 systemd[324982]: Finished Exit the Session.
Oct 02 19:27:39 compute-0 systemd[324982]: Reached target Exit the Session.
Oct 02 19:27:39 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 19:27:39 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 19:27:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 19:27:39 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 19:27:39 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 19:27:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 19:27:39 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 19:27:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:40 compute-0 ceph-mon[191910]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:41 compute-0 sudo[326549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxvwxxtkbmrmuyvxsfffpbnksuicjzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433259.9075809-452-135305811985670/AnsiballZ_modprobe.py'
Oct 02 19:27:41 compute-0 sudo[326549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:41 compute-0 python3.9[326551]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 02 19:27:41 compute-0 sudo[326549]: pam_unix(sudo:session): session closed for user root
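[annotation] The community.general.modprobe task above, with state=present and persistent=disabled, only loads dm-multipath into the running kernel; persistence is handled separately by the modules-load.d file written a few lines below. The runtime step amounts to the following (a sketch of the module's effect, not its implementation):

    #!/usr/bin/env python3
    # Load dm-multipath now, idempotently: skip if already present.
    import subprocess

    MODULE = "dm-multipath"

    def loaded(name: str) -> bool:
        # /proc/modules lists loaded modules with '-' normalized to '_'.
        with open("/proc/modules") as f:
            return any(line.split()[0] == name.replace("-", "_")
                       for line in f)

    if not loaded(MODULE):
        subprocess.run(["modprobe", MODULE], check=True)
    print(f"{MODULE} loaded: {loaded(MODULE)}")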
Oct 02 19:27:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:42 compute-0 sudo[326705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmnxtnmgglrixxyjndhvtjlvwyejlyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433262.0028515-460-158352417374580/AnsiballZ_stat.py'
Oct 02 19:27:42 compute-0 sudo[326705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:42 compute-0 python3.9[326707]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:27:42 compute-0 sudo[326705]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:43 compute-0 ceph-mon[191910]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:43 compute-0 sudo[326828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkyhcwbtujkavrjpgopfcikkpguoiwdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433262.0028515-460-158352417374580/AnsiballZ_copy.py'
Oct 02 19:27:43 compute-0 sudo[326828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:43 compute-0 python3.9[326830]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433262.0028515-460-158352417374580/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:43 compute-0 sudo[326828]: pam_unix(sudo:session): session closed for user root
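[annotation] The copy task above is what makes the module load survive reboots: systemd-modules-load.service reads every *.conf under /etc/modules-load.d at boot, and the rendered file is simply one module name per line. A sketch of writing it the way ansible's copy module does, via an atomic temp-file rename, with the path and mode from the log:

    #!/usr/bin/env python3
    # Persist dm-multipath for boot via modules-load.d, written atomically.
    import os, tempfile

    path = "/etc/modules-load.d/dm-multipath.conf"
    content = "dm-multipath\n"

    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmp, 0o644)
        os.replace(tmp, path)   # atomic on the same filesystem
    except BaseException:
        os.unlink(tmp)
        raise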
Oct 02 19:27:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.450243) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264450331, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1164, "num_deletes": 505, "total_data_size": 1316699, "memory_usage": 1349600, "flush_reason": "Manual Compaction"}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264460767, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1293856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13584, "largest_seqno": 14747, "table_properties": {"data_size": 1288637, "index_size": 2231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13453, "raw_average_key_size": 17, "raw_value_size": 1276296, "raw_average_value_size": 1688, "num_data_blocks": 102, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433179, "oldest_key_time": 1759433179, "file_creation_time": 1759433264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10586 microseconds, and 5411 cpu microseconds.
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.460838) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1293856 bytes OK
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.460860) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.463365) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.463417) EVENT_LOG_v1 {"time_micros": 1759433264463412, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.463435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1310278, prev total WAL file size 1310278, number of live WAL files 2.
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.464714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1263KB)], [32(7401KB)]
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264464768, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8872959, "oldest_snapshot_seqno": -1}
Oct 02 19:27:44 compute-0 sudo[326980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwegsphyqeugksdrifqfvzwkhzzowzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433264.0721853-476-260835899568822/AnsiballZ_lineinfile.py'
Oct 02 19:27:44 compute-0 sudo[326980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3750 keys, 6975730 bytes, temperature: kUnknown
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264512803, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6975730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6949046, "index_size": 16207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 92030, "raw_average_key_size": 24, "raw_value_size": 6879477, "raw_average_value_size": 1834, "num_data_blocks": 687, "num_entries": 3750, "num_filter_entries": 3750, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.513162) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6975730 bytes
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.515508) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.4 rd, 144.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 4773, records dropped: 1023 output_compression: NoCompression
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.515538) EVENT_LOG_v1 {"time_micros": 1759433264515525, "job": 14, "event": "compaction_finished", "compaction_time_micros": 48131, "compaction_time_cpu_micros": 32889, "output_level": 6, "num_output_files": 1, "total_output_size": 6975730, "num_input_records": 4773, "num_output_records": 3750, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264516182, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433264519297, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.464502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.519689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.519695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.519698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.519701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:27:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:27:44.519704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
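[annotation] The rocksdb burst above is one complete flush-then-compact cycle on the mon store: JOB 13 flushes a ~1.26 MB memtable to L0 table #34, and JOB 14 immediately merges #34 with the existing L6 table #32 into table #35. The amplification figures in the compaction summary follow directly from the EVENT_LOG byte counts; a check in Python using the numbers logged above:

    #!/usr/bin/env python3
    # Verify the compaction summary's amplification figures (JOB 14).
    l0_in = 1293856          # table #34, flushed by JOB 13
    l6_in = 8872959 - l0_in  # input_data_size minus the L0 file = table #32
    out   = 6975730          # table #35, the compaction output

    write_amplify = out / l0_in                    # ~5.39, logged as 5.4
    rw_amplify = (l0_in + l6_in + out) / l0_in     # ~12.25, logged as 12.2
    print(f"write-amplify {write_amplify:.1f}, "
          f"read-write-amplify {rw_amplify:.1f}")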
Oct 02 19:27:44 compute-0 python3.9[326982]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:44 compute-0 sudo[326980]: pam_unix(sudo:session): session closed for user root
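[editor's note] The lineinfile call above is an idempotent "ensure this line exists" edit. A rough functional sketch of its create=True / state=present behavior (not the module's actual implementation):

    from pathlib import Path

    def line_present(path: str, line: str, mode: int = 0o644) -> bool:
        """Ensure `line` occurs in `path`; create the file if missing."""
        p = Path(path)
        if not p.exists():
            p.write_text(line + "\n")
            p.chmod(mode)
            return True                               # changed
        lines = p.read_text().splitlines()
        if line in lines:
            return False                              # already present -> "ok"
        p.write_text("\n".join(lines + [line]) + "\n")
        return True                                   # appended -> "changed"

    # line_present("/etc/modules", "dm-multipath")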
Oct 02 19:27:45 compute-0 ceph-mon[191910]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:45 compute-0 sudo[327132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rakxqxsozaajkyljmetafettrjbsifmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433265.046635-484-55493647571759/AnsiballZ_systemd.py'
Oct 02 19:27:45 compute-0 sudo[327132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:45 compute-0 python3.9[327134]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:27:45 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 19:27:45 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 19:27:45 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 19:27:45 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 19:27:45 compute-0 systemd[1]: Finished Load Kernel Modules.
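[editor's note] The restart above re-applies the kernel-module load configuration after the dm-multipath entry was written; systemctl itself produces the Stopping/Starting/Finished sequence in the journal. A rough command-line equivalent of the logged ansible.builtin.systemd call (state=restarted):

    import subprocess

    # daemon_reload=False in the logged call, so no daemon-reload first.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)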
Oct 02 19:27:46 compute-0 sudo[327132]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:46 compute-0 sudo[327288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrvjrqowcetrldxwcqopqjxlfmifsfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433266.3191223-492-253375171627835/AnsiballZ_file.py'
Oct 02 19:27:46 compute-0 sudo[327288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:47 compute-0 python3.9[327290]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:27:47 compute-0 sudo[327288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:47 compute-0 ceph-mon[191910]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:47 compute-0 sudo[327440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspjduxaekbmllvlvbhmmrxylgqsdwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433267.3041377-501-124958870480600/AnsiballZ_stat.py'
Oct 02 19:27:47 compute-0 sudo[327440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:47 compute-0 python3.9[327442]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:27:47 compute-0 sudo[327440]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:48 compute-0 sudo[327592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbravfzsbqdsogaeskzhplbgxscfxolx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433268.2797956-510-167089156876212/AnsiballZ_stat.py'
Oct 02 19:27:48 compute-0 sudo[327592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:48 compute-0 python3.9[327594]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:27:48 compute-0 sudo[327592]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:49 compute-0 ceph-mon[191910]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:50 compute-0 sudo[327745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdiswidevlsjjlleworcsmqghexjgzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433269.253097-518-74113452155220/AnsiballZ_stat.py'
Oct 02 19:27:50 compute-0 sudo[327745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:50 compute-0 python3.9[327747]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:27:50 compute-0 sudo[327745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:51 compute-0 sudo[327868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpymeovcporisbvndgrwxtwbvjswpiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433269.253097-518-74113452155220/AnsiballZ_copy.py'
Oct 02 19:27:51 compute-0 sudo[327868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:51 compute-0 python3.9[327870]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433269.253097-518-74113452155220/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:51 compute-0 sudo[327868]: pam_unix(sudo:session): session closed for user root
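[editor's note] The stat/copy pair above is how the play avoids rewriting an unchanged file: the copy only happens when the rendered source's SHA-1 (logged as checksum=bf02ab26...) differs from the file on disk. A minimal sketch of that comparison:

    import hashlib

    def sha1_of(path: str) -> str:
        """SHA-1 digest of a file, streamed in chunks."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Rewrite only on mismatch with the logged source checksum:
    # sha1_of("/etc/multipath.conf") != "bf02ab264d3d648048a81f3bacec8bc58db93162"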
Oct 02 19:27:51 compute-0 ceph-mon[191910]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:52 compute-0 sudo[328020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbehprjkofnnuuzsewoqvfwptkzxpgbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433271.5531456-533-113183320480372/AnsiballZ_command.py'
Oct 02 19:27:52 compute-0 sudo[328020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:53 compute-0 python3.9[328022]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:27:53 compute-0 sudo[328020]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:53 compute-0 ceph-mon[191910]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:53 compute-0 sudo[328173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heajxpbxeychhtxpsqsjkfrgfnxbeprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433273.3147395-541-176113626916245/AnsiballZ_lineinfile.py'
Oct 02 19:27:53 compute-0 sudo[328173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:53 compute-0 python3.9[328175]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:53 compute-0 sudo[328173]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:55 compute-0 sudo[328325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdysahjxikfgpemxbobozepwxtltivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433274.279306-549-237872309936155/AnsiballZ_replace.py'
Oct 02 19:27:55 compute-0 sudo[328325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:55 compute-0 podman[328328]: 2025-10-02 19:27:55.175425835 +0000 UTC m=+0.126495511 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:27:55 compute-0 podman[328327]: 2025-10-02 19:27:55.201144447 +0000 UTC m=+0.154197786 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
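[editor's note] The health_status=healthy fields in the podman lines above come from the containers' periodic healthchecks. The same state can be read back on demand (a sketch; the container name is taken from the first of the two log lines):

    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", "podman_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)  # "healthy" while health_failing_streak stays 0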
Oct 02 19:27:55 compute-0 python3.9[328329]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:55 compute-0 sudo[328325]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:55 compute-0 ceph-mon[191910]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:56 compute-0 sudo[328519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagfwxadxwdicenahhirbpzfvqvbwoyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433275.5762-557-99114772909864/AnsiballZ_replace.py'
Oct 02 19:27:56 compute-0 sudo[328519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:56 compute-0 python3.9[328521]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:56 compute-0 sudo[328519]: pam_unix(sudo:session): session closed for user root
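[editor's note] The two replace tasks at 19:27:55 and 19:27:56 normalize the blacklist block for different starting states of /etc/multipath.conf: the first closes a bare `blacklist {` into an empty block, the second strips a blanket `devnode ".*"` entry. A sketch of both substitutions (ansible.builtin.replace applies its regexp with multiline semantics):

    import re

    # 19:27:55 task: close an unterminated "blacklist {" line.
    print(re.sub(r"^(blacklist {)", r"\1\n}", "blacklist {\n", flags=re.M))

    # 19:27:56 task: drop the catch-all devnode entry from the block.
    sample = 'blacklist {\n        devnode ".*"\n}\n'
    print(re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {", sample, flags=re.M))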
Oct 02 19:27:57 compute-0 sudo[328671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxrumpiijsgnyfyqulfwadtjooehawem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433276.6687152-566-42472368615376/AnsiballZ_lineinfile.py'
Oct 02 19:27:57 compute-0 sudo[328671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:57 compute-0 python3.9[328673]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:57 compute-0 sudo[328671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:57 compute-0 ceph-mon[191910]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:58 compute-0 sudo[328823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqwpjoetcycmynytfjxbsrahmrzojfql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433277.6629941-566-80650578293701/AnsiballZ_lineinfile.py'
Oct 02 19:27:58 compute-0 sudo[328823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:58 compute-0 python3.9[328825]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:58 compute-0 sudo[328823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:59 compute-0 sudo[329003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xalmxfjudqsnrrxyrvidpiejetrwwcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433278.6211934-566-143667362374281/AnsiballZ_lineinfile.py'
Oct 02 19:27:59 compute-0 sudo[329003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:27:59 compute-0 podman[328949]: 2025-10-02 19:27:59.187043104 +0000 UTC m=+0.147492197 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41)
Oct 02 19:27:59 compute-0 podman[328950]: 2025-10-02 19:27:59.246664494 +0000 UTC m=+0.202564506 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:27:59 compute-0 python3.9[329011]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:27:59 compute-0 sudo[329003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:27:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:27:59 compute-0 ceph-mon[191910]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:27:59 compute-0 podman[329045]: 2025-10-02 19:27:59.696881798 +0000 UTC m=+0.119668421 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:27:59 compute-0 podman[157186]: time="2025-10-02T19:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:27:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38197 "" "Go-http-client/1.1"
Oct 02 19:27:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7703 "" "Go-http-client/1.1"
Oct 02 19:28:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:00 compute-0 sudo[329187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rslgnrsgxtqktxthefwtawiltbpaenjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433279.6525836-566-78456130873201/AnsiballZ_lineinfile.py'
Oct 02 19:28:00 compute-0 sudo[329187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:00 compute-0 python3.9[329189]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:00 compute-0 sudo[329187]: pam_unix(sudo:session): session closed for user root
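[editor's note] Each of the four lineinfile tasks at 19:27:57-19:28:00 inserts its setting immediately after the line matching ^defaults (or replaces an existing line matching its regexp). On a file where none of the settings were present yet, the last-run task's line ends up closest to the section header, giving a fragment like this (the option lines are verbatim from the logged tasks; the surrounding braces are standard multipath.conf syntax, assumed here):

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }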
Oct 02 19:28:01 compute-0 sudo[329339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svidmrtmutzyqganhjlqdriskfekuptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433280.6002414-595-53581924455256/AnsiballZ_stat.py'
Oct 02 19:28:01 compute-0 sudo[329339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:01 compute-0 python3.9[329341]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:28:01 compute-0 sudo[329339]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:01 compute-0 openstack_network_exporter[159337]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:01 compute-0 openstack_network_exporter[159337]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:01 compute-0 openstack_network_exporter[159337]: ERROR   19:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:28:01 compute-0 openstack_network_exporter[159337]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:28:01 compute-0 openstack_network_exporter[159337]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:28:01 compute-0 ceph-mon[191910]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:02 compute-0 sudo[329493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atadlebepsrrzyzgfdmgpnuibaxhmxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433281.5977623-603-43843825831964/AnsiballZ_file.py'
Oct 02 19:28:02 compute-0 sudo[329493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:02 compute-0 python3.9[329495]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:02 compute-0 sudo[329493]: pam_unix(sudo:session): session closed for user root
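[editor's note] The file touched above acts as a sentinel: a later step (not shown in this excerpt; an assumption about the play's structure) would restart multipathd only when the flag exists. The pattern, sketched:

    from pathlib import Path
    import subprocess

    flag = Path("/etc/multipath/.multipath_restart_required")
    if flag.exists():
        subprocess.run(["systemctl", "restart", "multipathd"], check=True)
        flag.unlink()  # clear the flag so the restart happens only once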
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:28:03
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.mgr']
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:28:03 compute-0 ceph-mon[191910]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:28:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:28:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:04 compute-0 podman[329619]: 2025-10-02 19:28:04.18891156 +0000 UTC m=+0.087174029 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:28:04 compute-0 podman[329621]: 2025-10-02 19:28:04.189357012 +0000 UTC m=+0.074880844 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:28:04 compute-0 sudo[329698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrptvvymtwggewbzjkgmhtvxzrmzktn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433282.7208004-612-32043249377418/AnsiballZ_file.py'
Oct 02 19:28:04 compute-0 sudo[329698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:04 compute-0 podman[329620]: 2025-10-02 19:28:04.249147926 +0000 UTC m=+0.129214343 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:28:04 compute-0 python3.9[329706]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:04 compute-0 sudo[329698]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:05 compute-0 ceph-mon[191910]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:05 compute-0 podman[329830]: 2025-10-02 19:28:05.709139993 +0000 UTC m=+0.126458950 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release-0.7.12=, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:28:05 compute-0 sudo[329875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwfafderoflauqsxrobovqtpherpcdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433284.704676-620-187151363192948/AnsiballZ_stat.py'
Oct 02 19:28:05 compute-0 sudo[329875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:06 compute-0 python3.9[329877]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:06 compute-0 sudo[329875]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:06 compute-0 sudo[329953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvahciovbnkawjqdxnltygjmweyxixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433284.704676-620-187151363192948/AnsiballZ_file.py'
Oct 02 19:28:06 compute-0 sudo[329953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:06 compute-0 python3.9[329955]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:06 compute-0 sudo[329953]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:07 compute-0 sudo[330105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnangbamdqexkomglmsplcdlabsyyrig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433286.8978772-620-239099640094406/AnsiballZ_stat.py'
Oct 02 19:28:07 compute-0 sudo[330105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:07 compute-0 python3.9[330107]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:07 compute-0 sudo[330105]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:07 compute-0 ceph-mon[191910]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:07 compute-0 sudo[330183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkkaaegopcxdlyaajkrnsqmjdwkusdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433286.8978772-620-239099640094406/AnsiballZ_file.py'
Oct 02 19:28:07 compute-0 sudo[330183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:08 compute-0 python3.9[330185]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:08 compute-0 sudo[330183]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:08 compute-0 sudo[330335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwasjcxjdqbnhlujyzppwgisjqqgvwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433288.4447324-643-147211837818060/AnsiballZ_file.py'
Oct 02 19:28:08 compute-0 sudo[330335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:09 compute-0 python3.9[330337]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:09 compute-0 sudo[330335]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:09 compute-0 ceph-mon[191910]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:09 compute-0 sudo[330487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruqxffjtkzfqpoxkyvijaenxumwuyffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433289.4083161-651-240271147959065/AnsiballZ_stat.py'
Oct 02 19:28:09 compute-0 sudo[330487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:10 compute-0 python3.9[330489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:10 compute-0 sudo[330487]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:10 compute-0 sudo[330565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vszlgjfxbyjklnqphdewazdrjrkzxxof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433289.4083161-651-240271147959065/AnsiballZ_file.py'
Oct 02 19:28:10 compute-0 sudo[330565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:10 compute-0 python3.9[330567]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:10 compute-0 sudo[330565]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:11 compute-0 sudo[330717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkszthjuizwctlqbliljpcwbqleghptu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433291.0302608-663-225431982884646/AnsiballZ_stat.py'
Oct 02 19:28:11 compute-0 sudo[330717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:11 compute-0 ceph-mon[191910]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:11 compute-0 python3.9[330719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:11 compute-0 sudo[330717]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:12 compute-0 sudo[330795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fntnslsdkpgxfgplqrhagvfajbbrrtqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433291.0302608-663-225431982884646/AnsiballZ_file.py'
Oct 02 19:28:12 compute-0 sudo[330795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 python3.9[330797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:28:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:28:12 compute-0 sudo[330795]: pam_unix(sudo:session): session closed for user root
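[editor's note] The pg_autoscaler targets above are reproducible from the logged inputs: each pg target equals used-ratio x bias x 300, where the 300 is inferred from the numbers themselves (consistent with, e.g., the default mon_target_pg_per_osd=100 across 3 OSDs, though neither value is logged here). A quick check:

    # (pool, used ratio, bias, logged pg target) copied from the lines above
    cases = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for pool, ratio, bias, logged in cases:
        assert abs(ratio * bias * 300 - logged) < 1e-12, pool
    print("all logged pg targets match ratio * bias * 300")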
Oct 02 19:28:13 compute-0 sudo[330947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxgxbevomilffxhrjllomvrzgwqijjiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433292.6602902-675-38548763608773/AnsiballZ_systemd.py'
Oct 02 19:28:13 compute-0 sudo[330947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:13 compute-0 python3.9[330949]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:28:13 compute-0 systemd[1]: Reloading.
Oct 02 19:28:13 compute-0 ceph-mon[191910]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:13 compute-0 systemd-rc-local-generator[330975]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:28:13 compute-0 systemd-sysv-generator[330979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:28:14 compute-0 sudo[330947]: pam_unix(sudo:session): session closed for user root
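[editor's note] The Reloading and generator messages above are the direct effect of the logged ansible.builtin.systemd call. A rough CLI equivalent (daemon_reload=True, enabled=True, state=started):

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown"],
                ["systemctl", "start", "edpm-container-shutdown"]):
        subprocess.run(cmd, check=True)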
Oct 02 19:28:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:28:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Cumulative writes: 3308 writes, 14K keys, 3308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1276 writes, 5793 keys, 1276 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                            Interval WAL: 1276 writes, 1276 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.5      0.17              0.05         7    0.024       0      0       0.0       0.0
                                              L6      1/0    6.65 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    111.3     91.9      0.44              0.21         6    0.073     24K   3200       0.0       0.0
                                             Sum      1/0    6.65 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     80.8     91.8      0.61              0.26        13    0.047     24K   3200       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     88.0     88.4      0.39              0.14         8    0.048     17K   2468       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    111.3     91.9      0.44              0.21         6    0.073     24K   3200       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.6      0.16              0.05         6    0.027       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.015, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 308.00 MB usage: 1.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(99,1.50 MB,0.485834%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,144.55 KB,0.0458309%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
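Annotation: the cumulative-write figures in the dump above can be extracted mechanically; the sketch below pulls the write count, key count, and ingest rate out of the "Cumulative writes" line with a regex. The line format is the one shown in this dump; other RocksDB versions may word it differently:

    import re

    line = ("Cumulative writes: 3308 writes, 14K keys, 3308 commit groups, "
            "1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s")

    m = re.search(r"Cumulative writes: (\d+) writes, (\S+) keys.*ingest: "
                  r"([\d.]+) GB, ([\d.]+) MB/s", line)
    if m:
        writes, keys, ingest_gb, rate_mbs = m.groups()
        print(writes, keys, ingest_gb, rate_mbs)    # 3308 14K 0.02 0.02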
Oct 02 19:28:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
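Annotation: a quick cross-check on the _set_new_cache_sizes line: kv_alloc is exactly the 308.00 MB block-cache capacity reported in the RocksDB dump above, consistent with the monitor autotuning its RocksDB cache from the same memory budget (a reading of the numbers, not a claim about the code path):

    kv_alloc = 322961408
    print(kv_alloc / 2**20)    # 308.0 MiB, matching "capacity: 308.00 MB" above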
Oct 02 19:28:15 compute-0 sudo[331137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvjkjhyknivdxvuvfuvzhahznpoflml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433294.8513994-683-63214612489271/AnsiballZ_stat.py'
Oct 02 19:28:15 compute-0 sudo[331137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:15 compute-0 python3.9[331139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:15 compute-0 sudo[331137]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:15 compute-0 ceph-mon[191910]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:16 compute-0 sudo[331215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqgacjnkafoypazcsghjjzsnrrkbyxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433294.8513994-683-63214612489271/AnsiballZ_file.py'
Oct 02 19:28:16 compute-0 sudo[331215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:16 compute-0 python3.9[331217]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:16 compute-0 sudo[331215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:17 compute-0 sudo[331367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqywpnonatrxanovqqedzyplrmwxsrse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433297.1798184-695-256677349800114/AnsiballZ_stat.py'
Oct 02 19:28:17 compute-0 sudo[331367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:17 compute-0 ceph-mon[191910]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:17 compute-0 python3.9[331369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:17 compute-0 sudo[331367]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:18 compute-0 sudo[331445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcjdssoktqpmytefwjsoddzctidbigvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433297.1798184-695-256677349800114/AnsiballZ_file.py'
Oct 02 19:28:18 compute-0 sudo[331445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:18 compute-0 python3.9[331447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:18 compute-0 sudo[331445]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:19 compute-0 sudo[331597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bttfyebhrvqanztvautuentzccylstpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433298.7259746-707-115949335928238/AnsiballZ_systemd.py'
Oct 02 19:28:19 compute-0 sudo[331597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:19 compute-0 python3.9[331599]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:28:19 compute-0 systemd[1]: Reloading.
Oct 02 19:28:19 compute-0 ceph-mon[191910]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:19 compute-0 systemd-sysv-generator[331625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:28:19 compute-0 systemd-rc-local-generator[331622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:28:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:20 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:28:20 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:28:20 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:28:20 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:28:20 compute-0 sudo[331597]: pam_unix(sudo:session): session closed for user root
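Annotation: the "Starting ... / Finished ..." pair with an immediate "Deactivated successfully" is the normal lifecycle of a oneshot unit that does not set RemainAfterExit (an inference from the messages, not from the unit file). A hedged sketch of verifying that outcome from Python, using the standard Result property:

    import subprocess

    def oneshot_result(unit: str) -> str:
        # "Result=success" is what systemd reports for a oneshot unit that
        # ran and exited cleanly, as netns-placeholder did above.
        out = subprocess.run(
            ["systemctl", "show", unit, "--property=Result"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out    # e.g. "Result=success"

    print(oneshot_result("netns-placeholder.service"))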
Oct 02 19:28:21 compute-0 sudo[331791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qelbuyfaeziplxkptngwtbhfhvyqhyem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433300.5163157-717-165948677434231/AnsiballZ_file.py'
Oct 02 19:28:21 compute-0 sudo[331791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:21 compute-0 python3.9[331793]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:21 compute-0 sudo[331791]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:21 compute-0 ceph-mon[191910]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:22 compute-0 sudo[331943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veramxkqrmaquxowgtqskfemsnjqzaae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433301.5119557-725-127661986167115/AnsiballZ_stat.py'
Oct 02 19:28:22 compute-0 sudo[331943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:22 compute-0 python3.9[331945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:22 compute-0 sudo[331943]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:22 compute-0 sudo[332066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpuhwsocsybaqxktvgfslzymnynnueuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433301.5119557-725-127661986167115/AnsiballZ_copy.py'
Oct 02 19:28:22 compute-0 sudo[332066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:23 compute-0 python3.9[332068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433301.5119557-725-127661986167115/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:23 compute-0 sudo[332066]: pam_unix(sudo:session): session closed for user root
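Annotation: the checksum logged by the copy task above is the plain hex SHA-1 of the file bytes (the preceding stat call was invoked with checksum_algorithm=sha1), so it can be reproduced with hashlib on the target host:

    import hashlib

    def ansible_style_checksum(path: str) -> str:
        # Ansible's stat/copy "checksum" is the hex SHA-1 of the file content
        # (checksum_algorithm=sha1 in the stat invocation above).
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Should print af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f on the deployed host.
    print(ansible_style_checksum("/var/lib/openstack/healthchecks/multipathd/healthcheck"))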
Oct 02 19:28:23 compute-0 ceph-mon[191910]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:24 compute-0 sudo[332218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdwbqgxvsgotyinmbhchmhqmxwvixnji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433303.6850107-742-198590420520448/AnsiballZ_file.py'
Oct 02 19:28:24 compute-0 sudo[332218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:24 compute-0 python3.9[332220]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:28:24 compute-0 sudo[332218]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.439 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.440 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.440 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.441 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec33b9670>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.454 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.455 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.456 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:28:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:28:24.457 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
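Annotation: the block above shows the agent's polling cycle in miniature: each pollster from the [pollsters] source is submitted to a ThreadPoolExecutor (sized at 1 worker here, hence the warning at 19:28:24.439), discovery runs per pollster, and a pollster is skipped when discovery returns no resources — as every one does on this host, which runs no guest instances. A minimal sketch of that loop with hypothetical stand-ins (the real classes live in ceilometer.polling.manager):

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Stand-in for the [local_instances] discovery method; this host
        # runs no guest instances in the log above, so the list is empty.
        return []

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.bytes"]

    # A single worker, as in the log, so the tasks run sequentially and the
    # agent warns that there are more pollsters than worker threads.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(run_pollster, n) for n in pollsters]
        for name, fut in zip(pollsters, futures):
            fut.result()
            print(f"Finished processing pollster [{name}]")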
Oct 02 19:28:25 compute-0 podman[332299]: 2025-10-02 19:28:25.707885166 +0000 UTC m=+0.118432087 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:28:25 compute-0 ceph-mon[191910]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:25 compute-0 podman[332298]: 2025-10-02 19:28:25.741343592 +0000 UTC m=+0.153108096 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Oct 02 19:28:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:26 compute-0 sudo[332412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bejrgdgtvmiuyndbbfftzadkpadcqdwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433304.7261841-750-171694513020705/AnsiballZ_stat.py'
Oct 02 19:28:26 compute-0 sudo[332412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:26 compute-0 python3.9[332414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:26 compute-0 sudo[332412]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:27 compute-0 sudo[332535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyctlnpzpiirwxdpoebsflrqkdptyeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433304.7261841-750-171694513020705/AnsiballZ_copy.py'
Oct 02 19:28:27 compute-0 sudo[332535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:27 compute-0 python3.9[332537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433304.7261841-750-171694513020705/.source.json _original_basename=.igg4xi8w follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:27 compute-0 sudo[332535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:27 compute-0 ceph-mon[191910]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:28 compute-0 sudo[332687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzhbcgzfuzmnmyhdthekghzehdxwbex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433307.9224188-765-63046276619810/AnsiballZ_file.py'
Oct 02 19:28:28 compute-0 sudo[332687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:28 compute-0 python3.9[332689]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:28 compute-0 sudo[332687]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:29 compute-0 sudo[332871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjshmqsuarylrkoqidxodvfsyfyalze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433309.0584621-773-198040972313885/AnsiballZ_stat.py'
Oct 02 19:28:29 compute-0 sudo[332871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:29 compute-0 podman[332813]: 2025-10-02 19:28:29.57812177 +0000 UTC m=+0.115029287 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Oct 02 19:28:29 compute-0 podman[332814]: 2025-10-02 19:28:29.60982905 +0000 UTC m=+0.145335210 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 02 19:28:29 compute-0 podman[157186]: time="2025-10-02T19:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:28:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38197 "" "Go-http-client/1.1"
Oct 02 19:28:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7701 "" "Go-http-client/1.1"
Oct 02 19:28:29 compute-0 ceph-mon[191910]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:29 compute-0 sudo[332871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:30 compute-0 sudo[333025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfuoznsxnjtlbpwlqndbkqwdzcmddcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433309.0584621-773-198040972313885/AnsiballZ_copy.py'
Oct 02 19:28:30 compute-0 sudo[333025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:30 compute-0 podman[332983]: 2025-10-02 19:28:30.393434004 +0000 UTC m=+0.128102634 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 19:28:30 compute-0 sudo[333025]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:31 compute-0 openstack_network_exporter[159337]: ERROR   19:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:28:31 compute-0 openstack_network_exporter[159337]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:31 compute-0 openstack_network_exporter[159337]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:31 compute-0 openstack_network_exporter[159337]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:28:31 compute-0 openstack_network_exporter[159337]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:28:31 compute-0 sudo[333178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhulehrnqqmxcmloqoxezwxaplwhpvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433311.2579536-790-225689769418849/AnsiballZ_container_config_data.py'
Oct 02 19:28:31 compute-0 sudo[333178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:31 compute-0 ceph-mon[191910]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:32 compute-0 python3.9[333180]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 02 19:28:32 compute-0 sudo[333178]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:28:32.272 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:28:32.273 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:28:32.273 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:28:32 compute-0 sudo[333330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgfrlzclesmvxryfzkvjgoqkahtoxfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433312.3429222-799-236768100428427/AnsiballZ_container_config_hash.py'
Oct 02 19:28:32 compute-0 sudo[333330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:33 compute-0 python3.9[333332]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:28:33 compute-0 sudo[333330]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:28:33 compute-0 ceph-mon[191910]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:33 compute-0 sudo[333482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kztasxhxxhbdayioefkaqutquhntwjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433313.4355094-808-144381281709036/AnsiballZ_podman_container_info.py'
Oct 02 19:28:33 compute-0 sudo[333482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:34 compute-0 python3.9[333484]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:28:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:34 compute-0 sudo[333482]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:34 compute-0 podman[333520]: 2025-10-02 19:28:34.636580006 +0000 UTC m=+0.073801326 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001)
Oct 02 19:28:34 compute-0 podman[333529]: 2025-10-02 19:28:34.644725932 +0000 UTC m=+0.074757541 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:28:34 compute-0 podman[333527]: 2025-10-02 19:28:34.68847633 +0000 UTC m=+0.111855993 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:28:34 compute-0 ceph-mon[191910]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:36 compute-0 sudo[333738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opwlwytyivjxjklyvsrirnjoetdhqeac ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433315.5178657-821-181658619157974/AnsiballZ_edpm_container_manage.py'
Oct 02 19:28:36 compute-0 sudo[333738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:36 compute-0 podman[333695]: 2025-10-02 19:28:36.134004896 +0000 UTC m=+0.162390932 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:28:36 compute-0 python3[333743]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:28:37 compute-0 ceph-mon[191910]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:37 compute-0 sudo[333773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:37 compute-0 sudo[333773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:37 compute-0 sudo[333773]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:37 compute-0 sudo[333798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:28:37 compute-0 sudo[333798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:37 compute-0 sudo[333798]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:37 compute-0 sudo[333823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:37 compute-0 sudo[333823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:37 compute-0 sudo[333823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:37 compute-0 sudo[333863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:28:37 compute-0 sudo[333863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:38 compute-0 podman[333756]: 2025-10-02 19:28:38.016213657 +0000 UTC m=+1.507772226 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 19:28:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:38 compute-0 podman[333925]: 2025-10-02 19:28:38.271970951 +0000 UTC m=+0.100144704 container create 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:28:38 compute-0 podman[333925]: 2025-10-02 19:28:38.218950276 +0000 UTC m=+0.047124119 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 19:28:38 compute-0 python3[333743]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 19:28:38 compute-0 sudo[333863]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 53f6b726-bb43-4662-a393-437ad919be80 does not exist
Oct 02 19:28:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev add10314-5444-4433-9a0f-7ee20f470d77 does not exist
Oct 02 19:28:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a1f8808b-b309-4f59-be1d-add28ef7673e does not exist
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:28:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:28:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:28:38 compute-0 sudo[333964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:38 compute-0 sudo[333964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:38 compute-0 sudo[333964]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:38 compute-0 sudo[333738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:38 compute-0 sudo[334001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:28:38 compute-0 sudo[334001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:38 compute-0 sudo[334001]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:38 compute-0 sudo[334050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:38 compute-0 sudo[334050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:38 compute-0 sudo[334050]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:38 compute-0 sudo[334076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:28:38 compute-0 sudo[334076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:39 compute-0 ceph-mon[191910]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:28:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:28:39 compute-0 sudo[334260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfhygwdwyxfiwesappfgnucrdryzkaag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433318.7196417-829-28656521821756/AnsiballZ_stat.py'
Oct 02 19:28:39 compute-0 sudo[334260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.302459384 +0000 UTC m=+0.085827684 container create 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.259689191 +0000 UTC m=+0.043057471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:39 compute-0 systemd[1]: Started libpod-conmon-2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0.scope.
Oct 02 19:28:39 compute-0 python3.9[334264]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:28:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.464634399 +0000 UTC m=+0.248002749 container init 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.482442041 +0000 UTC m=+0.265810341 container start 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:28:39 compute-0 sudo[334260]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.490019652 +0000 UTC m=+0.273387942 container attach 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:28:39 compute-0 wonderful_matsumoto[334282]: 167 167
Oct 02 19:28:39 compute-0 systemd[1]: libpod-2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0.scope: Deactivated successfully.
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.49409996 +0000 UTC m=+0.277468220 container died 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f3d4ed4060774856425436b948a9b65110daf45fd5998a254a464290630dc2-merged.mount: Deactivated successfully.
Oct 02 19:28:39 compute-0 podman[334266]: 2025-10-02 19:28:39.568265864 +0000 UTC m=+0.351634124 container remove 2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:28:39 compute-0 systemd[1]: libpod-conmon-2ff2f3788cfe5de3a941084b74a32682525c30db2b8b6e83826d230e7be9aaa0.scope: Deactivated successfully.
Oct 02 19:28:39 compute-0 podman[334307]: 2025-10-02 19:28:39.77653284 +0000 UTC m=+0.080654017 container create 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:28:39 compute-0 podman[334307]: 2025-10-02 19:28:39.733606273 +0000 UTC m=+0.037727510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:39 compute-0 systemd[1]: Started libpod-conmon-6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1.scope.
Oct 02 19:28:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:39 compute-0 podman[334307]: 2025-10-02 19:28:39.947307323 +0000 UTC m=+0.251428480 container init 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:28:39 compute-0 podman[334307]: 2025-10-02 19:28:39.959060074 +0000 UTC m=+0.263181221 container start 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:28:39 compute-0 podman[334307]: 2025-10-02 19:28:39.962859715 +0000 UTC m=+0.266980892 container attach 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:28:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:40 compute-0 sudo[334483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkrujyvyxjqxxfklblbclxktluhzhlbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433320.2609448-838-80992856420302/AnsiballZ_file.py'
Oct 02 19:28:40 compute-0 sudo[334483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:40 compute-0 python3.9[334488]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:40 compute-0 sudo[334483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:41 compute-0 amazing_shtern[334324]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:28:41 compute-0 amazing_shtern[334324]: --> relative data size: 1.0
Oct 02 19:28:41 compute-0 amazing_shtern[334324]: --> All data devices are unavailable
Oct 02 19:28:41 compute-0 systemd[1]: libpod-6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1.scope: Deactivated successfully.
Oct 02 19:28:41 compute-0 systemd[1]: libpod-6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1.scope: Consumed 1.099s CPU time.
Oct 02 19:28:41 compute-0 podman[334307]: 2025-10-02 19:28:41.146260997 +0000 UTC m=+1.450382174 container died 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:28:41 compute-0 ceph-mon[191910]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-df29b0f9cbb2b3349d1f7764bbf60ec5788c9adbc8fa832ef411b467f4b4350a-merged.mount: Deactivated successfully.
Oct 02 19:28:41 compute-0 podman[334307]: 2025-10-02 19:28:41.24414315 +0000 UTC m=+1.548264287 container remove 6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:28:41 compute-0 systemd[1]: libpod-conmon-6648759304d415b1baef606d6714c5d896af237635a1b0f7b723cf58512a0cb1.scope: Deactivated successfully.
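
The "-->" lines from amazing_shtern read like ceph-volume's batch dry-run report: three LVM data devices passed in, relative data size 1.0, and all of them filtered as unavailable because they already carry the OSDs inventoried later in this log, so nothing new is created. A sketch of the underlying call, assuming it runs inside the ceph container (cephadm adds the wrapping visible in the sudo lines):

    import subprocess

    # Dry-run report over the three pre-created LVs (sketch; LV paths taken
    # from the lvm list output further below):
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False,  # the report itself is informational
    )
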
Oct 02 19:28:41 compute-0 sudo[334076]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:41 compute-0 sudo[334606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baphiztlawejyogkgcroghcssfvjxiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433320.2609448-838-80992856420302/AnsiballZ_stat.py'
Oct 02 19:28:41 compute-0 sudo[334570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:41 compute-0 sudo[334606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:41 compute-0 sudo[334570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:41 compute-0 sudo[334570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:41 compute-0 sudo[334616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:28:41 compute-0 sudo[334616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:41 compute-0 sudo[334616]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:41 compute-0 python3.9[334614]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:28:41 compute-0 sudo[334606]: pam_unix(sudo:session): session closed for user root
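
The ansible-stat call above only gathers metadata for the healthcheck timer unit (existence, sha1 checksum, mime type); nothing is modified. A tiny subset of that behaviour for a local path (hypothetical helper):

    import hashlib
    from pathlib import Path

    def stat_like(path: str) -> dict:
        """Existence plus sha1, matching get_checksum=True /
        checksum_algorithm=sha1 in the invocation above (sketch)."""
        p = Path(path)
        if not p.exists():
            return {"exists": False}
        return {
            "exists": True,
            "checksum": hashlib.sha1(p.read_bytes()).hexdigest(),
            "mode": oct(p.stat().st_mode & 0o7777),
        }
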
Oct 02 19:28:41 compute-0 sudo[334641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:41 compute-0 sudo[334641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:41 compute-0 sudo[334641]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:41 compute-0 sudo[334673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:28:41 compute-0 sudo[334673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
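
The command above is how the cephadm mgr module drives ceph-volume over SSH: a digest-named copy of cephadm under /var/lib/ceph/<fsid>/ launches the pinned ceph image and runs `ceph-volume lvm list --format json` inside it. Reproducing the call by hand (arguments copied from the log; real cephadm adds bind mounts and environment handling):

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
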
Oct 02 19:28:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.251640944 +0000 UTC m=+0.068967278 container create e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.222341718 +0000 UTC m=+0.039668112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:42 compute-0 systemd[1]: Started libpod-conmon-e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3.scope.
Oct 02 19:28:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.395964336 +0000 UTC m=+0.213290740 container init e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.419579512 +0000 UTC m=+0.236905866 container start e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.426166516 +0000 UTC m=+0.243492820 container attach e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:28:42 compute-0 compassionate_joliot[334844]: 167 167
Oct 02 19:28:42 compute-0 systemd[1]: libpod-e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3.scope: Deactivated successfully.
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.430915932 +0000 UTC m=+0.248242246 container died e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a9c26acac64d94e6611262cb60faf0f8322962974df9eb7d6fbdb4397ab700-merged.mount: Deactivated successfully.
Oct 02 19:28:42 compute-0 podman[334805]: 2025-10-02 19:28:42.481875892 +0000 UTC m=+0.299202206 container remove e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:28:42 compute-0 systemd[1]: libpod-conmon-e37862d06b292acfdf7bee784dede560beea1a33f6c1bcb8478f8fca1fc0e9b3.scope: Deactivated successfully.
Oct 02 19:28:42 compute-0 sudo[334912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsagbqggdiafdfwmaabukxtiibnykzsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433321.705104-838-175654173109641/AnsiballZ_copy.py'
Oct 02 19:28:42 compute-0 sudo[334912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:42 compute-0 podman[334920]: 2025-10-02 19:28:42.714793541 +0000 UTC m=+0.074984327 container create 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:28:42 compute-0 podman[334920]: 2025-10-02 19:28:42.680353148 +0000 UTC m=+0.040543974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:42 compute-0 python3.9[334914]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433321.705104-838-175654173109641/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:42 compute-0 systemd[1]: Started libpod-conmon-7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01.scope.
Oct 02 19:28:42 compute-0 sudo[334912]: pam_unix(sudo:session): session closed for user root
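
ansible-copy above installs the rendered edpm_multipathd.service unit with mode 0644. With unsafe_writes=False the module writes atomically: stage a temp file in the destination directory, fsync, then rename over the target. A minimal sketch of that pattern (hypothetical helper, not the module source):

    import os, shutil, tempfile

    def atomic_install(src: str, dest: str, mode: int = 0o644) -> None:
        """Stage next to dest, fsync, rename into place (sketch of the
        safe-write behaviour ansible-copy uses by default)."""
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        try:
            with os.fdopen(fd, "wb") as out, open(src, "rb") as inp:
                shutil.copyfileobj(inp, out)
                out.flush()
                os.fsync(out.fileno())
            os.chmod(tmp, mode)
            os.rename(tmp, dest)  # atomic on the same filesystem
        except BaseException:
            os.unlink(tmp)
            raise
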
Oct 02 19:28:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b477166caf8b83e63b5d7fa49cff4f0523fba11229c442292c509ac8cf57899f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b477166caf8b83e63b5d7fa49cff4f0523fba11229c442292c509ac8cf57899f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b477166caf8b83e63b5d7fa49cff4f0523fba11229c442292c509ac8cf57899f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b477166caf8b83e63b5d7fa49cff4f0523fba11229c442292c509ac8cf57899f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:42 compute-0 podman[334920]: 2025-10-02 19:28:42.879530764 +0000 UTC m=+0.239721560 container init 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:28:42 compute-0 podman[334920]: 2025-10-02 19:28:42.900584681 +0000 UTC m=+0.260775467 container start 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:28:42 compute-0 podman[334920]: 2025-10-02 19:28:42.906890718 +0000 UTC m=+0.267081504 container attach 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 02 19:28:43 compute-0 ceph-mon[191910]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:43 compute-0 sudo[335014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfkfrpaqyquxwrhyqdhosmazdnngtcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433321.705104-838-175654173109641/AnsiballZ_systemd.py'
Oct 02 19:28:43 compute-0 sudo[335014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:43 compute-0 python3.9[335016]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:28:43 compute-0 systemd[1]: Reloading.
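
daemon_reload=True maps to a plain `systemctl daemon-reload`; the generator messages a few lines below (rc-local, sysv) are systemd re-running its unit generators as part of that reload, which is what makes the just-copied unit file visible. The equivalent call:

    import subprocess

    # What ansible-systemd daemon_reload=True boils down to:
    subprocess.run(["systemctl", "daemon-reload"], check=True)
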
Oct 02 19:28:43 compute-0 objective_leavitt[334936]: {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     "0": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "devices": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "/dev/loop3"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             ],
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_name": "ceph_lv0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_size": "21470642176",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "name": "ceph_lv0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "tags": {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_name": "ceph",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.crush_device_class": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.encrypted": "0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_id": "0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.vdo": "0"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             },
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "vg_name": "ceph_vg0"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         }
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     ],
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     "1": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "devices": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "/dev/loop4"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             ],
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_name": "ceph_lv1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_size": "21470642176",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "name": "ceph_lv1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "tags": {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_name": "ceph",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.crush_device_class": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.encrypted": "0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_id": "1",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.vdo": "0"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             },
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "vg_name": "ceph_vg1"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         }
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     ],
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     "2": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "devices": [
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "/dev/loop5"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             ],
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_name": "ceph_lv2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_size": "21470642176",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "name": "ceph_lv2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "tags": {
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.cluster_name": "ceph",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.crush_device_class": "",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.encrypted": "0",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osd_id": "2",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:                 "ceph.vdo": "0"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             },
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "type": "block",
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:             "vg_name": "ceph_vg2"
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:         }
Oct 02 19:28:43 compute-0 objective_leavitt[334936]:     ]
Oct 02 19:28:43 compute-0 objective_leavitt[334936]: }
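
The JSON block emitted by objective_leavitt is the full `ceph-volume lvm list` inventory: one key per OSD id, each carrying the LV path, its backing device, and the ceph.* LV tags. A short consumer that maps OSD ids to devices, assuming the blob was saved to a file (the filename lvm_list.json is an assumption):

    import json

    with open("lvm_list.json") as f:          # JSON captured from the log above
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        lv = lvs[0]
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"encrypted={tags['ceph.encrypted']})")

    # Prints: osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...), and so on.
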
Oct 02 19:28:43 compute-0 podman[334920]: 2025-10-02 19:28:43.732533155 +0000 UTC m=+1.092723941 container died 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:28:43 compute-0 systemd-rc-local-generator[335042]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:28:43 compute-0 systemd-sysv-generator[335048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Oct 02 19:28:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:44 compute-0 systemd[1]: libpod-7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01.scope: Deactivated successfully.
Oct 02 19:28:44 compute-0 sudo[335014]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b477166caf8b83e63b5d7fa49cff4f0523fba11229c442292c509ac8cf57899f-merged.mount: Deactivated successfully.
Oct 02 19:28:44 compute-0 podman[334920]: 2025-10-02 19:28:44.165900043 +0000 UTC m=+1.526090799 container remove 7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:28:44 compute-0 systemd[1]: libpod-conmon-7570dc7b873756779602f98e9d30fcafd53d320116af1c7a4e4e61e261929f01.scope: Deactivated successfully.
Oct 02 19:28:44 compute-0 sudo[334673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:44 compute-0 sudo[335070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:44 compute-0 sudo[335070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:44 compute-0 sudo[335070]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:44 compute-0 sudo[335114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:28:44 compute-0 sudo[335114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:44 compute-0 sudo[335114]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:44 compute-0 sudo[335163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:44 compute-0 sudo[335163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:44 compute-0 sudo[335163]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:44 compute-0 sudo[335213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwcpsyhkekmdkfptddxzknwxsjqisiob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433321.705104-838-175654173109641/AnsiballZ_systemd.py'
Oct 02 19:28:44 compute-0 sudo[335213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:44 compute-0 sudo[335217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:28:44 compute-0 sudo[335217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:44 compute-0 python3.9[335216]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:28:44 compute-0 systemd[1]: Reloading.
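
state=restarted with enabled=True amounts to two systemctl operations; the enable step (or the module's own follow-up reload) is what produces the second "Reloading." seen here. Equivalent calls (sketch):

    import subprocess

    unit = "edpm_multipathd.service"
    subprocess.run(["systemctl", "enable", unit], check=True)   # writes symlinks
    subprocess.run(["systemctl", "restart", unit], check=True)  # starts the container unit
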
Oct 02 19:28:45 compute-0 systemd-rc-local-generator[335321]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:28:45 compute-0 systemd-sysv-generator[335325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.164798029 +0000 UTC m=+0.095979483 container create 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:28:45 compute-0 ceph-mon[191910]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.11989174 +0000 UTC m=+0.051073244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:45 compute-0 systemd[1]: Started libpod-conmon-7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78.scope.
Oct 02 19:28:45 compute-0 systemd[1]: Starting multipathd container...
Oct 02 19:28:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.496269169 +0000 UTC m=+0.427450673 container init 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.508991336 +0000 UTC m=+0.440172790 container start 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:28:45 compute-0 youthful_newton[335335]: 167 167
Oct 02 19:28:45 compute-0 systemd[1]: libpod-7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78.scope: Deactivated successfully.
Oct 02 19:28:45 compute-0 conmon[335335]: conmon 7da8cd7dcade6257e16e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78.scope/container/memory.events
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.527878676 +0000 UTC m=+0.459060150 container attach 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.52954465 +0000 UTC m=+0.460726124 container died 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 02 19:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-701fa1548a0d74974475c96385023368520fe29de45034aa83435e0a96882ad1-merged.mount: Deactivated successfully.
Oct 02 19:28:45 compute-0 podman[335283]: 2025-10-02 19:28:45.596631257 +0000 UTC m=+0.527812731 container remove 7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:28:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:45 compute-0 systemd[1]: libpod-conmon-7da8cd7dcade6257e16ef69788d7b3ea6db8f5b9d24b291a5ebb4375b7aa5a78.scope: Deactivated successfully.
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee6e51306b06211860ecf6d13af5954d32dc2b58bab0e6d666c233623f48dd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee6e51306b06211860ecf6d13af5954d32dc2b58bab0e6d666c233623f48dd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.
Oct 02 19:28:45 compute-0 podman[335334]: 2025-10-02 19:28:45.697609161 +0000 UTC m=+0.240692936 container init 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:28:45 compute-0 multipathd[335363]: + sudo -E kolla_set_configs
Oct 02 19:28:45 compute-0 sudo[335372]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:28:45 compute-0 sudo[335372]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:28:45 compute-0 sudo[335372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:28:45 compute-0 podman[335334]: 2025-10-02 19:28:45.741954446 +0000 UTC m=+0.285038211 container start 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd)
Oct 02 19:28:45 compute-0 podman[335334]: multipathd
Oct 02 19:28:45 compute-0 systemd[1]: Started multipathd container.
Oct 02 19:28:45 compute-0 multipathd[335363]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:28:45 compute-0 multipathd[335363]: INFO:__main__:Validating config file
Oct 02 19:28:45 compute-0 multipathd[335363]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:28:45 compute-0 multipathd[335363]: INFO:__main__:Writing out command to execute
Oct 02 19:28:45 compute-0 sudo[335372]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:45 compute-0 multipathd[335363]: ++ cat /run_command
Oct 02 19:28:45 compute-0 multipathd[335363]: + CMD='/usr/sbin/multipathd -d'
Oct 02 19:28:45 compute-0 multipathd[335363]: + ARGS=
Oct 02 19:28:45 compute-0 multipathd[335363]: + sudo kolla_copy_cacerts
Oct 02 19:28:45 compute-0 sudo[335213]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:45 compute-0 sudo[335396]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:28:45 compute-0 sudo[335396]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:28:45 compute-0 sudo[335396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:28:45 compute-0 sudo[335396]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:45 compute-0 multipathd[335363]: Running command: '/usr/sbin/multipathd -d'
Oct 02 19:28:45 compute-0 multipathd[335363]: + [[ ! -n '' ]]
Oct 02 19:28:45 compute-0 multipathd[335363]: + . kolla_extend_start
Oct 02 19:28:45 compute-0 multipathd[335363]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 19:28:45 compute-0 multipathd[335363]: + umask 0022
Oct 02 19:28:45 compute-0 multipathd[335363]: + exec /usr/sbin/multipathd -d
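
The multipathd[335363] trace is the standard Kolla entrypoint: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, kolla_copy_cacerts installs the CA bundle, then the shell execs the command read from /run_command. The shape of such a config file, shown as a Python dict; only "command" is confirmed by the log, the config_files entry is an illustrative placeholder:

    KOLLA_CONFIG = {
        "command": "/usr/sbin/multipathd -d",   # logged via /run_command above
        "config_files": [
            {   # hypothetical entry; the real file lists multipath sources
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": True,
                "preserve_properties": True,
            }
        ],
    }
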
Oct 02 19:28:45 compute-0 podman[335386]: 2025-10-02 19:28:45.856978152 +0000 UTC m=+0.063549674 container create c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:28:45 compute-0 multipathd[335363]: 4497.485364 | --------start up--------
Oct 02 19:28:45 compute-0 multipathd[335363]: 4497.485394 | read /etc/multipath.conf
Oct 02 19:28:45 compute-0 podman[335373]: 2025-10-02 19:28:45.862579161 +0000 UTC m=+0.108519756 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:28:45 compute-0 multipathd[335363]: 4497.495799 | path checkers start up
Oct 02 19:28:45 compute-0 systemd[1]: 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-3cadecd7874bfdff.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:28:45 compute-0 systemd[1]: 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-3cadecd7874bfdff.service: Failed with result 'exit-code'.
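
The failing 21d950...-3cadecd7874bfdff.service is the transient unit systemd spawns for `podman healthcheck run`; it exits 1 because the container's /openstack/healthcheck probe is not passing yet while the container is still in the `starting` state reported just above. The check can be replayed by hand (sketch):

    import subprocess

    cid = "21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    # rc == 0 -> healthy; rc == 1 -> unhealthy/still failing, as journaled here
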
Oct 02 19:28:45 compute-0 systemd[1]: Started libpod-conmon-c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76.scope.
Oct 02 19:28:45 compute-0 podman[335386]: 2025-10-02 19:28:45.83424747 +0000 UTC m=+0.040818992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:28:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8214e320296b8d6db4d5dcfbe73f75d0d834da4b3825db21001a71920b83786d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8214e320296b8d6db4d5dcfbe73f75d0d834da4b3825db21001a71920b83786d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8214e320296b8d6db4d5dcfbe73f75d0d834da4b3825db21001a71920b83786d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8214e320296b8d6db4d5dcfbe73f75d0d834da4b3825db21001a71920b83786d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:45 compute-0 podman[335386]: 2025-10-02 19:28:45.989207174 +0000 UTC m=+0.195778706 container init c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:28:46 compute-0 podman[335386]: 2025-10-02 19:28:46.006112642 +0000 UTC m=+0.212684174 container start c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:28:46 compute-0 podman[335386]: 2025-10-02 19:28:46.011283259 +0000 UTC m=+0.217854801 container attach c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:28:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:46 compute-0 python3.9[335580]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:28:47 compute-0 epic_herschel[335446]: {
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_id": 1,
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "type": "bluestore"
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     },
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_id": 2,
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "type": "bluestore"
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     },
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_id": 0,
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:28:47 compute-0 epic_herschel[335446]:         "type": "bluestore"
Oct 02 19:28:47 compute-0 epic_herschel[335446]:     }
Oct 02 19:28:47 compute-0 epic_herschel[335446]: }
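
epic_herschel prints the companion `ceph-volume raw list` view: keyed by OSD uuid rather than id, and reporting the device-mapper path of each LV. Cross-checking it against the lvm list output earlier (sketch; the filenames are assumptions, as before):

    import json

    raw = json.load(open("raw_list.json"))    # epic_herschel output above
    lvm = json.load(open("lvm_list.json"))    # objective_leavitt output earlier

    for osd_uuid, osd in raw.items():
        tags = lvm[str(osd["osd_id"])][0]["tags"]
        assert tags["ceph.osd_fsid"] == osd_uuid          # same OSD, both views
        print(f"osd.{osd['osd_id']}: {osd['device']} "
              f"({osd['type']}, cluster {osd['ceph_fsid']})")
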
Oct 02 19:28:47 compute-0 ceph-mon[191910]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:47 compute-0 systemd[1]: libpod-c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76.scope: Deactivated successfully.
Oct 02 19:28:47 compute-0 systemd[1]: libpod-c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76.scope: Consumed 1.236s CPU time.
Oct 02 19:28:47 compute-0 podman[335386]: 2025-10-02 19:28:47.244924752 +0000 UTC m=+1.451496274 container died c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8214e320296b8d6db4d5dcfbe73f75d0d834da4b3825db21001a71920b83786d-merged.mount: Deactivated successfully.
Oct 02 19:28:47 compute-0 podman[335386]: 2025-10-02 19:28:47.339455466 +0000 UTC m=+1.546026988 container remove c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:28:47 compute-0 systemd[1]: libpod-conmon-c22ec9deb26f9e450e93069228dbe5f382fd9b1af967d9016c05fdd9458e7f76.scope: Deactivated successfully.
Oct 02 19:28:47 compute-0 sudo[335217]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:28:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:28:47 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:47 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 98723690-ea0e-4f33-8d94-d3137feffdd4 does not exist
Oct 02 19:28:47 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 502d2a6b-5729-4442-9aea-24ee9562642b does not exist
Oct 02 19:28:47 compute-0 sudo[335699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:28:47 compute-0 sudo[335699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:47 compute-0 sudo[335699]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:47 compute-0 sudo[335747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:28:47 compute-0 sudo[335747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:28:47 compute-0 sudo[335747]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:47 compute-0 sudo[335822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edujyeycrtyawtvhgxxputdxkltisnwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433327.3332026-874-165394956241802/AnsiballZ_command.py'
Oct 02 19:28:47 compute-0 sudo[335822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:48 compute-0 python3.9[335824]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
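[annotation] The command module call above is how the role discovers which containers bind-mount the multipath config before deciding what to restart; run directly it would be:

    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'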
Oct 02 19:28:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:48 compute-0 sudo[335822]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:28:49 compute-0 sudo[335987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzjhhorvxixysgnggzjrshkoiwnoenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433328.5012147-882-53839632585572/AnsiballZ_systemd.py'
Oct 02 19:28:49 compute-0 sudo[335987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:49 compute-0 python3.9[335989]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:28:49 compute-0 systemd[1]: Stopping multipathd container...
Oct 02 19:28:49 compute-0 ceph-mon[191910]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:49 compute-0 multipathd[335363]: 4501.133393 | exit (signal)
Oct 02 19:28:49 compute-0 multipathd[335363]: 4501.134545 | --------shut down-------
Oct 02 19:28:49 compute-0 systemd[1]: libpod-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope: Deactivated successfully.
Oct 02 19:28:49 compute-0 podman[335993]: 2025-10-02 19:28:49.550528507 +0000 UTC m=+0.119334711 container died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 19:28:49 compute-0 systemd[1]: 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-3cadecd7874bfdff.timer: Deactivated successfully.
Oct 02 19:28:49 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.
Oct 02 19:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-userdata-shm.mount: Deactivated successfully.
Oct 02 19:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-97ee6e51306b06211860ecf6d13af5954d32dc2b58bab0e6d666c233623f48dd-merged.mount: Deactivated successfully.
Oct 02 19:28:49 compute-0 podman[335993]: 2025-10-02 19:28:49.614712597 +0000 UTC m=+0.183518761 container cleanup 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:28:49 compute-0 podman[335993]: multipathd
Oct 02 19:28:49 compute-0 podman[336022]: multipathd
Oct 02 19:28:49 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 02 19:28:49 compute-0 systemd[1]: Stopped multipathd container.
Oct 02 19:28:49 compute-0 systemd[1]: Starting multipathd container...
Oct 02 19:28:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee6e51306b06211860ecf6d13af5954d32dc2b58bab0e6d666c233623f48dd/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee6e51306b06211860ecf6d13af5954d32dc2b58bab0e6d666c233623f48dd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:28:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.
Oct 02 19:28:49 compute-0 podman[336035]: 2025-10-02 19:28:49.947701327 +0000 UTC m=+0.187956230 container init 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:28:49 compute-0 multipathd[336051]: + sudo -E kolla_set_configs
Oct 02 19:28:49 compute-0 sudo[336057]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:28:49 compute-0 sudo[336057]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:28:49 compute-0 sudo[336057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:28:49 compute-0 podman[336035]: 2025-10-02 19:28:49.998727808 +0000 UTC m=+0.238982691 container start 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:28:50 compute-0 podman[336035]: multipathd
Oct 02 19:28:50 compute-0 systemd[1]: Started multipathd container.
Oct 02 19:28:50 compute-0 sudo[335987]: pam_unix(sudo:session): session closed for user root
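[annotation] The ansible.builtin.systemd task (state=restarted on edpm_multipathd) is equivalent to:

    sudo systemctl restart edpm_multipathd.service

which is exactly the Stopping/Stopped/Starting/Started sequence systemd logs above: the same container (21d950fe...) is stopped, cleaned up, and started again rather than recreated.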
Oct 02 19:28:50 compute-0 multipathd[336051]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:28:50 compute-0 multipathd[336051]: INFO:__main__:Validating config file
Oct 02 19:28:50 compute-0 multipathd[336051]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:28:50 compute-0 multipathd[336051]: INFO:__main__:Writing out command to execute
Oct 02 19:28:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:50 compute-0 sudo[336057]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:50 compute-0 multipathd[336051]: ++ cat /run_command
Oct 02 19:28:50 compute-0 multipathd[336051]: + CMD='/usr/sbin/multipathd -d'
Oct 02 19:28:50 compute-0 multipathd[336051]: + ARGS=
Oct 02 19:28:50 compute-0 multipathd[336051]: + sudo kolla_copy_cacerts
Oct 02 19:28:50 compute-0 sudo[336073]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:28:50 compute-0 sudo[336073]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:28:50 compute-0 sudo[336073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:28:50 compute-0 sudo[336073]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:50 compute-0 multipathd[336051]: + [[ ! -n '' ]]
Oct 02 19:28:50 compute-0 multipathd[336051]: + . kolla_extend_start
Oct 02 19:28:50 compute-0 multipathd[336051]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 19:28:50 compute-0 multipathd[336051]: Running command: '/usr/sbin/multipathd -d'
Oct 02 19:28:50 compute-0 multipathd[336051]: + umask 0022
Oct 02 19:28:50 compute-0 multipathd[336051]: + exec /usr/sbin/multipathd -d
Oct 02 19:28:50 compute-0 multipathd[336051]: 4501.762260 | --------start up--------
Oct 02 19:28:50 compute-0 multipathd[336051]: 4501.762637 | read /etc/multipath.conf
Oct 02 19:28:50 compute-0 podman[336058]: 2025-10-02 19:28:50.148924766 +0000 UTC m=+0.132095109 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 02 19:28:50 compute-0 multipathd[336051]: 4501.776438 | path checkers start up
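[annotation] The multipathd[336051] '+' lines above are the kolla container entrypoint running under shell xtrace. Condensed, the sequence it traces is (a sketch reconstructed from those lines, not the actual script):

    sudo -E kolla_set_configs      # copy the files listed in /var/lib/kolla/config_files/config.json into place
    CMD="$(cat /run_command)"      # here: /usr/sbin/multipathd -d
    sudo kolla_copy_cacerts        # refresh the container's CA trust bundle
    exec $CMD

The pam_systemd 'Failed to connect to system bus' warnings from those sudo calls are expected: there is no D-Bus system bus inside the container.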
Oct 02 19:28:50 compute-0 systemd[1]: 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-67faa82babc1f656.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:28:50 compute-0 systemd[1]: 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd-67faa82babc1f656.service: Failed with result 'exit-code'.
Oct 02 19:28:50 compute-0 sudo[336237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwanlbxleueeygcsgujqublfgcxynbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433330.310844-890-275843245634387/AnsiballZ_file.py'
Oct 02 19:28:50 compute-0 sudo[336237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:51 compute-0 python3.9[336239]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:51 compute-0 sudo[336237]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:51 compute-0 ceph-mon[191910]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:52 compute-0 sudo[336389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dylylvnjkjrmrfxupbpydaxlcuauhxil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433331.9859219-902-99163689813773/AnsiballZ_file.py'
Oct 02 19:28:52 compute-0 sudo[336389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:52 compute-0 python3.9[336391]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:28:52 compute-0 sudo[336389]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:53 compute-0 sudo[336541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotxnoctnrtvykubdefsuptyibhjchxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433333.0057833-910-86784193469306/AnsiballZ_modprobe.py'
Oct 02 19:28:53 compute-0 ceph-mon[191910]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:53 compute-0 sudo[336541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:53 compute-0 python3.9[336543]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 02 19:28:53 compute-0 kernel: Key type psk registered
Oct 02 19:28:53 compute-0 sudo[336541]: pam_unix(sudo:session): session closed for user root
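[annotation] Equivalent of the community.general.modprobe task above:

    sudo modprobe nvme-fabrics

The kernel's 'Key type psk registered' line is a side effect of the module load, plausibly from the TLS/PSK support pulled in with the NVMe-over-Fabrics stack (an inference; the log itself does not say which dependency registers it).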
Oct 02 19:28:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:54 compute-0 sudo[336705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibkdmvkpzomjedkkialudpkxwxndncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433334.078517-918-73993231882832/AnsiballZ_stat.py'
Oct 02 19:28:54 compute-0 sudo[336705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:54 compute-0 python3.9[336707]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:28:54 compute-0 sudo[336705]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:55 compute-0 ceph-mon[191910]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:55 compute-0 sudo[336828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbarqqvmjejhyjztibyreygmytywyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433334.078517-918-73993231882832/AnsiballZ_copy.py'
Oct 02 19:28:55 compute-0 sudo[336828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:55 compute-0 python3.9[336830]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759433334.078517-918-73993231882832/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:55 compute-0 sudo[336828]: pam_unix(sudo:session): session closed for user root
Oct 02 19:28:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:56 compute-0 sudo[337010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgbrlspcjraqdxhcgwejxlnrqnfdxfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433336.111085-934-170982892808175/AnsiballZ_lineinfile.py'
Oct 02 19:28:56 compute-0 sudo[337010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:56 compute-0 podman[336954]: 2025-10-02 19:28:56.667777 +0000 UTC m=+0.125258178 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:28:56 compute-0 podman[336955]: 2025-10-02 19:28:56.684728729 +0000 UTC m=+0.138407856 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:28:56 compute-0 python3.9[337022]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:28:56 compute-0 sudo[337010]: pam_unix(sudo:session): session closed for user root
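[annotation] Between them, the copy and lineinfile tasks above persist the module load in both places the role manages. The file contents are inferred from the task parameters (the copy source is a module-load.conf.j2 template, so the rendered file presumably holds just the module name):

    echo nvme-fabrics | sudo tee /etc/modules-load.d/nvme-fabrics.conf
    grep -qxF nvme-fabrics /etc/modules || echo nvme-fabrics | sudo tee -a /etc/modules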
Oct 02 19:28:57 compute-0 ceph-mon[191910]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:57 compute-0 sudo[337172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjundivlrbvxviuydritybyboleybwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433337.1703243-942-129026849735134/AnsiballZ_systemd.py'
Oct 02 19:28:57 compute-0 sudo[337172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:58 compute-0 python3.9[337174]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:28:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:58 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 19:28:58 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 19:28:58 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 19:28:58 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 19:28:58 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 19:28:58 compute-0 sudo[337172]: pam_unix(sudo:session): session closed for user root
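[annotation] Restarting systemd-modules-load.service, as the task above does, makes systemd re-read /etc/modules-load.d/*.conf and load anything new, hence the clean Stopped/Starting/Finished cycle in the log:

    sudo systemctl restart systemd-modules-load.service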
Oct 02 19:28:59 compute-0 sudo[337328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsehoydsngvanwjuqxbpiygekvzcpfjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433338.5451117-950-201657903184568/AnsiballZ_setup.py'
Oct 02 19:28:59 compute-0 sudo[337328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:28:59 compute-0 python3.9[337330]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:28:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:28:59 compute-0 ceph-mon[191910]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:28:59 compute-0 podman[157186]: time="2025-10-02T19:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:28:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40788 "" "Go-http-client/1.1"
Oct 02 19:28:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
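[annotation] The two GET requests above are the podman exporter scraping the podman API over the socket its config mounts (unix:///run/podman/podman.sock, per the podman_exporter config_data logged below). Roughly reproducible with:

    sudo curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true'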
Oct 02 19:28:59 compute-0 sudo[337328]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:00 compute-0 podman[337362]: 2025-10-02 19:29:00.710005461 +0000 UTC m=+0.125831014 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public)
Oct 02 19:29:00 compute-0 podman[337363]: 2025-10-02 19:29:00.718637389 +0000 UTC m=+0.128758401 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:29:00 compute-0 podman[337364]: 2025-10-02 19:29:00.78474349 +0000 UTC m=+0.192384907 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:29:01 compute-0 sudo[337479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwwkymnhrkxbskenhartusjxrcoobgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433338.5451117-950-201657903184568/AnsiballZ_dnf.py'
Oct 02 19:29:01 compute-0 sudo[337479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:01 compute-0 openstack_network_exporter[159337]: ERROR   19:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:29:01 compute-0 openstack_network_exporter[159337]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:01 compute-0 openstack_network_exporter[159337]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:01 compute-0 openstack_network_exporter[159337]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:29:01 compute-0 openstack_network_exporter[159337]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
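[annotation] The openstack_network_exporter errors above mean it found no control sockets to answer those appctl calls: ovn-northd would not normally run on a compute node, and the dpif-netdev queries only apply to a userspace datapath with PMD threads. A quick check on the host for the sockets it looks for under the mounted /run/openvswitch:

    ls /var/run/openvswitch/*.ctl 2>/dev/null || echo 'no OVS control sockets visible'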
Oct 02 19:29:01 compute-0 python3.9[337481]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
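[annotation] Equivalent of the ansible.legacy.dnf task above:

    sudo dnf -y install nvme-cli

The systemd 'Reloading.' entries, the sysv/rc-local generator chatter, and the man-db-cache-update run that follow are ordinary post-transaction fallout from that package install.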
Oct 02 19:29:01 compute-0 ceph-mon[191910]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:29:03
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:29:03 compute-0 ceph-mon[191910]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:29:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:29:03 compute-0 systemd[1]: Reloading.
Oct 02 19:29:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:04 compute-0 systemd-sysv-generator[337516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:29:04 compute-0 systemd-rc-local-generator[337508]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:29:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:04 compute-0 systemd[1]: Reloading.
Oct 02 19:29:04 compute-0 systemd-rc-local-generator[337545]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:29:04 compute-0 systemd-sysv-generator[337552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:29:05 compute-0 podman[337561]: 2025-10-02 19:29:05.210759965 +0000 UTC m=+0.096230019 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:29:05 compute-0 podman[337560]: 2025-10-02 19:29:05.210286862 +0000 UTC m=+0.095273264 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:29:05 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 19:29:05 compute-0 podman[337559]: 2025-10-02 19:29:05.260044 +0000 UTC m=+0.140475301 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:29:05 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 19:29:05 compute-0 lvm[337657]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 19:29:05 compute-0 lvm[337657]: VG ceph_vg1 finished
Oct 02 19:29:05 compute-0 lvm[337658]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 19:29:05 compute-0 lvm[337658]: VG ceph_vg0 finished
Oct 02 19:29:05 compute-0 lvm[337659]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 19:29:05 compute-0 lvm[337659]: VG ceph_vg2 finished
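[annotation] The lvm[...] lines above are udev-driven event activation: as each loop device's PV is scanned, the VG it completes (ceph_vg0..ceph_vg2, backing the OSDs inventoried earlier) is reported finished. The resulting state can be inspected with:

    sudo vgs -o vg_name,pv_count,lv_count ceph_vg0 ceph_vg1 ceph_vg2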
Oct 02 19:29:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 19:29:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 19:29:05 compute-0 ceph-mon[191910]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:05 compute-0 systemd[1]: Reloading.
Oct 02 19:29:05 compute-0 systemd-rc-local-generator[337712]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:29:05 compute-0 systemd-sysv-generator[337715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:29:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 19:29:06 compute-0 podman[337914]: 2025-10-02 19:29:06.279573073 +0000 UTC m=+0.085466035 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64)
Oct 02 19:29:06 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 19:29:06 compute-0 PackageKit[338154]: daemon start
Oct 02 19:29:06 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 19:29:06 compute-0 sudo[337479]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:07 compute-0 sudo[339023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrukdrcajoezhlvneipnaduqratdebns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433346.9612508-962-47262307348102/AnsiballZ_file.py'
Oct 02 19:29:07 compute-0 sudo[339023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:07 compute-0 ceph-mon[191910]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 19:29:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 19:29:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.426s CPU time.
Oct 02 19:29:07 compute-0 systemd[1]: run-r410b9f21baca4a2aa51985c28420455b.service: Deactivated successfully.
Oct 02 19:29:07 compute-0 python3.9[339025]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:07 compute-0 sudo[339023]: pam_unix(sudo:session): session closed for user root
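[annotation] Same marker-file pattern as the multipath flow at the top of this stretch, now for iscsid: the file task touches the flag, and a later handler presumably restarts the iscsid container and removes it (inferred from symmetry with the multipath marker; the iscsid restart itself is not in this excerpt):

    sudo touch /etc/iscsi/.iscsid_restart_required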
Oct 02 19:29:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:08 compute-0 python3.9[339176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:29:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:09 compute-0 ceph-mon[191910]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:09 compute-0 sudo[339330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzvxjpjckhndzowoxdiduorjxlgckvvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433349.3271482-980-20748752338746/AnsiballZ_file.py'
Oct 02 19:29:09 compute-0 sudo[339330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:10 compute-0 python3.9[339332]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:10 compute-0 sudo[339330]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:11 compute-0 sudo[339482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmvqzppxyepslsbgoozuyzxisuvfnzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433350.565547-991-248877667291644/AnsiballZ_systemd_service.py'
Oct 02 19:29:11 compute-0 sudo[339482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:11 compute-0 ceph-mon[191910]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:11 compute-0 python3.9[339484]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:29:11 compute-0 systemd[1]: Reloading.
Oct 02 19:29:12 compute-0 systemd-sysv-generator[339516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:29:12 compute-0 systemd-rc-local-generator[339513]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:12 compute-0 sudo[339482]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:29:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
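
The pg_autoscaler lines above are internally consistent: each pool's "pg target" equals its capacity ratio times its bias times roughly 300 (7.1857e-06 x 1.0 x 300 ≈ 0.00216 for '.mgr'; 5.0873e-07 x 4.0 x 300 ≈ 0.00061 for 'cephfs.cephfs.meta'), which matches the usual rate of about 100 PGs per OSD on a 3-OSD cluster. A hedged sketch of that arithmetic, plus a power-of-two quantization with a per-pool floor, is below; the real mgr module also damps small changes, which is why most pools here stay at their current 32.

    def pg_target(capacity_ratio, bias, osd_count=3, pg_per_osd=100, floor=1):
        """Reproduce the 'pg target ... quantized to N' arithmetic in the log.

        Assumption (sketch, not the ceph-mgr source): target = ratio * bias *
        pg_per_osd * osd_count, rounded up to a power of two no smaller than
        `floor`. The real pg_autoscaler additionally skips adjustments smaller
        than ~3x, so tiny targets keep the current pg_num.
        """
        raw = capacity_ratio * bias * pg_per_osd * osd_count
        n = max(floor, 1)
        while n < raw:
            n *= 2
        return n

    print(pg_target(7.185749983720779e-06, 1.0))            # -> 1, as '.mgr' logs
    print(pg_target(5.087256625643029e-07, 4.0, floor=16))  # -> 16, as cephfs meta
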
Oct 02 19:29:13 compute-0 python3.9[339670]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:29:13 compute-0 network[339687]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:29:13 compute-0 network[339688]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:29:13 compute-0 network[339689]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:29:13 compute-0 ceph-mon[191910]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:15 compute-0 ceph-mon[191910]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:17 compute-0 ceph-mon[191910]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:19 compute-0 sudo[339964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjiijoiaqixamblrhfyarwpgrnwmnhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433358.5859814-1010-90288390598366/AnsiballZ_systemd_service.py'
Oct 02 19:29:19 compute-0 sudo[339964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:19 compute-0 python3.9[339966]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:19 compute-0 sudo[339964]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:19 compute-0 ceph-mon[191910]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:20 compute-0 podman[340027]: 2025-10-02 19:29:20.711974591 +0000 UTC m=+0.129437419 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:29:21 compute-0 sudo[340135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbefdgglelculzklwxajhkvgqbqerfnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433360.489359-1010-25563061112488/AnsiballZ_systemd_service.py'
Oct 02 19:29:21 compute-0 sudo[340135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:21 compute-0 python3.9[340137]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:21 compute-0 sudo[340135]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:21 compute-0 ceph-mon[191910]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:22 compute-0 sudo[340288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpykiyabbhphhmkgvtnhqyktvuumqqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433361.6814177-1010-155415871730787/AnsiballZ_systemd_service.py'
Oct 02 19:29:22 compute-0 sudo[340288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:23 compute-0 python3.9[340290]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:23 compute-0 sudo[340288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:23 compute-0 ceph-mon[191910]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:23 compute-0 sudo[340441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fledhucoeexwsngpwbghnfbhthnbtkio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433363.378631-1010-192597029465513/AnsiballZ_systemd_service.py'
Oct 02 19:29:23 compute-0 sudo[340441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:24 compute-0 python3.9[340443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:24 compute-0 sudo[340441]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:25 compute-0 sudo[340594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyymzwtnimgxpanqgscvundlzrwredj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433364.542257-1010-118718287529628/AnsiballZ_systemd_service.py'
Oct 02 19:29:25 compute-0 sudo[340594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:25 compute-0 python3.9[340596]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:25 compute-0 sudo[340594]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:25 compute-0 ceph-mon[191910]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:26 compute-0 sudo[340747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-porouinjfawxfscxqncfdopvrqwmnkyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433365.591826-1010-185666889061695/AnsiballZ_systemd_service.py'
Oct 02 19:29:26 compute-0 sudo[340747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:26 compute-0 python3.9[340749]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:26 compute-0 sudo[340747]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:27 compute-0 sudo[340927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aglojjuvtmgawgrvazscsduozftychxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433366.7265441-1010-183271760982237/AnsiballZ_systemd_service.py'
Oct 02 19:29:27 compute-0 podman[340874]: 2025-10-02 19:29:27.286947931 +0000 UTC m=+0.113787705 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct 02 19:29:27 compute-0 sudo[340927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:27 compute-0 podman[340875]: 2025-10-02 19:29:27.305104612 +0000 UTC m=+0.119943518 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:29:27 compute-0 python3.9[340945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:27 compute-0 sudo[340927]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:27 compute-0 ceph-mon[191910]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:28 compute-0 sudo[341096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjhnryoxgziomnpznlqfxkzrzvocwyai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433367.8553815-1010-261237159514478/AnsiballZ_systemd_service.py'
Oct 02 19:29:28 compute-0 sudo[341096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:28 compute-0 python3.9[341098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:29:28 compute-0 sudo[341096]: pam_unix(sudo:session): session closed for user root
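
The run above stops and disables each legacy tripleo_nova_* unit one systemd_service task at a time (state=stopped, enabled=False). A compact equivalent, sketched with plain systemctl calls rather than the Ansible module:

    import subprocess

    # Unit names taken verbatim from the ansible-ansible.builtin.systemd_service
    # invocations logged above.
    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in TRIPLEO_NOVA_UNITS:
        # state=stopped + enabled=False maps to 'systemctl disable --now';
        # check=False so a unit that is already gone does not abort the run.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
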
Oct 02 19:29:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:29 compute-0 sudo[341249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utlgtrzsukifcaxclajafbrkmdhmglcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433369.158116-1069-85589664353683/AnsiballZ_file.py'
Oct 02 19:29:29 compute-0 sudo[341249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:29 compute-0 ceph-mon[191910]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:29 compute-0 podman[157186]: time="2025-10-02T19:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:29:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40787 "" "Go-http-client/1.1"
Oct 02 19:29:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
Oct 02 19:29:29 compute-0 python3.9[341251]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:29 compute-0 sudo[341249]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:31 compute-0 podman[341375]: 2025-10-02 19:29:31.242753322 +0000 UTC m=+0.106592704 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm, architecture=x86_64)
Oct 02 19:29:31 compute-0 podman[341376]: 2025-10-02 19:29:31.249902142 +0000 UTC m=+0.098112280 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:29:31 compute-0 sudo[341453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uillwipdmnewqzttuaelqfnzwiudxbjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433370.719519-1069-21360116398180/AnsiballZ_file.py'
Oct 02 19:29:31 compute-0 sudo[341453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:31 compute-0 podman[341377]: 2025-10-02 19:29:31.3227147 +0000 UTC m=+0.172218272 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:29:31 compute-0 openstack_network_exporter[159337]: ERROR   19:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:29:31 compute-0 openstack_network_exporter[159337]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:31 compute-0 openstack_network_exporter[159337]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:31 compute-0 openstack_network_exporter[159337]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:29:31 compute-0 openstack_network_exporter[159337]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:29:31 compute-0 python3.9[341459]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:31 compute-0 sudo[341453]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:31 compute-0 ceph-mon[191910]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:29:32.273 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:29:32.274 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:29:32.274 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:32 compute-0 sudo[341615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxoisgxemxhnsujpqzaixdlpxonxttfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433372.2815654-1069-158265453927387/AnsiballZ_file.py'
Oct 02 19:29:32 compute-0 sudo[341615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:32 compute-0 python3.9[341617]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:33 compute-0 sudo[341615]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:29:33 compute-0 ceph-mon[191910]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:33 compute-0 sudo[341767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzllblyujdaevxywcjibcsltxkxccgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433373.3447187-1069-273122016082568/AnsiballZ_file.py'
Oct 02 19:29:33 compute-0 sudo[341767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:34 compute-0 python3.9[341769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:34 compute-0 sudo[341767]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:34 compute-0 sudo[341919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwzknqnklvtxlxlbntzbzutlceatpcsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433374.316113-1069-226465630510678/AnsiballZ_file.py'
Oct 02 19:29:34 compute-0 sudo[341919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:34 compute-0 python3.9[341921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:34 compute-0 sudo[341919]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:35 compute-0 podman[342046]: 2025-10-02 19:29:35.680091497 +0000 UTC m=+0.100738303 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:29:35 compute-0 sudo[342120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izyeffthjmkldktxgabiyijljwaexwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433375.2095253-1069-107460121961363/AnsiballZ_file.py'
Oct 02 19:29:35 compute-0 sudo[342120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:35 compute-0 podman[342047]: 2025-10-02 19:29:35.713689445 +0000 UTC m=+0.138541943 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:29:35 compute-0 podman[342044]: 2025-10-02 19:29:35.714847726 +0000 UTC m=+0.135933684 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct 02 19:29:35 compute-0 ceph-mon[191910]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:35 compute-0 python3.9[342131]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:35 compute-0 sudo[342120]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:36 compute-0 sudo[342293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wureqwlcrigaubnyssrmpqegugvwrhew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433376.1942918-1069-206062127182077/AnsiballZ_file.py'
Oct 02 19:29:36 compute-0 sudo[342293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:36 compute-0 podman[342255]: 2025-10-02 19:29:36.689425589 +0000 UTC m=+0.137181307 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=)
Oct 02 19:29:36 compute-0 python3.9[342301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:36 compute-0 sudo[342293]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:37 compute-0 sudo[342452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylateykrdfeebqqpxdpvufhpuufdvcon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433377.04275-1069-211966919054904/AnsiballZ_file.py'
Oct 02 19:29:37 compute-0 sudo[342452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:37 compute-0 python3.9[342454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:37 compute-0 sudo[342452]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:37 compute-0 ceph-mon[191910]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:38 compute-0 sudo[342604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axuybauzobspdukgvgoxrhlzjlkqxllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433378.0215378-1126-14030024662013/AnsiballZ_file.py'
Oct 02 19:29:38 compute-0 sudo[342604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:38 compute-0 python3.9[342606]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:38 compute-0 sudo[342604]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:39 compute-0 sudo[342756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbzdstvmpzfrjmigoalcmsrzowtzhwob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433378.9678946-1126-225058788670201/AnsiballZ_file.py'
Oct 02 19:29:39 compute-0 sudo[342756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:39 compute-0 python3.9[342758]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:39 compute-0 sudo[342756]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:39 compute-0 ceph-mon[191910]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:40 compute-0 sudo[342908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqkrxbslechqcyxhwrvdrsipreukxev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433379.927018-1126-172418270731579/AnsiballZ_file.py'
Oct 02 19:29:40 compute-0 sudo[342908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:41 compute-0 python3.9[342910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:41 compute-0 sudo[342908]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:41 compute-0 sudo[343060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dezvdvoatywquxwnrbmfbgwuqsksqzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433381.265815-1126-154155343650257/AnsiballZ_file.py'
Oct 02 19:29:41 compute-0 sudo[343060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:41 compute-0 ceph-mon[191910]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:41 compute-0 python3.9[343062]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:41 compute-0 sudo[343060]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:43 compute-0 sudo[343212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzfmntnpjusnzfyqmwcbayzyfigaowbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433382.7051673-1126-44563846146283/AnsiballZ_file.py'
Oct 02 19:29:43 compute-0 sudo[343212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:43 compute-0 python3.9[343214]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:43 compute-0 sudo[343212]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:43 compute-0 ceph-mon[191910]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:44 compute-0 sudo[343364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntlmdhwacxrdlvsogyyyfnfpxdamvafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433383.6353068-1126-183930911783536/AnsiballZ_file.py'
Oct 02 19:29:44 compute-0 sudo[343364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:44 compute-0 python3.9[343366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:44 compute-0 sudo[343364]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:45 compute-0 sudo[343516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxunhbeefnfhvedmacklrnydmdusnbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433384.5644476-1126-180852084567716/AnsiballZ_file.py'
Oct 02 19:29:45 compute-0 sudo[343516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:45 compute-0 python3.9[343518]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:45 compute-0 sudo[343516]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:45 compute-0 ceph-mon[191910]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:46 compute-0 sudo[343668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izbfcryxdazguqndrostydkbgdtzufob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433385.510144-1126-41962579068282/AnsiballZ_file.py'
Oct 02 19:29:46 compute-0 sudo[343668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:46 compute-0 python3.9[343670]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:29:46 compute-0 sudo[343668]: pam_unix(sudo:session): session closed for user root
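The four ansible-ansible.builtin.file tasks above (conductor, metadata, scheduler, vnc_proxy) delete leftover tripleo_nova_* unit files with state=absent. A minimal Python sketch of the same cleanup, assuming the same /etc/systemd/system paths (illustrative only, not part of the captured log):

    # Remove leftover tripleo_nova_* unit files, mirroring state=absent:
    # a file that is already gone is not an error.
    from pathlib import Path
    import subprocess

    UNITS = [
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        (Path("/etc/systemd/system") / unit).unlink(missing_ok=True)

    # systemd must re-read its unit files afterwards; the log shows the
    # matching daemon_reload=True task a few seconds later ("systemd[1]: Reloading.").
    subprocess.run(["systemctl", "daemon-reload"], check=True)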
Oct 02 19:29:47 compute-0 sudo[343820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stelvhhntqfsrucrwaugwyvjdlopyjha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433386.547412-1184-59516011580476/AnsiballZ_command.py'
Oct 02 19:29:47 compute-0 sudo[343820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:47 compute-0 python3.9[343822]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
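The _raw_params above is the exact shell the task executed: disable certmonger only if it is currently active, then mask it unless a local unit file exists at /etc/systemd/system/certmonger.service. The same guard logic as a hedged Python sketch (the playbook itself runs the shell shown above):

    import os
    import subprocess

    # "systemctl is-active" exits 0 only when the unit is active.
    active = subprocess.run(
        ["systemctl", "is-active", "certmonger.service"]
    ).returncode == 0

    if active:
        subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
        # Mask only when no local unit file overrides the packaged one,
        # mirroring the "test -f ... || systemctl mask" branch.
        if not os.path.isfile("/etc/systemd/system/certmonger.service"):
            subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)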
Oct 02 19:29:47 compute-0 sudo[343820]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:47 compute-0 sudo[343872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:47 compute-0 sudo[343872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:47 compute-0 sudo[343872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:47 compute-0 ceph-mon[191910]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:47 compute-0 sudo[343920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:29:47 compute-0 sudo[343920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:47 compute-0 sudo[343920]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:48 compute-0 sudo[343951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:48 compute-0 sudo[343951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:48 compute-0 sudo[343951]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:48 compute-0 sudo[343993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:29:48 compute-0 sudo[343993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:48 compute-0 python3.9[344076]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:29:48 compute-0 sudo[343993]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8470e7a7-d9eb-435e-be60-7101f4bcc41b does not exist
Oct 02 19:29:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b43dfae4-437d-435a-b291-523474ed4e35 does not exist
Oct 02 19:29:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9c1d100c-266b-4b08-bc08-79ea5dce0ccd does not exist
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:29:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:29:48 compute-0 sudo[344129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:48 compute-0 sudo[344129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:48 compute-0 sudo[344129]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:29:48 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:29:48 compute-0 sudo[344172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:29:48 compute-0 sudo[344172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:48 compute-0 sudo[344172]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:48 compute-0 sudo[344219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:48 compute-0 sudo[344219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:48 compute-0 sudo[344219]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:49 compute-0 sudo[344256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:29:49 compute-0 sudo[344256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:49 compute-0 sudo[344369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkmoueppopvlbujgyjynoasorhivsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433388.8494942-1202-276158608133260/AnsiballZ_systemd_service.py'
Oct 02 19:29:49 compute-0 sudo[344369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.613450961 +0000 UTC m=+0.083966875 container create ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:29:49 compute-0 systemd[1]: Started libpod-conmon-ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4.scope.
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.581718133 +0000 UTC m=+0.052234107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:49 compute-0 python3.9[344380]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:29:49 compute-0 systemd[1]: Reloading.
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.73316417 +0000 UTC m=+0.203680104 container init ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.744927895 +0000 UTC m=+0.215443799 container start ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.750615937 +0000 UTC m=+0.221131861 container attach ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:29:49 compute-0 hopeful_cartwright[344411]: 167 167
Oct 02 19:29:49 compute-0 podman[344397]: 2025-10-02 19:29:49.756651898 +0000 UTC m=+0.227167812 container died ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:29:49 compute-0 ceph-mon[191910]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:49 compute-0 systemd-sysv-generator[344457]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:29:49 compute-0 systemd-rc-local-generator[344454]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:29:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:50 compute-0 systemd[1]: libpod-ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4.scope: Deactivated successfully.
Oct 02 19:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c73d3d6f1fadd1fa2f9f36ec8615ec9dd892df42475171aa7407899a528905-merged.mount: Deactivated successfully.
Oct 02 19:29:50 compute-0 sudo[344369]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:50 compute-0 podman[344397]: 2025-10-02 19:29:50.222010973 +0000 UTC m=+0.692526877 container remove ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:29:50 compute-0 systemd[1]: libpod-conmon-ed56be23cf463040f0f69f9784a68b6e0315df0f5b55284dd3a91d98884191d4.scope: Deactivated successfully.
Oct 02 19:29:50 compute-0 podman[344493]: 2025-10-02 19:29:50.441359195 +0000 UTC m=+0.059778699 container create 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:29:50 compute-0 systemd[1]: Started libpod-conmon-1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6.scope.
Oct 02 19:29:50 compute-0 podman[344493]: 2025-10-02 19:29:50.421571176 +0000 UTC m=+0.039990690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:50 compute-0 podman[344493]: 2025-10-02 19:29:50.611683766 +0000 UTC m=+0.230103290 container init 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 02 19:29:50 compute-0 podman[344493]: 2025-10-02 19:29:50.627734885 +0000 UTC m=+0.246154379 container start 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:29:50 compute-0 podman[344493]: 2025-10-02 19:29:50.631806704 +0000 UTC m=+0.250226228 container attach 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:29:50 compute-0 ceph-mon[191910]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:51 compute-0 sudo[344648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmzwvfdpauvyifdlwdqpvbyblpsnldna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433390.5098448-1210-185846527985890/AnsiballZ_command.py'
Oct 02 19:29:51 compute-0 sudo[344648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:51 compute-0 podman[344613]: 2025-10-02 19:29:51.039071527 +0000 UTC m=+0.125203597 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:29:51 compute-0 python3.9[344658]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:51 compute-0 sudo[344648]: pam_unix(sudo:session): session closed for user root
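This reset-failed call is the first of four in this section; later entries repeat it for tripleo_nova_migration_target, tripleo_nova_api_cron and tripleo_nova_api. Batched into one loop (a sketch, assuming a unit may already be unknown to systemd, hence check=False):

    import subprocess

    for unit in [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
    ]:
        # Clear the "failed" state left behind after the unit files were
        # removed; tolerate units systemd no longer knows about.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)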
Oct 02 19:29:51 compute-0 flamboyant_pike[344532]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:29:51 compute-0 flamboyant_pike[344532]: --> relative data size: 1.0
Oct 02 19:29:51 compute-0 flamboyant_pike[344532]: --> All data devices are unavailable
Oct 02 19:29:51 compute-0 systemd[1]: libpod-1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6.scope: Deactivated successfully.
Oct 02 19:29:51 compute-0 systemd[1]: libpod-1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6.scope: Consumed 1.110s CPU time.
Oct 02 19:29:51 compute-0 podman[344493]: 2025-10-02 19:29:51.790441884 +0000 UTC m=+1.408861408 container died 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-03fc61c4f2cf3197786f58e3f28d7726f8c3b9c24d67bb33c3e92a7d9873f82a-merged.mount: Deactivated successfully.
Oct 02 19:29:51 compute-0 podman[344493]: 2025-10-02 19:29:51.873072632 +0000 UTC m=+1.491492166 container remove 1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:29:51 compute-0 systemd[1]: libpod-conmon-1b51789279e0fbfd566891d94c6a9b7a3ceb21235329c2802f596621f52bedb6.scope: Deactivated successfully.
Oct 02 19:29:51 compute-0 sudo[344256]: pam_unix(sudo:session): session closed for user root
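The flamboyant_pike output above ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") is consistent with the three logical volumes already being prepared OSDs: the lvm list JSON further down shows ceph.osd_id tags 0-2 on exactly /dev/ceph_vg{0,1,2}/ceph_lv{0,1,2}, so this lvm batch run has nothing left to create. One way to check that state from the host (a sketch, assuming lvm2's JSON report format):

    import json
    import subprocess

    report = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout

    for lv in json.loads(report)["report"][0]["lv"]:
        # LVs already consumed by ceph-volume carry ceph.* tags.
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(f"{lv['vg_name']}/{lv['lv_name']} is already an OSD data device")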
Oct 02 19:29:52 compute-0 sudo[344728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:52 compute-0 sudo[344728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:52 compute-0 sudo[344728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:52 compute-0 sudo[344782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:29:52 compute-0 sudo[344782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:52 compute-0 sudo[344782]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:52 compute-0 sudo[344824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:52 compute-0 sudo[344824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:52 compute-0 sudo[344824]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:52 compute-0 sudo[344872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:29:52 compute-0 sudo[344872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:52 compute-0 sudo[344947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjqnyosstdliwhjwnsvurmcazglmzmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433391.9737701-1210-116293674520787/AnsiballZ_command.py'
Oct 02 19:29:52 compute-0 sudo[344947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:52 compute-0 python3.9[344955]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:52 compute-0 sudo[344947]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.73259348 +0000 UTC m=+0.050869190 container create b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:29:52 compute-0 systemd[1]: Started libpod-conmon-b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109.scope.
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.709452172 +0000 UTC m=+0.027727902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.840769321 +0000 UTC m=+0.159045041 container init b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.851475947 +0000 UTC m=+0.169751667 container start b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:29:52 compute-0 kind_engelbart[345030]: 167 167
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.85718886 +0000 UTC m=+0.175464580 container attach b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:29:52 compute-0 systemd[1]: libpod-b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109.scope: Deactivated successfully.
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.859052499 +0000 UTC m=+0.177328249 container died b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a4082e06e0cf1e09ea5563a4df2cff910715c5afdf0144cc2ce52371f533a35-merged.mount: Deactivated successfully.
Oct 02 19:29:52 compute-0 podman[344990]: 2025-10-02 19:29:52.915404865 +0000 UTC m=+0.233680565 container remove b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:29:52 compute-0 systemd[1]: libpod-conmon-b7dae0dde90d6c2d0cb38f602ebbcffc8addf6892661b84cf8183c38958c2109.scope: Deactivated successfully.
Oct 02 19:29:53 compute-0 podman[345107]: 2025-10-02 19:29:53.110109118 +0000 UTC m=+0.064766882 container create 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:29:53 compute-0 ceph-mon[191910]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:53 compute-0 systemd[1]: Started libpod-conmon-90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29.scope.
Oct 02 19:29:53 compute-0 podman[345107]: 2025-10-02 19:29:53.08809495 +0000 UTC m=+0.042752714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00312b9221734c9092bcc1e663ec0cea31755b98fe5b97aa94642f452db954b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00312b9221734c9092bcc1e663ec0cea31755b98fe5b97aa94642f452db954b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00312b9221734c9092bcc1e663ec0cea31755b98fe5b97aa94642f452db954b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00312b9221734c9092bcc1e663ec0cea31755b98fe5b97aa94642f452db954b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:53 compute-0 podman[345107]: 2025-10-02 19:29:53.242792914 +0000 UTC m=+0.197450668 container init 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:29:53 compute-0 podman[345107]: 2025-10-02 19:29:53.267457643 +0000 UTC m=+0.222115387 container start 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:29:53 compute-0 podman[345107]: 2025-10-02 19:29:53.272646511 +0000 UTC m=+0.227304295 container attach 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:29:53 compute-0 sudo[345199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnohvreyqgparmiaadszihxkbaxozai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433392.9112277-1210-85244229992866/AnsiballZ_command.py'
Oct 02 19:29:53 compute-0 sudo[345199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:53 compute-0 python3.9[345201]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:53 compute-0 sudo[345199]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:54 compute-0 happy_mestorf[345164]: {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     "0": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "devices": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "/dev/loop3"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             ],
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_name": "ceph_lv0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_size": "21470642176",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "name": "ceph_lv0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "tags": {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_name": "ceph",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.crush_device_class": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.encrypted": "0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_id": "0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.vdo": "0"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             },
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "vg_name": "ceph_vg0"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         }
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     ],
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     "1": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "devices": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "/dev/loop4"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             ],
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_name": "ceph_lv1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_size": "21470642176",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "name": "ceph_lv1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "tags": {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_name": "ceph",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.crush_device_class": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.encrypted": "0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_id": "1",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.vdo": "0"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             },
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "vg_name": "ceph_vg1"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         }
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     ],
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     "2": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "devices": [
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "/dev/loop5"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             ],
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_name": "ceph_lv2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_size": "21470642176",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "name": "ceph_lv2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "tags": {
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.cluster_name": "ceph",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.crush_device_class": "",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.encrypted": "0",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osd_id": "2",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:                 "ceph.vdo": "0"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             },
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "type": "block",
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:             "vg_name": "ceph_vg2"
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:         }
Oct 02 19:29:54 compute-0 happy_mestorf[345164]:     ]
Oct 02 19:29:54 compute-0 happy_mestorf[345164]: }
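The JSON emitted by happy_mestorf above is the output of "ceph-volume lvm list --format json": a map from OSD id to the LV records backing it. A small sketch reducing it to an osd -> (lv_path, physical devices) table (illustrative; field names taken from the output above):

    import json

    def osd_map(lvm_list_json: str) -> dict[int, tuple[str, list[str]]]:
        data = json.loads(lvm_list_json)
        table = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                if lv["type"] == "block":
                    table[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return table

    # Applied to the output above, this yields:
    # {0: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"]),
    #  1: ("/dev/ceph_vg1/ceph_lv1", ["/dev/loop4"]),
    #  2: ("/dev/ceph_vg2/ceph_lv2", ["/dev/loop5"])}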
Oct 02 19:29:54 compute-0 systemd[1]: libpod-90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29.scope: Deactivated successfully.
Oct 02 19:29:54 compute-0 podman[345107]: 2025-10-02 19:29:54.092792937 +0000 UTC m=+1.047450691 container died 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:29:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-00312b9221734c9092bcc1e663ec0cea31755b98fe5b97aa94642f452db954b6-merged.mount: Deactivated successfully.
Oct 02 19:29:54 compute-0 podman[345107]: 2025-10-02 19:29:54.19542673 +0000 UTC m=+1.150084474 container remove 90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mestorf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:29:54 compute-0 systemd[1]: libpod-conmon-90d6204360e4e7f86a5f91e0c69f4e784cd4db765f2e056f1e5f3632ed8bbb29.scope: Deactivated successfully.
Oct 02 19:29:54 compute-0 sudo[344872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:54 compute-0 sudo[345218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:54 compute-0 sudo[345218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:54 compute-0 sudo[345218]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:54 compute-0 sudo[345243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:29:54 compute-0 sudo[345243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:54 compute-0 sudo[345243]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:54 compute-0 sudo[345288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:54 compute-0 sudo[345288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:54 compute-0 sudo[345288]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:54 compute-0 sudo[345337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:29:54 compute-0 sudo[345337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:55 compute-0 sudo[345502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nupkywlzqknszglmkwbzyycgxiihhxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433394.5935004-1210-225739284931934/AnsiballZ_command.py'
Oct 02 19:29:55 compute-0 sudo[345502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.168148453 +0000 UTC m=+0.089836932 container create f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:29:55 compute-0 ceph-mon[191910]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:55 compute-0 systemd[1]: Started libpod-conmon-f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956.scope.
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.133276711 +0000 UTC m=+0.054965200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:55 compute-0 python3.9[345508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:55 compute-0 auditd[704]: Audit daemon rotating log files
Oct 02 19:29:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:55 compute-0 sudo[345502]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.289494625 +0000 UTC m=+0.211183124 container init f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.301029743 +0000 UTC m=+0.222718222 container start f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.306198771 +0000 UTC m=+0.227887250 container attach f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:29:55 compute-0 crazy_solomon[345525]: 167 167
Oct 02 19:29:55 compute-0 systemd[1]: libpod-f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956.scope: Deactivated successfully.
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.309234702 +0000 UTC m=+0.230923181 container died f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-200687916c7fde36a2f7673ec3b9f82f56357227a674b43dad97355eccf6ef00-merged.mount: Deactivated successfully.
Oct 02 19:29:55 compute-0 podman[345509]: 2025-10-02 19:29:55.356975008 +0000 UTC m=+0.278663487 container remove f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:29:55 compute-0 systemd[1]: libpod-conmon-f87499ce0f1d2d27878af1432632a4d1c594e64d87775e22d3591f03de3c1956.scope: Deactivated successfully.
Oct 02 19:29:55 compute-0 podman[345595]: 2025-10-02 19:29:55.601743979 +0000 UTC m=+0.101555445 container create 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:29:55 compute-0 podman[345595]: 2025-10-02 19:29:55.567063092 +0000 UTC m=+0.066874618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:29:55 compute-0 systemd[1]: Started libpod-conmon-0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d.scope.
Oct 02 19:29:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4e4306feb79c52e95b550dd002854df1383253731873084862a0476cdfc4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4e4306feb79c52e95b550dd002854df1383253731873084862a0476cdfc4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4e4306feb79c52e95b550dd002854df1383253731873084862a0476cdfc4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4e4306feb79c52e95b550dd002854df1383253731873084862a0476cdfc4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:29:55 compute-0 podman[345595]: 2025-10-02 19:29:55.766554943 +0000 UTC m=+0.266366459 container init 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:29:55 compute-0 podman[345595]: 2025-10-02 19:29:55.788809787 +0000 UTC m=+0.288621223 container start 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:29:55 compute-0 podman[345595]: 2025-10-02 19:29:55.794711995 +0000 UTC m=+0.294523451 container attach 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:29:56 compute-0 sudo[345718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stpfdkztvmdmrzdoqhzwxtklpecgcydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433395.486919-1210-270294474915642/AnsiballZ_command.py'
Oct 02 19:29:56 compute-0 sudo[345718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:56 compute-0 python3.9[345720]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:56 compute-0 sudo[345718]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:56 compute-0 dreamy_wright[345653]: {
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_id": 1,
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "type": "bluestore"
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     },
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_id": 2,
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "type": "bluestore"
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     },
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_id": 0,
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:         "type": "bluestore"
Oct 02 19:29:56 compute-0 dreamy_wright[345653]:     }
Oct 02 19:29:56 compute-0 dreamy_wright[345653]: }
Oct 02 19:29:56 compute-0 systemd[1]: libpod-0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d.scope: Deactivated successfully.
Oct 02 19:29:56 compute-0 podman[345595]: 2025-10-02 19:29:56.970986358 +0000 UTC m=+1.470797794 container died 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:29:56 compute-0 systemd[1]: libpod-0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d.scope: Consumed 1.173s CPU time.
Oct 02 19:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee4e4306feb79c52e95b550dd002854df1383253731873084862a0476cdfc4c0-merged.mount: Deactivated successfully.
Oct 02 19:29:57 compute-0 sudo[345910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixwizimnwblmgsvbdxgmilbeoaxpwtti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433396.5319145-1210-12199174573315/AnsiballZ_command.py'
Oct 02 19:29:57 compute-0 sudo[345910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:57 compute-0 podman[345595]: 2025-10-02 19:29:57.049026503 +0000 UTC m=+1.548837929 container remove 0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:29:57 compute-0 systemd[1]: libpod-conmon-0186bd78a64c8dc56c4f10dfc62cab0aa59ff40a954b3a58276e7812c86f745d.scope: Deactivated successfully.
Oct 02 19:29:57 compute-0 sudo[345337]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:29:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:29:57 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:57 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 452b6373-80f3-4d89-af99-ca1fbf5cc892 does not exist
Oct 02 19:29:57 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 178ab0f9-1d6c-4a70-80a1-f090f776c7d2 does not exist
Oct 02 19:29:57 compute-0 sudo[345913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:29:57 compute-0 ceph-mon[191910]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:57 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:57 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:29:57 compute-0 sudo[345913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:57 compute-0 sudo[345913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:57 compute-0 python3.9[345912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:57 compute-0 sudo[345938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:29:57 compute-0 sudo[345938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:29:57 compute-0 sudo[345938]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:57 compute-0 sudo[345910]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:57 compute-0 podman[346016]: 2025-10-02 19:29:57.664250803 +0000 UTC m=+0.090626363 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:29:57 compute-0 podman[346012]: 2025-10-02 19:29:57.685688666 +0000 UTC m=+0.112702703 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930)
Oct 02 19:29:57 compute-0 sudo[346153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itkqpumylegihfktpyufktevnakchtgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433397.5298998-1210-87184585479602/AnsiballZ_command.py'
Oct 02 19:29:57 compute-0 sudo[346153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:58 compute-0 python3.9[346155]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:58 compute-0 sudo[346153]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:58 compute-0 sudo[346306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxlbdqxtlxnexplpkyjhkjmhmtvsslla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433398.4637153-1210-219843025162142/AnsiballZ_command.py'
Oct 02 19:29:58 compute-0 sudo[346306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:29:59 compute-0 python3.9[346308]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:29:59 compute-0 ceph-mon[191910]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:29:59 compute-0 sudo[346306]: pam_unix(sudo:session): session closed for user root
Oct 02 19:29:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:29:59 compute-0 podman[157186]: time="2025-10-02T19:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:29:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40787 "" "Go-http-client/1.1"
Oct 02 19:29:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Oct 02 19:30:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:00 compute-0 sudo[346459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txyepdrgqcbomcykzbtughpggnqdpoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433400.2395868-1289-92822960728058/AnsiballZ_file.py'
Oct 02 19:30:00 compute-0 sudo[346459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:00 compute-0 python3.9[346461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:00 compute-0 sudo[346459]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:01 compute-0 ceph-mon[191910]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:01 compute-0 openstack_network_exporter[159337]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:01 compute-0 openstack_network_exporter[159337]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:01 compute-0 openstack_network_exporter[159337]: ERROR   19:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:30:01 compute-0 openstack_network_exporter[159337]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:30:01 compute-0 openstack_network_exporter[159337]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:30:01 compute-0 podman[346586]: 2025-10-02 19:30:01.69306945 +0000 UTC m=+0.105996143 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:30:01 compute-0 sudo[346661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyyerpustbuiccbgkjxhyoyruvkjmfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433401.1809888-1289-185497534731175/AnsiballZ_file.py'
Oct 02 19:30:01 compute-0 sudo[346661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:01 compute-0 podman[346584]: 2025-10-02 19:30:01.732602797 +0000 UTC m=+0.144671727 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Oct 02 19:30:01 compute-0 podman[346587]: 2025-10-02 19:30:01.744177146 +0000 UTC m=+0.143507896 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 19:30:01 compute-0 python3.9[346669]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:01 compute-0 sudo[346661]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:02 compute-0 sudo[346823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjhozkxzydiunuoocwntgpcxjsbfrrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433402.1526933-1289-198708796334425/AnsiballZ_file.py'
Oct 02 19:30:02 compute-0 sudo[346823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:02 compute-0 python3.9[346825]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:02 compute-0 sudo[346823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:03 compute-0 ceph-mon[191910]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:30:03
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms']
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:30:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:30:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:04 compute-0 sudo[346975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhibawoeidgcvavytmzqjllcbsrvtzll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433403.843995-1311-246759401025433/AnsiballZ_file.py'
Oct 02 19:30:04 compute-0 sudo[346975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:04 compute-0 python3.9[346977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:04 compute-0 sudo[346975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:05 compute-0 ceph-mon[191910]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:05 compute-0 podman[347101]: 2025-10-02 19:30:05.831477486 +0000 UTC m=+0.096248233 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:30:05 compute-0 sudo[347142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuzpgszewpcyulykqkartmbnjoixwuvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433405.3031301-1311-142631413749207/AnsiballZ_file.py'
Oct 02 19:30:05 compute-0 sudo[347142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:05 compute-0 podman[347147]: 2025-10-02 19:30:05.954094593 +0000 UTC m=+0.083445011 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:30:05 compute-0 podman[347146]: 2025-10-02 19:30:05.979017309 +0000 UTC m=+0.109345453 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Oct 02 19:30:06 compute-0 python3.9[347148]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:06 compute-0 sudo[347142]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:06 compute-0 sudo[347337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixbgfyrpjdwftrhkojstyptgfksctxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433406.3105538-1311-24763946407189/AnsiballZ_file.py'
Oct 02 19:30:06 compute-0 sudo[347337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:06 compute-0 podman[347339]: 2025-10-02 19:30:06.925037937 +0000 UTC m=+0.125358221 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 02 19:30:07 compute-0 python3.9[347340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:07 compute-0 sudo[347337]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:07 compute-0 ceph-mon[191910]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:07 compute-0 sudo[347507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxzqvtutslodjsokuqhbwrqdtsbqeddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433407.3107889-1311-182651678647471/AnsiballZ_file.py'
Oct 02 19:30:07 compute-0 sudo[347507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:08 compute-0 python3.9[347509]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:08 compute-0 sudo[347507]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:08 compute-0 sudo[347659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emofyipcvekpbkoxqpvnnjefuexaxadp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433408.3493571-1311-186134043419913/AnsiballZ_file.py'
Oct 02 19:30:08 compute-0 sudo[347659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:09 compute-0 python3.9[347661]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:09 compute-0 sudo[347659]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:09 compute-0 ceph-mon[191910]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:09 compute-0 sudo[347811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owlmgdkhfnvcapfyspxkwaiksndnzqcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433409.332065-1311-100250976855676/AnsiballZ_file.py'
Oct 02 19:30:09 compute-0 sudo[347811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:10 compute-0 python3.9[347813]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:10 compute-0 sudo[347811]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:10 compute-0 sudo[347963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnobwuzlzsoejnwbdrhugimxfmtkgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433410.3266313-1311-6183325082879/AnsiballZ_file.py'
Oct 02 19:30:10 compute-0 sudo[347963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:11 compute-0 python3.9[347965]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:11 compute-0 sudo[347963]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:11 compute-0 ceph-mon[191910]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:11 compute-0 sudo[348115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyzcpfmnqqggsuvmidtliioeyfjtvazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433411.3223312-1311-132404684499213/AnsiballZ_file.py'
Oct 02 19:30:11 compute-0 sudo[348115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:12 compute-0 python3.9[348117]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:12 compute-0 sudo[348115]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:30:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
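[annotation] The pg_autoscaler lines above record its sizing arithmetic: for each pool, pg target = capacity ratio x bias x the root's PG budget, then quantized to a power of two subject to clamps the log does not show. The logged targets are all consistent with a PG budget of 300, which would correspond to 3 OSDs at the default mon_target_pg_per_osd of 100; that budget is an inference from the ratios, not something stated in the log. A sketch that reproduces the logged targets:

    # Reproduces the pg_autoscaler arithmetic from the log above.
    # Assumption: PG budget of 300 = 3 OSDs * mon_target_pg_per_osd (100),
    # inferred from the logged ratios rather than stated in the log.
    PG_BUDGET = 3 * 100

    pools = {
        # pool: (capacity ratio, bias), both as logged
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr -> 0.0021557249951162337, matching the logged "pg target".
    # The "quantized to N" step (power-of-two rounding plus min/max
    # clamps) is not reproduced here.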
Oct 02 19:30:12 compute-0 sudo[348267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wylgcyscakqtihsnsgwbztqqeplzccjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433412.3384755-1311-186239654124842/AnsiballZ_file.py'
Oct 02 19:30:12 compute-0 sudo[348267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:13 compute-0 python3.9[348269]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:13 compute-0 sudo[348267]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:13 compute-0 ceph-mon[191910]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:15 compute-0 ceph-mon[191910]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:30:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5588 writes, 23K keys, 5588 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5588 writes, 853 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b415425090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55b4154251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
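[annotation] The "** DB Stats **" header at the top of the dump above is mostly raw counters; the per-sync figure it prints is simply writes divided by syncs. Checking the cumulative and interval numbers from this dump:

    # Derived ratios from the "** DB Stats **" section of the dump above.
    cumulative_writes, cumulative_syncs = 5588, 853
    interval_writes, interval_syncs = 212, 106
    print(cumulative_writes / cumulative_syncs)  # ~6.55 writes per sync, as logged
    print(interval_writes / interval_syncs)      # 2.00 writes per sync, as logged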
Oct 02 19:30:17 compute-0 ceph-mon[191910]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:19 compute-0 ceph-mon[191910]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:20 compute-0 sudo[348420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airajeojlkvcfqokwsucbzkdbsvzfdyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433419.918691-1514-117669632467617/AnsiballZ_getent.py'
Oct 02 19:30:20 compute-0 sudo[348420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:20 compute-0 python3.9[348422]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 02 19:30:20 compute-0 sudo[348420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:21 compute-0 ceph-mon[191910]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:21 compute-0 podman[348523]: 2025-10-02 19:30:21.706480743 +0000 UTC m=+0.125640428 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
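[annotation] The podman health_status event above comes from the container's configured healthcheck (test '/openstack/healthcheck', per the config_data embedded in the same line). The latest status can be read back with podman inspect; a sketch, assuming the podman CLI is on PATH and the container is still named multipathd as in the log. The JSON key is "Health" on current podman and "Healthcheck" on some older releases, so both are tried:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "multipathd"],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    # Key name varies across podman versions.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status", "no healthcheck"))  # expected here: "healthy"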
Oct 02 19:30:21 compute-0 sudo[348591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbwaxhnkbybyajknfaqtrbuoyjkzmsqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433421.1438181-1522-172899292034052/AnsiballZ_group.py'
Oct 02 19:30:21 compute-0 sudo[348591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:22 compute-0 python3.9[348593]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 19:30:22 compute-0 groupadd[348594]: group added to /etc/group: name=nova, GID=42436
Oct 02 19:30:22 compute-0 groupadd[348594]: group added to /etc/gshadow: name=nova
Oct 02 19:30:22 compute-0 groupadd[348594]: new group: name=nova, GID=42436
Oct 02 19:30:22 compute-0 sudo[348591]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:23 compute-0 sudo[348749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efxbnqmgpdgozzxctlunovzmybibrsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433422.356088-1530-4919335180122/AnsiballZ_user.py'
Oct 02 19:30:23 compute-0 sudo[348749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:23 compute-0 python3.9[348751]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 19:30:23 compute-0 useradd[348753]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 02 19:30:23 compute-0 useradd[348753]: add 'nova' to group 'libvirt'
Oct 02 19:30:23 compute-0 useradd[348753]: add 'nova' to shadow group 'libvirt'
Oct 02 19:30:23 compute-0 ceph-mon[191910]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:23 compute-0 sudo[348749]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:30:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 6813 writes, 27K keys, 6813 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 6813 writes, 1229 syncs, 5.54 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e63090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x563e27e631f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:30:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:24 compute-0 sshd-session[348784]: Accepted publickey for zuul from 192.168.122.30 port 54492 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:30:24 compute-0 systemd-logind[793]: New session 58 of user zuul.
Oct 02 19:30:24 compute-0 systemd[1]: Started Session 58 of User zuul.
Oct 02 19:30:24 compute-0 sshd-session[348784]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.440 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.441 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.441 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.442 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.443 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.445 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.447 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.447 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.448 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.448 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.449 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.449 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.450 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.450 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.451 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.451 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.452 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.452 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.453 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.453 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.454 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.457 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.458 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.458 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.459 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.459 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.459 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.460 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.460 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec3396840>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'cpu': [], 'disk.device.read.requests': [], 'network.incoming.bytes.delta': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.457 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.463 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.463 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.464 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.464 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.465 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.465 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.465 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.465 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.466 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.466 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.466 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.467 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.468 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.468 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.468 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.469 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.470 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:30:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:30:24.471 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
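The debug burst above is one complete ceilometer polling cycle: each pollster is registered against a single shared ThreadPoolExecutor, a local_instances discovery step runs per pollster with its result cached, and every pollster is then skipped because discovery finds no guest instances on this node yet. The slightly out-of-order timestamps are expected, since the entries are emitted from pool threads. Below is a minimal Python sketch of that discover-then-poll pattern; the names (Pollster, discover_local_instances) are illustrative, not ceilometer's actual API.

    # Minimal sketch of the discover-then-poll cycle logged above.
    # Illustrative only; not ceilometer's real implementation.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # On this node the discovery finds no guests, hence every
        # "Skip pollster ..., no resources found this cycle" line.
        return []

    class Pollster:
        def __init__(self, name):
            self.name = name

        def run(self, discovery_cache):
            # Cache the discovery result so the ~30 pollsters in one
            # cycle trigger only a handful of discovery calls (a real
            # agent would guard this shared cache with a lock).
            if "local_instances" not in discovery_cache:
                discovery_cache["local_instances"] = discover_local_instances()
            resources = discovery_cache["local_instances"]
            if not resources:
                print(f"Skip pollster {self.name}, no resources found this cycle")
                return []
            return [f"{self.name} sample for {r}" for r in resources]

    pollsters = [Pollster("cpu"), Pollster("memory.usage"),
                 Pollster("disk.device.read.bytes")]
    discovery_cache = {}
    with ThreadPoolExecutor() as executor:
        futures = [executor.submit(p.run, discovery_cache) for p in pollsters]
        samples = [s for f in futures for s in f.result()]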
Oct 02 19:30:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
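The _set_new_cache_sizes line is ceph-mon's periodic cache auto-tuning. Assuming inc/full/kv denote the incremental-osdmap, full-osdmap, and RocksDB key-value caches carved out of the reported cache_size, the figures can be sanity-checked with trivial arithmetic:

    # Sanity-check the figures from the _set_new_cache_sizes line above.
    cache_size = 1020054731   # total tunable cache budget (bytes)
    inc_alloc  = 348127232    # 332 MiB
    full_alloc = 348127232    # 332 MiB
    kv_alloc   = 322961408    # 308 MiB
    total = inc_alloc + full_alloc + kv_alloc
    print(f"{total} of {cache_size} bytes allocated "
          f"({100 * total / cache_size:.1f}%)")   # ~99.9%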
Oct 02 19:30:24 compute-0 sshd-session[348787]: Received disconnect from 192.168.122.30 port 54492:11: disconnected by user
Oct 02 19:30:24 compute-0 sshd-session[348787]: Disconnected from user zuul 192.168.122.30 port 54492
Oct 02 19:30:24 compute-0 sshd-session[348784]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:30:24 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Oct 02 19:30:24 compute-0 systemd-logind[793]: Session 58 logged out. Waiting for processes to exit.
Oct 02 19:30:24 compute-0 systemd-logind[793]: Removed session 58.
Oct 02 19:30:25 compute-0 ceph-mon[191910]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:25 compute-0 python3.9[348938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:26 compute-0 python3.9[349059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433424.8035405-1555-232314310082621/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
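The stat/copy pair above is how ansible's copy machinery decides whether a destination file changed: it compares the SHA-1 of the existing file (fetched by the preceding ansible.legacy.stat call) against the checksum of the rendered source (checksum=2c2474b5... in this run). A small stdlib sketch reproducing that comparison, using the path and checksum from the log:

    # Recompute the SHA-1 that the copy task above reports; if it matches
    # the existing file, the task would come back "ok" rather than "changed".
    import hashlib

    def sha1_of(path, chunk=65536):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    expected = "2c2474b5f24ef7c9ed37f49680082593e0d1100b"  # from the log line
    print(sha1_of("/var/lib/openstack/config/nova/config.json") == expected)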
Oct 02 19:30:27 compute-0 ceph-mon[191910]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:27 compute-0 python3.9[349209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:27 compute-0 podman[349259]: 2025-10-02 19:30:27.930998343 +0000 UTC m=+0.102639674 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:30:27 compute-0 podman[349260]: 2025-10-02 19:30:27.959364191 +0000 UTC m=+0.136582441 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:30:28 compute-0 python3.9[349314]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:29 compute-0 ceph-mon[191910]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:29 compute-0 python3.9[349477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:29 compute-0 podman[157186]: time="2025-10-02T19:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:30:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40787 "" "Go-http-client/1.1"
Oct 02 19:30:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
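The two GET requests above are podman_exporter scraping the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data earlier). A minimal stdlib sketch issuing the same containers/json query; the socket path and URL come from the log, while the UnixHTTPConnection helper is an illustrative assumption:

    # Issue the same libpod query seen above over podman's unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")   # host is ignored for unix sockets
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])   # e.g. ['ceilometer_agent_compute'] running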
Oct 02 19:30:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:30 compute-0 python3.9[349598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433428.880192-1555-121998179395173/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:30:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5704 writes, 24K keys, 5704 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5704 writes, 878 syncs, 6.50 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5639634ad1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Oct 02 19:30:31 compute-0 openstack_network_exporter[159337]: ERROR   19:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:30:31 compute-0 openstack_network_exporter[159337]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:31 compute-0 openstack_network_exporter[159337]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:31 compute-0 openstack_network_exporter[159337]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:30:31 compute-0 openstack_network_exporter[159337]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:30:31 compute-0 ceph-mon[191910]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 19:30:31 compute-0 podman[349729]: 2025-10-02 19:30:31.997632142 +0000 UTC m=+0.108944492 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:30:32 compute-0 podman[349723]: 2025-10-02 19:30:32.007094075 +0000 UTC m=+0.125060573 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm)
Oct 02 19:30:32 compute-0 podman[349731]: 2025-10-02 19:30:32.022325332 +0000 UTC m=+0.133093428 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:30:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:32 compute-0 python3.9[349780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:30:32.274 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:30:32.275 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:30:32.275 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:30:32 compute-0 python3.9[349927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433430.6231492-1555-38393139612639/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:33 compute-0 ceph-mon[191910]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:30:33 compute-0 python3.9[350077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:34 compute-0 python3.9[350198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433433.2304711-1555-203810636068279/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:35 compute-0 ceph-mon[191910]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:35 compute-0 sudo[350348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwuiuyraexzpvbqtywivjzehdwafmuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433435.12435-1624-231137482605959/AnsiballZ_file.py'
Oct 02 19:30:35 compute-0 sudo[350348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:35 compute-0 python3.9[350350]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:30:35 compute-0 sudo[350348]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:36 compute-0 podman[350351]: 2025-10-02 19:30:36.066006066 +0000 UTC m=+0.088021163 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:30:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:36 compute-0 podman[350392]: 2025-10-02 19:30:36.195488256 +0000 UTC m=+0.085287850 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:30:36 compute-0 podman[350388]: 2025-10-02 19:30:36.229179496 +0000 UTC m=+0.124665762 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:30:36 compute-0 sudo[350557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgssmzhkbcfajnlmbhlelwomizauoqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433436.234245-1632-20329880696274/AnsiballZ_copy.py'
Oct 02 19:30:36 compute-0 sudo[350557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:36 compute-0 python3.9[350559]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:30:37 compute-0 sudo[350557]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:37 compute-0 podman[350560]: 2025-10-02 19:30:37.125915169 +0000 UTC m=+0.101444112 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:30:37 compute-0 ceph-mon[191910]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:37 compute-0 sudo[350729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-visviepupcivpljggybwdqcuzsppfgca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433437.3045104-1640-2007335013772/AnsiballZ_stat.py'
Oct 02 19:30:37 compute-0 sudo[350729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:37 compute-0 python3.9[350731]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:30:38 compute-0 sudo[350729]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:39 compute-0 sudo[350881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdidftrtfgttubxkklrtdkaixloebgeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433438.3271945-1648-7388928785252/AnsiballZ_stat.py'
Oct 02 19:30:39 compute-0 sudo[350881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:39 compute-0 python3.9[350883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:39 compute-0 sudo[350881]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:39 compute-0 ceph-mon[191910]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:39 compute-0 sudo[351004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phivrbnquavtwwjgilwsyrbickqzksmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433438.3271945-1648-7388928785252/AnsiballZ_copy.py'
Oct 02 19:30:40 compute-0 sudo[351004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:40 compute-0 python3.9[351006]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759433438.3271945-1648-7388928785252/.source _original_basename=.i3ou2h4g follow=False checksum=04f64c987ea8ffefa7fa166236b70d6b227312e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 02 19:30:40 compute-0 sudo[351004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.525840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440525924, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1562, "num_deletes": 251, "total_data_size": 2561780, "memory_usage": 2590048, "flush_reason": "Manual Compaction"}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440544196, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2527048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14748, "largest_seqno": 16309, "table_properties": {"data_size": 2519786, "index_size": 4333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14461, "raw_average_key_size": 19, "raw_value_size": 2505316, "raw_average_value_size": 3394, "num_data_blocks": 198, "num_entries": 738, "num_filter_entries": 738, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433265, "oldest_key_time": 1759433265, "file_creation_time": 1759433440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18471 microseconds, and 11810 cpu microseconds.
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.544302) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2527048 bytes OK
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.544339) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.548934) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.548962) EVENT_LOG_v1 {"time_micros": 1759433440548954, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.548987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2555035, prev total WAL file size 2555035, number of live WAL files 2.
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.551504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2467KB)], [35(6812KB)]
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440551594, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9502778, "oldest_snapshot_seqno": -1}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3974 keys, 7733421 bytes, temperature: kUnknown
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440600033, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7733421, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7704523, "index_size": 17844, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97064, "raw_average_key_size": 24, "raw_value_size": 7630233, "raw_average_value_size": 1920, "num_data_blocks": 756, "num_entries": 3974, "num_filter_entries": 3974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.600295) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7733421 bytes
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.602670) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 159.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 4488, records dropped: 514 output_compression: NoCompression
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.602696) EVENT_LOG_v1 {"time_micros": 1759433440602681, "job": 16, "event": "compaction_finished", "compaction_time_micros": 48517, "compaction_time_cpu_micros": 34494, "output_level": 6, "num_output_files": 1, "total_output_size": 7733421, "num_input_records": 4488, "num_output_records": 3974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440603262, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433440605040, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.551292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.605329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.605338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.605343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.605347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:30:40.605352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:30:41 compute-0 ceph-mon[191910]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:41 compute-0 python3.9[351158]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:30:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:42 compute-0 python3.9[351310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:43 compute-0 ceph-mon[191910]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:44 compute-0 python3.9[351431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433442.1589878-1674-81535282642389/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:45 compute-0 python3.9[351581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:30:45 compute-0 ceph-mon[191910]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:46 compute-0 python3.9[351702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759433444.7062922-1689-191219072729419/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:30:47 compute-0 sudo[351852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upcdbhgozmssnbcfwjdnttzyiiweocoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433446.8534386-1706-45565970703465/AnsiballZ_container_config_data.py'
Oct 02 19:30:47 compute-0 sudo[351852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:47 compute-0 python3.9[351854]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 02 19:30:47 compute-0 sudo[351852]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:47 compute-0 ceph-mon[191910]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:48 compute-0 sudo[352004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhfogwgdrnecxnkllyasyrptyhekxjtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433447.875409-1715-94197271887377/AnsiballZ_container_config_hash.py'
Oct 02 19:30:48 compute-0 sudo[352004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:48 compute-0 python3.9[352006]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:30:48 compute-0 sudo[352004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:49 compute-0 sudo[352157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tecxpwcvpeoztshuoadacnpxfrgnptzl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433449.1325417-1725-182094838882122/AnsiballZ_edpm_container_manage.py'
Oct 02 19:30:49 compute-0 sudo[352157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:30:49 compute-0 ceph-mon[191910]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:49 compute-0 python3[352159]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:30:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:51 compute-0 ceph-mon[191910]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:52 compute-0 podman[352194]: 2025-10-02 19:30:52.636144669 +0000 UTC m=+0.070140876 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:30:53 compute-0 ceph-mon[191910]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:55 compute-0 ceph-mon[191910]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:57 compute-0 sudo[352232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:30:57 compute-0 sudo[352232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:30:57 compute-0 sudo[352232]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:57 compute-0 sudo[352257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:30:57 compute-0 sudo[352257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:30:57 compute-0 sudo[352257]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:57 compute-0 ceph-mon[191910]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:57 compute-0 sudo[352282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:30:57 compute-0 sudo[352282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:30:57 compute-0 sudo[352282]: pam_unix(sudo:session): session closed for user root
Oct 02 19:30:57 compute-0 sudo[352307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 19:30:57 compute-0 sudo[352307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:30:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:30:58 compute-0 podman[352344]: 2025-10-02 19:30:58.931142693 +0000 UTC m=+0.359550539 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:30:58 compute-0 podman[352343]: 2025-10-02 19:30:58.980458461 +0000 UTC m=+0.414290362 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:30:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:30:59 compute-0 podman[157186]: time="2025-10-02T19:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:30:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40787 "" "Go-http-client/1.1"
Oct 02 19:30:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
Oct 02 19:31:00 compute-0 ceph-mon[191910]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:01 compute-0 ceph-mon[191910]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:01 compute-0 openstack_network_exporter[159337]: ERROR   19:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:31:01 compute-0 openstack_network_exporter[159337]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:01 compute-0 openstack_network_exporter[159337]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:01 compute-0 openstack_network_exporter[159337]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:31:01 compute-0 openstack_network_exporter[159337]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:31:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:31:03
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images']
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:03 compute-0 ceph-mon[191910]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:31:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:31:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:06 compute-0 ceph-mon[191910]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:06 compute-0 podman[352398]: 2025-10-02 19:31:06.867852425 +0000 UTC m=+4.290361526 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:31:06 compute-0 sudo[352307]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:31:06 compute-0 podman[352397]: 2025-10-02 19:31:06.896729647 +0000 UTC m=+4.310620387 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Oct 02 19:31:06 compute-0 podman[352426]: 2025-10-02 19:31:06.909522239 +0000 UTC m=+0.325613571 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:31:06 compute-0 podman[352424]: 2025-10-02 19:31:06.918566051 +0000 UTC m=+0.348922344 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:31:06 compute-0 podman[352425]: 2025-10-02 19:31:06.923703128 +0000 UTC m=+0.338447014 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 19:31:06 compute-0 podman[352399]: 2025-10-02 19:31:06.954130191 +0000 UTC m=+4.376081547 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 02 19:31:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:31:07 compute-0 ceph-mon[191910]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:07 compute-0 podman[352171]: 2025-10-02 19:31:07.58766462 +0000 UTC m=+17.484394625 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 19:31:07 compute-0 sudo[352520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:07 compute-0 sudo[352520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:07 compute-0 sudo[352520]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:07 compute-0 podman[352522]: 2025-10-02 19:31:07.704995306 +0000 UTC m=+0.111332096 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, release-0.7.12=, config_id=edpm, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:31:07 compute-0 sudo[352573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:31:07 compute-0 sudo[352573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:07 compute-0 sudo[352573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:07 compute-0 podman[352604]: 2025-10-02 19:31:07.870727314 +0000 UTC m=+0.093008356 container create 72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:31:07 compute-0 podman[352604]: 2025-10-02 19:31:07.822236918 +0000 UTC m=+0.044518010 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 19:31:07 compute-0 python3[352159]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 02 19:31:07 compute-0 sudo[352617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:07 compute-0 sudo[352617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:07 compute-0 sudo[352617]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:08 compute-0 sudo[352651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:31:08 compute-0 sudo[352651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:08 compute-0 sudo[352157]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:08 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:08 compute-0 sudo[352651]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:08 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e7cf1271-e95d-4d3e-bf54-b495eea031ab does not exist
Oct 02 19:31:08 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c4f462af-0be6-4962-9849-f9a161fb6d12 does not exist
Oct 02 19:31:08 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 779b6719-e8c5-4e45-9e2d-717340ab9cf6 does not exist
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:31:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:31:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:31:08 compute-0 sudo[352827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:08 compute-0 sudo[352827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:08 compute-0 sudo[352827]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:08 compute-0 sudo[352901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjelxdfjczeyybjqqvublysmmuedihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433468.4009755-1733-228248305206325/AnsiballZ_stat.py'
Oct 02 19:31:08 compute-0 sudo[352901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:08 compute-0 sudo[352895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:31:08 compute-0 sudo[352895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:08 compute-0 sudo[352895]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:09 compute-0 python3.9[352911]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
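Annotation: ansible-ansible.builtin.stat with follow=False, get_checksum=True and checksum_algorithm=sha1 amounts to an lstat plus a SHA-1 over the file's contents. A rough Python approximation of what the module gathers for this path (an approximation of its behaviour, not its actual source):

    import hashlib, os, stat

    path = "/etc/sysconfig/podman_drop_in"
    info = {"exists": os.path.lexists(path)}
    if info["exists"]:
        st = os.lstat(path)                       # follow=False: do not follow symlinks
        info.update(size=st.st_size, mode=oct(st.st_mode & 0o7777), mtime=st.st_mtime)
        if stat.S_ISREG(st.st_mode):              # checksum only makes sense for regular files
            h = hashlib.sha1()                    # checksum_algorithm=sha1 in the log
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            info["checksum"] = h.hexdigest()
    print(info)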
Oct 02 19:31:09 compute-0 sudo[352925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:09 compute-0 sudo[352925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:09 compute-0 sudo[352925]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:09 compute-0 sudo[352901]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:09 compute-0 sudo[352952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:31:09 compute-0 sudo[352952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
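Annotation: the sudo command above is cephadm's wrapped ceph-volume call and carries the whole OSD layout, so it is worth unpacking. Rebuilt as an argv list straight from the logged command line, with nothing added:

    fsid = "6019f664-a1c2-5955-8391-692cb79a59f9"
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    argv = ["sudo", "/bin/python3", cephadm,
            "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
            "--image", image, "--timeout", "895",
            "ceph-volume", "--fsid", fsid, "--config-json", "-",
            "--", "lvm", "batch", "--no-auto", *lvs, "--yes", "--no-systemd"]
    print(" ".join(argv))

Given the "-" value of --config-json, cephadm presumably expects the cluster config and keyring as a JSON blob on stdin, i.e. subprocess.run(argv, input=config_json, text=True) in a real invocation.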
Oct 02 19:31:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:09 compute-0 ceph-mon[191910]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:31:09 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.733967454 +0000 UTC m=+0.058750781 container create cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:31:09 compute-0 systemd[1]: Started libpod-conmon-cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca.scope.
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.710190039 +0000 UTC m=+0.034973406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.859119408 +0000 UTC m=+0.183902775 container init cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.871658403 +0000 UTC m=+0.196441720 container start cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.876990646 +0000 UTC m=+0.201774063 container attach cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:31:09 compute-0 hungry_keller[353103]: 167 167
Oct 02 19:31:09 compute-0 systemd[1]: libpod-cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca.scope: Deactivated successfully.
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.891243107 +0000 UTC m=+0.216026444 container died cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb6636f482211b02b7bee66786926801257706959f6751b707bf0c4e001a906-merged.mount: Deactivated successfully.
Oct 02 19:31:09 compute-0 podman[353056]: 2025-10-02 19:31:09.961874904 +0000 UTC m=+0.286658221 container remove cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:31:09 compute-0 systemd[1]: libpod-conmon-cf2fd0248aba350c543c11c86efcdf4a557c8a2ea3312d3a39dababa6bca05ca.scope: Deactivated successfully.
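Annotation: create, init, start, attach, died, remove inside roughly 0.3 s, with the container printing only "167 167", is the footprint of a short-lived `podman run --rm` probe (167:167 is the ceph uid:gid in these images). A sketch of that kind of probe; the exact command cephadm ran here is an assumption, not something the log confirms:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],           # probe the image's ceph uid/gid
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)   # "167 167" if the image's ceph dirs are owned by uid/gid 167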
Oct 02 19:31:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:10 compute-0 podman[353172]: 2025-10-02 19:31:10.207283392 +0000 UTC m=+0.087656004 container create c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:10 compute-0 podman[353172]: 2025-10-02 19:31:10.169193864 +0000 UTC m=+0.049566526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:10 compute-0 systemd[1]: Started libpod-conmon-c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966.scope.
Oct 02 19:31:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
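Annotation: the repeated xfs "supports timestamps until 2038 (0x7fffffff)" warnings are the classic signed 32-bit time_t ceiling, emitted for xfs filesystems without the bigtime feature. The cutoff instant follows directly from the hex value in the message:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the last representable instant
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00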
Oct 02 19:31:10 compute-0 podman[353172]: 2025-10-02 19:31:10.380002007 +0000 UTC m=+0.260374659 container init c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:31:10 compute-0 podman[353172]: 2025-10-02 19:31:10.403106295 +0000 UTC m=+0.283478907 container start c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:31:10 compute-0 podman[353172]: 2025-10-02 19:31:10.41494575 +0000 UTC m=+0.295318362 container attach c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:31:11 compute-0 sudo[353219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bksdtgjrkgkouliaybjikxlkvlltvooc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433469.6717803-1745-135879224992349/AnsiballZ_container_config_data.py'
Oct 02 19:31:11 compute-0 sudo[353219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:11 compute-0 python3.9[353221]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 02 19:31:11 compute-0 sudo[353219]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:11 compute-0 ceph-mon[191910]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:11 compute-0 musing_turing[353188]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:31:11 compute-0 musing_turing[353188]: --> relative data size: 1.0
Oct 02 19:31:11 compute-0 musing_turing[353188]: --> All data devices are unavailable
Oct 02 19:31:11 compute-0 systemd[1]: libpod-c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966.scope: Deactivated successfully.
Oct 02 19:31:11 compute-0 systemd[1]: libpod-c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966.scope: Consumed 1.258s CPU time.
Oct 02 19:31:11 compute-0 podman[353172]: 2025-10-02 19:31:11.764580165 +0000 UTC m=+1.644952797 container died c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca93adb618969ab5d7aebdf7ae78dca4098159ad8e04b62d5d039c7b8c332c89-merged.mount: Deactivated successfully.
Oct 02 19:31:11 compute-0 podman[353172]: 2025-10-02 19:31:11.856286956 +0000 UTC m=+1.736659548 container remove c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:31:11 compute-0 systemd[1]: libpod-conmon-c3583cd0104408867658f103017c4e0b959eb766b00bd215ceb514c086f33966.scope: Deactivated successfully.
Oct 02 19:31:11 compute-0 sudo[352952]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:11 compute-0 sudo[353336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:11 compute-0 sudo[353336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:11 compute-0 sudo[353336]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:12 compute-0 sudo[353384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:31:12 compute-0 sudo[353384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:12 compute-0 sudo[353384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:12 compute-0 sudo[353433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:12 compute-0 sudo[353433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:12 compute-0 sudo[353433]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:12 compute-0 sudo[353483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjucjwjeciislrqaduvnfuxtydvcvbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433471.7811632-1754-267167458931319/AnsiballZ_container_config_hash.py'
Oct 02 19:31:12 compute-0 sudo[353483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:12 compute-0 sudo[353486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:31:12 compute-0 sudo[353486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:31:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
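Annotation: the pg_autoscaler lines above are reproducible arithmetic: each "pg target" equals the pool's space ratio times its bias times 300, where 300 is presumably mon_target_pg_per_osd (default 100) times the 3 OSDs on this node; the result is then rounded to a power of two and clamped by per-pool minimums (hence cephfs.cephfs.meta landing on 16). Checking a few pools against the logged values:

    # ratios and biases copied from the log lines above
    usage = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for pool, (ratio, bias) in usage.items():
        target = ratio * bias * 300
        print(f"{pool}: pg target {target}")
    # .mgr               -> 0.0021557249951162337  (matches the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (matches the log)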
Oct 02 19:31:12 compute-0 python3.9[353489]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:31:12 compute-0 sudo[353483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:12 compute-0 podman[353573]: 2025-10-02 19:31:12.841837042 +0000 UTC m=+0.077856702 container create db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:31:12 compute-0 podman[353573]: 2025-10-02 19:31:12.802311155 +0000 UTC m=+0.038330785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:12 compute-0 systemd[1]: Started libpod-conmon-db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422.scope.
Oct 02 19:31:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:12 compute-0 podman[353573]: 2025-10-02 19:31:12.979809369 +0000 UTC m=+0.215829079 container init db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:31:12 compute-0 podman[353573]: 2025-10-02 19:31:12.997900812 +0000 UTC m=+0.233920462 container start db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:31:13 compute-0 fervent_payne[353588]: 167 167
Oct 02 19:31:13 compute-0 systemd[1]: libpod-db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422.scope: Deactivated successfully.
Oct 02 19:31:13 compute-0 podman[353573]: 2025-10-02 19:31:13.052674536 +0000 UTC m=+0.288694246 container attach db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:31:13 compute-0 podman[353573]: 2025-10-02 19:31:13.053095007 +0000 UTC m=+0.289114657 container died db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-de67a75f5f3ae56466631a09b6f57551a64d85b2a8985e252984d1209f72ab14-merged.mount: Deactivated successfully.
Oct 02 19:31:13 compute-0 podman[353573]: 2025-10-02 19:31:13.495774266 +0000 UTC m=+0.731793926 container remove db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:31:13 compute-0 systemd[1]: libpod-conmon-db121d46042f51617d733b33301ed546d48b5198a37058d5ee95fb4194b96422.scope: Deactivated successfully.
Oct 02 19:31:13 compute-0 ceph-mon[191910]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:13 compute-0 podman[353638]: 2025-10-02 19:31:13.78145374 +0000 UTC m=+0.099453109 container create 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:31:13 compute-0 podman[353638]: 2025-10-02 19:31:13.735717558 +0000 UTC m=+0.053717007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:13 compute-0 systemd[1]: Started libpod-conmon-965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607.scope.
Oct 02 19:31:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d687c768ae08dcc7f5942afdf6cda09cf11d8c2db48e6946f6a4242c7d5534bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d687c768ae08dcc7f5942afdf6cda09cf11d8c2db48e6946f6a4242c7d5534bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d687c768ae08dcc7f5942afdf6cda09cf11d8c2db48e6946f6a4242c7d5534bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d687c768ae08dcc7f5942afdf6cda09cf11d8c2db48e6946f6a4242c7d5534bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:13 compute-0 podman[353638]: 2025-10-02 19:31:13.924266626 +0000 UTC m=+0.242266015 container init 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:31:13 compute-0 podman[353638]: 2025-10-02 19:31:13.939328439 +0000 UTC m=+0.257327798 container start 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:31:13 compute-0 podman[353638]: 2025-10-02 19:31:13.94348469 +0000 UTC m=+0.261484079 container attach 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:31:14 compute-0 sudo[353761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrkxejaooejqixnzljglbtdhfmozhsc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433473.6643758-1764-64533105062202/AnsiballZ_edpm_container_manage.py'
Oct 02 19:31:14 compute-0 sudo[353761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:14 compute-0 python3[353763]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:31:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]: {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     "0": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "devices": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "/dev/loop3"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             ],
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_name": "ceph_lv0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_size": "21470642176",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "name": "ceph_lv0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "tags": {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_name": "ceph",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.crush_device_class": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.encrypted": "0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_id": "0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.vdo": "0"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             },
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "vg_name": "ceph_vg0"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         }
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     ],
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     "1": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "devices": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "/dev/loop4"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             ],
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_name": "ceph_lv1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_size": "21470642176",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "name": "ceph_lv1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "tags": {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_name": "ceph",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.crush_device_class": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.encrypted": "0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_id": "1",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.vdo": "0"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             },
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "vg_name": "ceph_vg1"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         }
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     ],
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     "2": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "devices": [
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "/dev/loop5"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             ],
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_name": "ceph_lv2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_size": "21470642176",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "name": "ceph_lv2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "tags": {
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.cluster_name": "ceph",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.crush_device_class": "",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.encrypted": "0",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osd_id": "2",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:                 "ceph.vdo": "0"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             },
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "type": "block",
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:             "vg_name": "ceph_vg2"
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:         }
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]:     ]
Oct 02 19:31:14 compute-0 hungry_lichterman[353706]: }
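Annotation: the JSON block printed by hungry_lichterman is `ceph-volume lvm list --format json` output, a map of OSD id to its logical volumes, and it shows OSDs 0, 1 and 2 already prepared on the three LVs, which is presumably why the earlier lvm batch reported "All data devices are unavailable". A few lines of Python condense it to the id / LV / backing-device view (an abridged copy of the logged JSON is inlined so the snippet runs on its own):

    import json

    raw = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}}],
     "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
            "tags": {"ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}}],
     "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"],
            "tags": {"ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}}]}
    """

    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")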
Oct 02 19:31:14 compute-0 systemd[1]: libpod-965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607.scope: Deactivated successfully.
Oct 02 19:31:14 compute-0 podman[353638]: 2025-10-02 19:31:14.821176043 +0000 UTC m=+1.139175452 container died 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:31:14 compute-0 podman[353798]: 2025-10-02 19:31:14.835332631 +0000 UTC m=+0.130779116 container create 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:31:14 compute-0 podman[353798]: 2025-10-02 19:31:14.767302613 +0000 UTC m=+0.062749158 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 19:31:14 compute-0 python3[353763]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
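Annotation: the PODMAN-CONTAINER-DEBUG line shows both the config_data dict and the `podman create` command generated from it, so the translation is visible in one place: environment becomes --env, net becomes --network, privileged becomes --privileged=True, user becomes --user, each volume becomes a --volume, and command is appended after the image. A simplified sketch of that mapping (not the module's actual code; labels and the conmon pidfile flag are omitted, and the volume list is abridged):

    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name,
                "--log-driver", "journald", "--log-level", "info"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if "command" in cfg:
            args.append(cfg["command"])
        return args

    cfg = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True, "user": "nova", "command": "kolla_start", "net": "host",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro"],  # abridged
    }
    print(" ".join(podman_create_args("nova_compute", cfg)))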
Oct 02 19:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d687c768ae08dcc7f5942afdf6cda09cf11d8c2db48e6946f6a4242c7d5534bb-merged.mount: Deactivated successfully.
Oct 02 19:31:14 compute-0 podman[353638]: 2025-10-02 19:31:14.938323813 +0000 UTC m=+1.256323182 container remove 965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:31:14 compute-0 systemd[1]: libpod-conmon-965a1aee664ecf679fca7ca6b424ee724ab3b9df6059ee106fe6faf68869c607.scope: Deactivated successfully.
Oct 02 19:31:14 compute-0 sudo[353486]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:15 compute-0 sudo[353761]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:15 compute-0 sudo[353849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:15 compute-0 sudo[353849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:15 compute-0 sudo[353849]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:15 compute-0 sudo[353877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:31:15 compute-0 sudo[353877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:15 compute-0 sudo[353877]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:15 compute-0 sudo[353923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:15 compute-0 sudo[353923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:15 compute-0 sudo[353923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:15 compute-0 sudo[353956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:31:15 compute-0 sudo[353956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:15 compute-0 ceph-mon[191910]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:15 compute-0 sudo[354144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twmqckjqkpjyzmkezwwjposwhnyuwnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433475.3659043-1772-261593064650180/AnsiballZ_stat.py'
Oct 02 19:31:15 compute-0 sudo[354144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:15 compute-0 podman[354127]: 2025-10-02 19:31:15.927129886 +0000 UTC m=+0.061193576 container create 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:31:15 compute-0 systemd[1]: Started libpod-conmon-82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147.scope.
Oct 02 19:31:15 compute-0 podman[354127]: 2025-10-02 19:31:15.901794289 +0000 UTC m=+0.035858019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:16 compute-0 podman[354127]: 2025-10-02 19:31:16.047862002 +0000 UTC m=+0.181925792 container init 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 19:31:16 compute-0 podman[354127]: 2025-10-02 19:31:16.065541625 +0000 UTC m=+0.199605345 container start 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:31:16 compute-0 podman[354127]: 2025-10-02 19:31:16.073200369 +0000 UTC m=+0.207264139 container attach 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:16 compute-0 objective_franklin[354157]: 167 167
Oct 02 19:31:16 compute-0 systemd[1]: libpod-82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147.scope: Deactivated successfully.
Oct 02 19:31:16 compute-0 podman[354127]: 2025-10-02 19:31:16.078909262 +0000 UTC m=+0.212972952 container died 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e5fde665ecf1231fde1023e13b38d4c9c4cf7d4d55386361d74bd4a90b20c2-merged.mount: Deactivated successfully.
Oct 02 19:31:16 compute-0 python3.9[354152]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
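
The ansible-ansible.builtin.stat line above is Ansible's stat module echoing its parameters into the journal: with get_checksum=True and checksum_algorithm=sha1 it stats the path and, for a regular file, hashes its contents. A minimal sketch of that behaviour using only the standard library (the path comes from the log; the helper name and the exact set of returned fields are ours, the real module returns far more):

    import hashlib
    import os
    import stat

    def stat_facts(path, checksum_algorithm="sha1"):
        # follow=False in the log means symlinks are not dereferenced,
        # hence lstat rather than stat.
        if not os.path.lexists(path):
            return {"exists": False}
        st = os.lstat(path)
        facts = {
            "exists": True,
            "mode": oct(stat.S_IMODE(st.st_mode)),
            "uid": st.st_uid,
            "gid": st.st_gid,
            "size": st.st_size,
            "isreg": stat.S_ISREG(st.st_mode),
        }
        if facts["isreg"]:
            h = hashlib.new(checksum_algorithm)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            facts["checksum"] = h.hexdigest()
        return facts

    print(stat_facts("/etc/sysconfig/podman_drop_in"))
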
Oct 02 19:31:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:16 compute-0 podman[354127]: 2025-10-02 19:31:16.180735613 +0000 UTC m=+0.314799333 container remove 82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:31:16 compute-0 sudo[354144]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:16 compute-0 systemd[1]: libpod-conmon-82d474846cffe0a0d499736095385f043c5a37d1f16f3107dd207df2e52aa147.scope: Deactivated successfully.
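
The podman lines for container 82d474846cff... above trace the full short-lived-container pattern cephadm uses: create, image pull check, init, start, attach, died, remove, with the matching conmon scope torn down by systemd. Its only output, "167 167", is evidently the uid/gid probe for the ceph user inside the image. The same event stream can be followed live with podman events; a minimal sketch, assuming the --format json mode and the Status/Name/ID field names of recent podman releases:

    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Field names differ slightly across podman versions.
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
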
Oct 02 19:31:16 compute-0 podman[354206]: 2025-10-02 19:31:16.409921827 +0000 UTC m=+0.073280139 container create ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:16 compute-0 podman[354206]: 2025-10-02 19:31:16.383510731 +0000 UTC m=+0.046869063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:31:16 compute-0 systemd[1]: Started libpod-conmon-ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391.scope.
Oct 02 19:31:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b3a81e378c7378feff3da3db31d6b8c9adb405839bc9a2a1c9fcc89465e8d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b3a81e378c7378feff3da3db31d6b8c9adb405839bc9a2a1c9fcc89465e8d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b3a81e378c7378feff3da3db31d6b8c9adb405839bc9a2a1c9fcc89465e8d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b3a81e378c7378feff3da3db31d6b8c9adb405839bc9a2a1c9fcc89465e8d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
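
The "supports timestamps until 2038 (0x7fffffff)" kernel warnings above are informational: these overlay mounts carry 32-bit signed inode timestamps (xfs without the bigtime feature), and 0x7fffffff seconds past the epoch is the familiar y2038 limit. A one-liner confirms the cutoff the kernel is quoting:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
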
Oct 02 19:31:16 compute-0 podman[354206]: 2025-10-02 19:31:16.593690168 +0000 UTC m=+0.257048580 container init ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:31:16 compute-0 podman[354206]: 2025-10-02 19:31:16.609322416 +0000 UTC m=+0.272680758 container start ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 19:31:16 compute-0 podman[354206]: 2025-10-02 19:31:16.618239234 +0000 UTC m=+0.281597636 container attach ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:31:17 compute-0 sudo[354352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwlyqimlcncbtqvevskilshphbeyaqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433476.5375667-1781-178929311784795/AnsiballZ_file.py'
Oct 02 19:31:17 compute-0 sudo[354352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:17 compute-0 python3.9[354354]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:31:17 compute-0 sudo[354352]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:17 compute-0 ceph-mon[191910]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:17 compute-0 gracious_galileo[354235]: {
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_id": 1,
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "type": "bluestore"
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     },
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_id": 2,
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "type": "bluestore"
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     },
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_id": 0,
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:         "type": "bluestore"
Oct 02 19:31:17 compute-0 gracious_galileo[354235]:     }
Oct 02 19:31:17 compute-0 gracious_galileo[354235]: }
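
The JSON block above is the reply to the ceph-volume "raw list --format json" call issued via cephadm at 19:31:15: a map keyed by OSD UUID, one bluestore entry per logical volume. Turning it into an osd_id-to-device map is a small exercise; a sketch, with one entry copied from the log and variable names ours:

    import json

    # One entry copied from the log output above; the real reply has three.
    RAW_LIST_OUTPUT = """{
        "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
            "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
            "type": "bluestore"
        }
    }"""

    by_osd = {
        e["osd_id"]: e["device"]
        for e in json.loads(RAW_LIST_OUTPUT).values()
        if e.get("type") == "bluestore"
    }
    print(by_osd)  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}
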
Oct 02 19:31:17 compute-0 systemd[1]: libpod-ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391.scope: Deactivated successfully.
Oct 02 19:31:17 compute-0 podman[354206]: 2025-10-02 19:31:17.85119191 +0000 UTC m=+1.514550222 container died ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:31:17 compute-0 systemd[1]: libpod-ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391.scope: Consumed 1.219s CPU time.
Oct 02 19:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-58b3a81e378c7378feff3da3db31d6b8c9adb405839bc9a2a1c9fcc89465e8d8-merged.mount: Deactivated successfully.
Oct 02 19:31:17 compute-0 podman[354206]: 2025-10-02 19:31:17.985422227 +0000 UTC m=+1.648780529 container remove ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:31:18 compute-0 sudo[354541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmuivrzhhuqhdnmhhgtuoaprkdjirxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433477.335919-1781-270414075211747/AnsiballZ_copy.py'
Oct 02 19:31:18 compute-0 systemd[1]: libpod-conmon-ff4479fe228d9ce4d99910fec30ffbe8c2d06ebacb2f6cd5b4da3c36ac5ec391.scope: Deactivated successfully.
Oct 02 19:31:18 compute-0 sudo[354541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:18 compute-0 sudo[353956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:31:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:31:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6263dfb8-e29d-459d-8fcd-4e941045fd58 does not exist
Oct 02 19:31:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9531a3bd-3003-41e6-8016-50c0ac857800 does not exist
Oct 02 19:31:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:18 compute-0 sudo[354545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:31:18 compute-0 sudo[354545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:18 compute-0 sudo[354545]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:18 compute-0 python3.9[354544]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433477.335919-1781-270414075211747/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:31:18 compute-0 sudo[354570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:31:18 compute-0 sudo[354570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:31:18 compute-0 sudo[354570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:18 compute-0 sudo[354541]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:18 compute-0 sudo[354668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpjwlmjmizixitfpkviafcmzaaxtdht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433477.335919-1781-270414075211747/AnsiballZ_systemd.py'
Oct 02 19:31:18 compute-0 sudo[354668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:18 compute-0 python3.9[354670]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:31:18 compute-0 systemd[1]: Reloading.
Oct 02 19:31:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:31:19 compute-0 ceph-mon[191910]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:19 compute-0 systemd-rc-local-generator[354695]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:31:19 compute-0 systemd-sysv-generator[354703]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:31:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:19 compute-0 sudo[354668]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:20 compute-0 sudo[354781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndzjnudxsuadxzfnmyyershczctfdun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433477.335919-1781-270414075211747/AnsiballZ_systemd.py'
Oct 02 19:31:20 compute-0 sudo[354781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:20 compute-0 python3.9[354783]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:31:20 compute-0 systemd[1]: Reloading.
Oct 02 19:31:20 compute-0 systemd-sysv-generator[354816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:31:20 compute-0 systemd-rc-local-generator[354812]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:31:20 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 19:31:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:21 compute-0 podman[354822]: 2025-10-02 19:31:21.189129345 +0000 UTC m=+0.227833928 container init 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:31:21 compute-0 podman[354822]: 2025-10-02 19:31:21.227012787 +0000 UTC m=+0.265717290 container start 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
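
The config_data dict embedded in the two podman lines above is the edpm_ansible container definition for nova_compute. Its keys map onto a podman run invocation roughly as follows; this is an illustrative reconstruction under assumed flag mappings, not the module's own code, and the volume list is abbreviated:

    def podman_run_args(name, cfg):
        # Hypothetical mapping from a config_data-style dict to CLI flags.
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if cfg.get("command"):
            args.append(cfg["command"])
        return args

    cfg = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True,
        "user": "nova",
        "net": "host",
        "command": "kolla_start",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/var/lib/nova:/var/lib/nova:shared"],  # abbreviated
    }
    print(" ".join(podman_run_args("nova_compute", cfg)))
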
Oct 02 19:31:21 compute-0 nova_compute[354838]: + sudo -E kolla_set_configs
Oct 02 19:31:21 compute-0 podman[354822]: nova_compute
Oct 02 19:31:21 compute-0 systemd[1]: Started nova_compute container.
Oct 02 19:31:21 compute-0 sudo[354781]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Validating config file
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying service configuration files
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Deleting /etc/ceph
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Creating directory /etc/ceph
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Writing out command to execute
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:21 compute-0 nova_compute[354838]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
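
All of the Copying/Setting-permission lines above are driven by the file that kolla_set_configs loaded at the start ("Loading config file at /var/lib/kolla/config_files/config.json"). Its essential shape, sketched with a single file entry: the source/dest paths below are taken from the log, while the owner/perm values and the exact schema details are assumptions about kolla's format rather than anything this log records:

    # Illustrative shape of a kolla config.json.
    KOLLA_CONFIG = {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",   # assumed
                "perm": "0600",    # assumed
            },
        ],
    }
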
Oct 02 19:31:21 compute-0 nova_compute[354838]: ++ cat /run_command
Oct 02 19:31:21 compute-0 nova_compute[354838]: + CMD=nova-compute
Oct 02 19:31:21 compute-0 nova_compute[354838]: + ARGS=
Oct 02 19:31:21 compute-0 nova_compute[354838]: + sudo kolla_copy_cacerts
Oct 02 19:31:21 compute-0 ceph-mon[191910]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:21 compute-0 nova_compute[354838]: + [[ ! -n '' ]]
Oct 02 19:31:21 compute-0 nova_compute[354838]: + . kolla_extend_start
Oct 02 19:31:21 compute-0 nova_compute[354838]: Running command: 'nova-compute'
Oct 02 19:31:21 compute-0 nova_compute[354838]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 19:31:21 compute-0 nova_compute[354838]: + umask 0022
Oct 02 19:31:21 compute-0 nova_compute[354838]: + exec nova-compute
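
The shell trace above is the tail of kolla_start: after kolla_set_configs and kolla_copy_cacerts it reads the command written out earlier ("Writing out command to execute"), finds no extra ARGS, sets umask 0022, and execs it, so the container's main process becomes nova-compute. The same final step, sketched in Python:

    import os
    import shlex

    # /run_command was written by kolla_set_configs; here it holds
    # just "nova-compute".
    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())

    os.umask(0o022)
    os.execvp(cmd[0], cmd)  # replaces this process, as exec does in the trace
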
Oct 02 19:31:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:22 compute-0 python3.9[355000]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:31:23 compute-0 podman[355124]: 2025-10-02 19:31:23.31601696 +0000 UTC m=+0.082405683 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
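
The health_status=healthy event above is podman executing the healthcheck configured for the multipathd container ('test': '/openstack/healthcheck' against the mounted healthchecks directory). The same probe can be fired by hand; assuming the podman healthcheck subcommand is available, exit status 0 means the check passed:

    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
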
Oct 02 19:31:23 compute-0 ceph-mon[191910]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:23 compute-0 python3.9[355162]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.698 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.698 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.698 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.698 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
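
os_vif finds the three plugins named above (linux_bridge, noop, ovs) through setuptools entry points in the os_vif namespace, loaded via stevedore. The registered names can be listed without importing the plugins themselves, using the dict-style entry_points() API of the Python 3.9 seen in this log:

    from importlib.metadata import entry_points

    # On Python 3.9, entry_points() returns a dict of group -> entry points.
    for ep in entry_points().get("os_vif", []):
        print(ep.name, "->", ep.value)
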
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.864 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:23 compute-0 nova_compute[354838]: 2025-10-02 19:31:23.904 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
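
The grep above is a capability probe, not a data lookup: the iSCSI connector code greps the iscsiadm binary itself for the node.session.scan option string, and the "returned: 0" recorded in the log means manual-scan support is present. Reproduced as a sketch:

    import subprocess

    rc = subprocess.call(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
    )
    print("manual iSCSI scan supported" if rc == 0 else "not supported")
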
Oct 02 19:31:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.718 2 INFO nova.virt.driver [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.904 2 INFO nova.compute.provider_config [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.932 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.933 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.933 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
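
The Acquiring/Acquired/Releasing trio above is oslo.concurrency's lockutils logging at debug level around nova's startup singleton lock. The usual caller pattern is a context manager; a minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Entry and exit emit the same DEBUG messages seen in the journal.
    with lockutils.lock("singleton_lock"):
        pass  # critical section
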
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.933 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.933 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.933 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.934 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.935 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.936 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.937 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.938 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.939 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.940 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.941 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.942 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.943 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.944 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.945 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.946 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.947 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.948 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.949 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.950 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.951 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.952 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.953 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.954 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.955 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.956 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.957 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.958 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.959 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.960 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.961 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.962 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.963 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.964 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.965 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.966 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.967 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.968 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.969 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.970 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.971 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.972 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.973 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.974 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.975 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.976 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.977 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.978 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.979 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.980 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.981 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.982 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.983 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.984 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.985 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.986 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.986 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.986 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.986 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.986 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.987 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.988 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.989 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.990 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.991 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.992 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.993 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.994 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.994 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.994 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.994 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.994 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.995 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.996 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.997 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.998 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:24 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:24.999 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.000 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.000 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.000 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.000 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.001 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.001 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.001 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.001 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.002 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.002 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.002 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.002 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.002 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.003 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.004 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.005 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.006 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.007 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.007 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.007 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.007 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.007 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 WARNING oslo_config.cfg [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 19:31:25 compute-0 nova_compute[354838]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 19:31:25 compute-0 nova_compute[354838]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 19:31:25 compute-0 nova_compute[354838]: and ``live_migration_inbound_addr`` respectively.
Oct 02 19:31:25 compute-0 nova_compute[354838]: ).  Its value may be silently ignored in the future.
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.008 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.009 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.009 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.009 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.009 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.009 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.010 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rbd_secret_uuid        = 6019f664-a1c2-5955-8391-692cb79a59f9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.011 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.012 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.013 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.014 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.015 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.016 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.017 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.018 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.019 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.020 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.021 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.022 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.023 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.024 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.025 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.026 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.027 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.028 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.029 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.030 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.030 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.030 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.030 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.031 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.031 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.031 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.031 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.031 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.032 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.032 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.033 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.034 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.035 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.036 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.037 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.038 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.039 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.040 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.041 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.042 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.043 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.044 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.045 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.046 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.047 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
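The frames above are oslo.config's standard startup dump: the service calls CONF.log_opt_values(LOG, logging.DEBUG) (the cfg.py:2609 callsite cited on every line), which walks each registered option group and logs one line per option, printing **** for anything registered with secret=True (hence the masked vmware.host_password above and oslo_messaging_notifications.transport_url below). A minimal, self-contained sketch of that mechanism follows; the option names and defaults are illustrative, patterned on the [vnc] group rather than copied from nova's actual registration code.

    # Sketch: reproduce the one-line-per-option dump pattern seen in this log.
    import logging
    import sys

    from oslo_config import cfg

    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.BoolOpt('enabled', default=True),
            cfg.HostAddressOpt('server_listen', default='0.0.0.0'),
            cfg.PortOpt('novncproxy_port', default=6080),
            cfg.StrOpt('password', secret=True),  # rendered as **** when dumped
        ],
        group='vnc',
    )

    CONF([])  # parse an (empty) command line; --config-file args would go here
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits frames like the ones above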
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.048 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.049 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.050 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.051 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.052 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
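wsgi.wsgi_log_format is an access-log template consumed with Python %-dict formatting, one line per finished request. A worked example of the substitution only; every field value below is made up for illustration, not taken from this host:

    # Illustrative request fields substituted into the configured template.
    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    print(fmt % {
        'client_ip': '192.168.122.1',
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1843,
        'wall_seconds': 0.0321457,
    })
    # 192.168.122.1 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1843 time: 0.0321457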
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.053 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.054 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
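The [oslo_policy] block above is operationally notable: enforce_new_defaults and enforce_scope are both True, so this deployment uses the new secure-RBAC policy defaults and rejects tokens used outside their intended scope, with policy.yaml plus any files under policy.d/ as overrides. A minimal sketch of how these options are consumed; this is generic oslo.policy wiring, not nova's actual policy module:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF

    # The Enforcer reads oslo_policy.policy_file / policy_dirs /
    # policy_default_rule from the same CONF object dumped above.
    enforcer = policy.Enforcer(CONF)
    enforcer.load_rules()  # merges policy.yaml with policy.d/ overrides
    # With enforce_scope=True, authorize() raises InvalidScope when a
    # token's scope does not match the scope a rule requires.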
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.055 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.056 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.057 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.058 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.059 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
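Taken together, the two oslo.messaging blocks describe how this nova-compute talks to RabbitMQ: durable quorum queues for RPC (amqp_durable_queues and rabbit_quorum_queue both True), and a noop notification driver, meaning notifications are constructed but discarded. A sketch of the consuming side under those options; the topic, server, and payload names here are hypothetical:

    import oslo_messaging as messaging
    from oslo_config import cfg

    CONF = cfg.CONF

    # get_rpc_transport() reads the (masked) transport_url plus every
    # [oslo_messaging_rabbit] option above: quorum queues, heartbeats,
    # retry intervals, and so on.
    rpc_transport = messaging.get_rpc_transport(CONF)
    target = messaging.Target(topic='compute', server='compute-0')

    # With oslo_messaging_notifications.driver = ['noop'], this emits nothing.
    notify_transport = messaging.get_notification_transport(CONF)
    notifier = messaging.Notifier(notify_transport,
                                  publisher_id='nova-compute:compute-0')
    notifier.info({}, 'compute.instance.create.end', {'instance_id': 'fake-uuid'})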
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.060 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.061 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.062 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.063 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.064 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.065 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
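[oslo_limit] carries the keystoneauth credentials (auth_url, username=nova, system_scope=all) that a unified-limits quota driver would use to fetch registered limits from Keystone. A heavily simplified sketch of the consumer; the usage callback and resource name are made up for illustration (oslo.limit's Enforcer takes a callable that reports current usage per resource):

    from oslo_limit import limit

    def count_usage(project_id, resource_names):
        # Hypothetical stand-in: a real callback would count this
        # project's servers/cores/ram from the database.
        return {name: 0 for name in resource_names}

    enforcer = limit.Enforcer(count_usage)
    # Raises ProjectOverLimit if the requested delta would exceed the
    # limit registered in Keystone for this project.
    enforcer.enforce('example-project-id', {'servers': 1})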
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.066 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.067 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
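The privsep capabilities values in this dump are numeric Linux capability indices from linux/capability.h: [12] for the linux-bridge plug group, [12, 1] for the OVS plug group above, and [21] for privsep_osbrick a few lines below. Decoding them; the mapping is standard kernel numbering, nothing deployment-specific:

    # Decode the capability lists logged for the privsep groups.
    CAPS = {1: 'CAP_DAC_OVERRIDE', 12: 'CAP_NET_ADMIN', 21: 'CAP_SYS_ADMIN'}
    for group, caps in {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
    }.items():
        print(group, '->', [CAPS[c] for c in caps])
    # vif_plug_linux_bridge_privileged -> ['CAP_NET_ADMIN']
    # vif_plug_ovs_privileged -> ['CAP_NET_ADMIN', 'CAP_DAC_OVERRIDE']
    # privsep_osbrick -> ['CAP_SYS_ADMIN']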
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.068 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.069 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.070 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.071 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.072 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.072 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.072 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.072 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.072 2 DEBUG oslo_service.service [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.073 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.100 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.101 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.102 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.102 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.123 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f715bb33370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.130 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f715bb33370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.132 2 INFO nova.virt.libvirt.driver [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Connection event '1' reason 'None'
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.203 2 WARNING nova.virt.libvirt.driver [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 19:31:25 compute-0 nova_compute[354838]: 2025-10-02 19:31:25.205 2 DEBUG nova.virt.libvirt.volume.mount [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 19:31:25 compute-0 python3.9[355346]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:31:25 compute-0 ceph-mon[191910]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:26 compute-0 sudo[355514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skawkwpasxmygletkwzsrthaojfznzmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433485.6674862-1841-44173171128815/AnsiballZ_podman_container.py'
Oct 02 19:31:26 compute-0 sudo[355514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.374 2 INFO nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]: 
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <host>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <uuid>b440ca7f-ed29-4df1-9220-db3fd23c361a</uuid>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <arch>x86_64</arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model>EPYC-Rome-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <vendor>AMD</vendor>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <microcode version='16777317'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <signature family='23' model='49' stepping='0'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='x2apic'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='tsc-deadline'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='osxsave'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='hypervisor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='tsc_adjust'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='spec-ctrl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='stibp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='arch-capabilities'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='cmp_legacy'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='topoext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='virt-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='lbrv'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='tsc-scale'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='vmcb-clean'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='pause-filter'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='pfthreshold'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='svme-addr-chk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='rdctl-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='mds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature name='pschange-mc-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <pages unit='KiB' size='4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <pages unit='KiB' size='2048'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <pages unit='KiB' size='1048576'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <power_management>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <suspend_mem/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </power_management>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <iommu support='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <migration_features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <live/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <uri_transports>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <uri_transport>tcp</uri_transport>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <uri_transport>rdma</uri_transport>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </uri_transports>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </migration_features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <topology>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <cells num='1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <cell id='0'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <memory unit='KiB'>7864100</memory>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <pages unit='KiB' size='4'>1966025</pages>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <distances>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <sibling id='0' value='10'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           </distances>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           <cpus num='8'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:           </cpus>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         </cell>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </cells>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </topology>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <cache>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </cache>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <secmodel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model>selinux</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <doi>0</doi>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </secmodel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <secmodel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model>dac</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <doi>0</doi>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </secmodel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </host>
Oct 02 19:31:26 compute-0 nova_compute[354838]: 
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <guest>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <os_type>hvm</os_type>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <arch name='i686'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <wordsize>32</wordsize>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <domain type='qemu'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <domain type='kvm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <pae/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <nonpae/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <acpi default='on' toggle='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <apic default='on' toggle='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <cpuselection/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <deviceboot/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <externalSnapshot/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </guest>
Oct 02 19:31:26 compute-0 nova_compute[354838]: 
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <guest>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <os_type>hvm</os_type>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <arch name='x86_64'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <wordsize>64</wordsize>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <domain type='qemu'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <domain type='kvm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <acpi default='on' toggle='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <apic default='on' toggle='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <cpuselection/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <deviceboot/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <externalSnapshot/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </guest>
Oct 02 19:31:26 compute-0 nova_compute[354838]: 
Oct 02 19:31:26 compute-0 nova_compute[354838]: </capabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]: 
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.401 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.453 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 19:31:26 compute-0 nova_compute[354838]: <domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <domain>kvm</domain>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <arch>i686</arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <vcpu max='240'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <iothreads supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <os supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='firmware'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <loader supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>rom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pflash</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='readonly'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>yes</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='secure'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </loader>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </os>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='maximumMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <vendor>AMD</vendor>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='succor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='custom' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-128'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-256'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-512'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <memoryBacking supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='sourceType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>file</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>anonymous</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>memfd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </memoryBacking>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <disk supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='diskDevice'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>disk</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cdrom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>floppy</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>lun</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ide</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>fdc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>sata</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </disk>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <graphics supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vnc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egl-headless</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>dbus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </graphics>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <video supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='modelType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vga</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cirrus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>none</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>bochs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ramfb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </video>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hostdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='mode'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>subsystem</value>
Oct 02 19:31:26 compute-0 python3.9[355516]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='startupPolicy'>
Oct 02 19:31:26 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>mandatory</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>requisite</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>optional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='subsysType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pci</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='capsType'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='pciBackend'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hostdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <rng supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>random</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </rng>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <filesystem supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='driverType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>path</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>handle</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtiofs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </filesystem>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <tpm supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-tis</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-crb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emulator</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>external</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendVersion'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>2.0</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </tpm>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <redirdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </redirdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <channel supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pty</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>unix</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </channel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <crypto supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>qemu</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </crypto>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <interface supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>passt</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </interface>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <panic supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>isa</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>hyperv</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </panic>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <gic supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <genid supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backup supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <async-teardown supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <ps2 supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sev supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sgx supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hyperv supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='features'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>relaxed</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vapic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>spinlocks</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vpindex</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>runtime</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>synic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>stimer</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reset</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vendor_id</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>frequencies</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reenlightenment</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tlbflush</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ipi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>avic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emsr_bitmap</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>xmm_input</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hyperv>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <launchSecurity supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]: </domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.462 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 19:31:26 compute-0 nova_compute[354838]: <domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <domain>kvm</domain>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <arch>i686</arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <vcpu max='4096'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <iothreads supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <os supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='firmware'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <loader supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>rom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pflash</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='readonly'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>yes</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='secure'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </loader>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </os>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='maximumMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <vendor>AMD</vendor>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='succor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='custom' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-128'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-256'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-512'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <memoryBacking supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='sourceType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>file</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>anonymous</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>memfd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </memoryBacking>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <disk supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='diskDevice'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>disk</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cdrom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>floppy</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>lun</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>fdc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>sata</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </disk>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <graphics supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vnc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egl-headless</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>dbus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </graphics>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <video supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='modelType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vga</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cirrus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>none</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>bochs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ramfb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </video>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hostdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='mode'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>subsystem</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='startupPolicy'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>mandatory</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>requisite</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>optional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='subsysType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pci</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='capsType'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='pciBackend'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hostdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <rng supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>random</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </rng>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <filesystem supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='driverType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>path</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>handle</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtiofs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </filesystem>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <tpm supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-tis</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-crb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emulator</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>external</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendVersion'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>2.0</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </tpm>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <redirdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </redirdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <channel supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pty</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>unix</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </channel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <crypto supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>qemu</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </crypto>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <interface supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>passt</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </interface>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <panic supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>isa</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>hyperv</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </panic>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <gic supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <genid supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backup supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <async-teardown supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <ps2 supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sev supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sgx supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hyperv supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='features'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>relaxed</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vapic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>spinlocks</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vpindex</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>runtime</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>synic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>stimer</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reset</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vendor_id</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>frequencies</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reenlightenment</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tlbflush</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ipi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>avic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emsr_bitmap</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>xmm_input</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hyperv>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <launchSecurity supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]: </domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.526 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.531 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 19:31:26 compute-0 nova_compute[354838]: <domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <domain>kvm</domain>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <arch>x86_64</arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <vcpu max='240'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <iothreads supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <os supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='firmware'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <loader supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>rom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pflash</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='readonly'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>yes</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='secure'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </loader>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </os>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='maximumMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 sudo[355514]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <vendor>AMD</vendor>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='succor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='custom' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-128'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-256'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-512'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <memoryBacking supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='sourceType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>file</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>anonymous</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>memfd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </memoryBacking>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <disk supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='diskDevice'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>disk</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cdrom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>floppy</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>lun</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ide</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>fdc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>sata</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </disk>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <graphics supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vnc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egl-headless</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>dbus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </graphics>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <video supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='modelType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vga</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cirrus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>none</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>bochs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ramfb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </video>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hostdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='mode'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>subsystem</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='startupPolicy'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>mandatory</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>requisite</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>optional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='subsysType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pci</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='capsType'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='pciBackend'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hostdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <rng supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>random</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </rng>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <filesystem supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='driverType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>path</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>handle</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtiofs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </filesystem>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <tpm supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-tis</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-crb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emulator</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>external</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendVersion'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>2.0</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </tpm>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <redirdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </redirdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <channel supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pty</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>unix</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </channel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <crypto supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>qemu</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </crypto>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <interface supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>passt</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </interface>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <panic supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>isa</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>hyperv</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </panic>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <gic supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <genid supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backup supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <async-teardown supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <ps2 supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sev supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sgx supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hyperv supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='features'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>relaxed</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vapic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>spinlocks</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vpindex</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>runtime</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>synic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>stimer</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reset</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vendor_id</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>frequencies</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reenlightenment</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tlbflush</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ipi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>avic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emsr_bitmap</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>xmm_input</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hyperv>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <launchSecurity supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]: </domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
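[editor's note] The dump above is the raw XML that libvirt's getDomainCapabilities API returns and that nova's _get_domain_capabilities merely logs. A minimal, illustrative sketch of how such a dump can be fetched and filtered with libvirt-python follows; this is not nova's actual code. The qemu:///system URI is an assumption for a local KVM host, while the emulator path, arch, and machine type are taken from the log itself.

    # Illustrative sketch (not nova's code): fetch a domainCapabilities
    # dump like the one logged above and filter its CPU model list.
    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")  # assumed local connection URI
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary, as in <path> above
        "x86_64",                 # arch
        "q35",                    # machine type, as in the next dump
        "kvm",                    # virt type
        0,                        # flags
    )
    caps = ET.fromstring(xml)

    # CPU models the host can run unmodified in mode='custom'.
    for model in caps.iterfind("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "yes":
            print("usable:", model.text)

    # Map each unusable model to the host-missing features blocking it,
    # mirroring the <blockers> elements in the dump.
    blockers = {
        b.get("model"): [f.get("name") for f in b.iterfind("feature")]
        for b in caps.iterfind("./cpu/mode[@name='custom']/blockers")
    }
    print("Skylake-Server blocked by:", blockers.get("Skylake-Server"))
    conn.close()

Run against this host, the call with machine type "q35" should return the same XML that nova logs next (the arch=x86_64, machine_type=q35 dump below); on this EPYC-Rome guest the Intel models are unusable because features such as erms, invpcid, and pcid are absent from the host CPU.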
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.672 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 19:31:26 compute-0 nova_compute[354838]: <domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <domain>kvm</domain>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <arch>x86_64</arch>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <vcpu max='4096'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <iothreads supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <os supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='firmware'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>efi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <loader supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>rom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pflash</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='readonly'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>yes</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='secure'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>yes</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>no</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </loader>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </os>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <cpu>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='maximumMigratable'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>on</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>off</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <vendor>AMD</vendor>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='succor'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <mode name='custom' supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Denverton-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
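The block above is the CPU-model portion of libvirt's domain-capabilities XML that nova_compute logs: each <model> carries usable='yes' or 'no', and unusable models are followed by a <blockers> list naming the host features they would need. A minimal sketch (illustrative only, not nova's own code) that fetches the same document with libvirt-python and prints the usable, non-deprecated models; the qemu:///system URI and the x86_64/kvm arguments are assumptions:

    import libvirt
    import xml.etree.ElementTree as ET

    # Fetch the same domain-capabilities XML shown in this log excerpt.
    conn = libvirt.open('qemu:///system')
    caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm')
    conn.close()

    root = ET.fromstring(caps)
    # Named models live under <cpu><mode name='custom'>, exactly as logged above.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes' and model.get('deprecated') != 'yes':
            print(model.text)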
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='auto-ibrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amd-psfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='stibp-always-on'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='EPYC-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-128'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-256'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx10-512'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='prefetchiti'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
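Across the dump so far, the same few host-side gaps (erms, pcid, invpcid, pku, xsaves, and the AVX-512 family) recur in nearly every blocker list. A rough companion sketch, assuming caps holds the same capabilities XML string as above, that tallies how many models each blocker feature rules out:

    from collections import Counter
    import xml.etree.ElementTree as ET

    def blocker_histogram(caps: str) -> Counter:
        # Count how often each feature name appears across <blockers> lists.
        root = ET.fromstring(caps)
        tally = Counter()
        for blockers in root.findall("./cpu/mode[@name='custom']/blockers"):
            for feature in blockers.findall('feature'):
                tally[feature.get('name')] += 1
        return tally

    # blocker_histogram(caps).most_common(5) would surface gaps such as
    # 'erms' or 'xsaves' that appear in most of the blocker lists logged here.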
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Haswell-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512er'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512pf'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fma4'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tbm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xop'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='amx-tile'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-bf16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-fp16'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bitalg'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrc'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fzrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='la57'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='taa-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xfd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ifma'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cmpccxadd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fbsdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='fsrs'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ibrs-all'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mcdt-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pbrsb-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='psdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='serialize'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vaes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='hle'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='rtm'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512bw'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512cd'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512dq'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512f'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='avx512vl'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='invpcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pcid'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='pku'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='mpx'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='core-capability'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='split-lock-detect'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='cldemote'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='erms'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='gfni'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdir64b'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='movdiri'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='xsaves'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='athlon-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='core2duo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='coreduo-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='n270-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='ss'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <blockers model='phenom-v1'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnow'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <feature name='3dnowext'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </blockers>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </mode>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </cpu>
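
The XML dump above is libvirt's domainCapabilities report for this compute host: inside the CPU <mode> block (libvirt's custom mode), every <model usable='no'> entry is paired with a <blockers> element naming the guest features the host CPU cannot provide. A minimal sketch for pulling the usable models and the per-model blockers out of a captured copy of this document; 'domcaps.xml' is an illustrative path, and the same XML can be obtained with virsh domcapabilities:

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domainCapabilities XML logged above.
    root = ET.parse('domcaps.xml').getroot()

    usable = set()
    blocked = {}
    for mode in root.iterfind('.//cpu/mode'):
        for model in mode.iterfind('model'):
            if model.get('usable') == 'yes':
                usable.add(model.text)
        for blk in mode.iterfind('blockers'):
            blocked[blk.get('model')] = [f.get('name') for f in blk.iterfind('feature')]

    print(sorted(usable))            # Nehalem, SandyBridge, Westmere, ...
    print(blocked['Skylake-Client']) # ['erms', 'hle', 'invpcid', 'pcid', 'rtm']

Deduplicating into a set matters because each model appears twice, once under its alias (e.g. Nehalem) and once under its versioned name (Nehalem-v1).
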
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <memoryBacking supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <enum name='sourceType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>file</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>anonymous</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <value>memfd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </memoryBacking>
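
Past the CPU block, the document turns to per-feature sections. <memoryBacking> advertises which memory source types this build accepts (file, anonymous, memfd). A small guard, reusing the root parsed in the sketch above, for code that wants to emit a memfd-backed guest:

    def memfd_supported(root):
        # True when 'memfd' is among the advertised sourceType values.
        vals = root.iterfind(".//memoryBacking/enum[@name='sourceType']/value")
        return any(v.text == 'memfd' for v in vals)
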
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <devices>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <disk supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='diskDevice'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>disk</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cdrom</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>floppy</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>lun</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>fdc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>sata</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </disk>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <graphics supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vnc</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egl-headless</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>dbus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </graphics>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <video supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='modelType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vga</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>cirrus</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>none</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>bochs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ramfb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </video>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hostdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='mode'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>subsystem</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='startupPolicy'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>mandatory</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>requisite</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>optional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='subsysType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pci</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>scsi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='capsType'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='pciBackend'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hostdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <rng supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtio-non-transitional</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>random</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>egd</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </rng>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <filesystem supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='driverType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>path</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>handle</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>virtiofs</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </filesystem>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <tpm supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-tis</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tpm-crb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emulator</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>external</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendVersion'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>2.0</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </tpm>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <redirdev supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='bus'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>usb</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </redirdev>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <channel supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>pty</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>unix</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </channel>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <crypto supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='type'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>qemu</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendModel'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>builtin</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </crypto>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <interface supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='backendType'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>default</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>passt</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </interface>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <panic supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='model'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>isa</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>hyperv</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </panic>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </devices>
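
Each child of <devices> lists the enum values this libvirt/QEMU combination will accept, e.g. virtiofs under the filesystem driverType enum and tpm-crb under the tpm model enum. A generic lookup under the same assumptions as the earlier sketches:

    def device_enum_values(root, device, enum):
        # Values advertised for one device enum, e.g.
        # device_enum_values(root, 'filesystem', 'driverType')
        # -> ['path', 'handle', 'virtiofs'] on this host.
        node = root.find(f".//devices/{device}/enum[@name='{enum}']")
        return [] if node is None else [v.text for v in node.iterfind('value')]
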
Oct 02 19:31:26 compute-0 nova_compute[354838]:   <features>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <gic supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <genid supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <backup supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <async-teardown supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <ps2 supported='yes'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sev supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <sgx supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <hyperv supported='yes'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       <enum name='features'>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>relaxed</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vapic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>spinlocks</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vpindex</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>runtime</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>synic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>stimer</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reset</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>vendor_id</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>frequencies</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>reenlightenment</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>tlbflush</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>ipi</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>avic</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>emsr_bitmap</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:         <value>xmm_input</value>
Oct 02 19:31:26 compute-0 nova_compute[354838]:       </enum>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     </hyperv>
Oct 02 19:31:26 compute-0 nova_compute[354838]:     <launchSecurity supported='no'/>
Oct 02 19:31:26 compute-0 nova_compute[354838]:   </features>
Oct 02 19:31:26 compute-0 nova_compute[354838]: </domainCapabilities>
Oct 02 19:31:26 compute-0 nova_compute[354838]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
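The <domainCapabilities> document nova just logged is cached by the libvirt driver per emulator/arch/machine-type combination; the same XML can be fetched directly from libvirt for comparison. A minimal sketch using the libvirt-python binding (the connection URI, emulator path and machine type below are illustrative assumptions, not values read from this host):

    # Hedged sketch: fetch the same domainCapabilities XML that
    # _get_domain_capabilities logged above, straight from libvirt.
    import libvirt  # libvirt-python binding

    conn = libvirt.open("qemu:///system")   # URI assumed
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",            # emulator path (assumed)
        "x86_64",                           # arch
        "pc-q35-rhel9.4.0",                 # machine type (assumed)
        "kvm",                              # virt type
        0,
    )
    print(caps_xml)  # <domainCapabilities>...</domainCapabilities>
    conn.close()

From the shell, `virsh domcapabilities` prints the same document.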
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.782 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.783 2 DEBUG nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.783 2 INFO nova.virt.libvirt.host [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Secure Boot support detected
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.787 2 INFO nova.virt.libvirt.driver [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.804 2 DEBUG nova.virt.libvirt.driver [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
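_check_vtpm_support decides whether to advertise emulated TPM from two inputs: the swtpm-related options in nova.conf and the <tpm> block of the domain capabilities above, which must list the emulator backend. A hedged sketch of the XML half of that check (the function name and shape here are illustrative; nova's real check also validates configuration and the swtpm user/group):

    # Hedged sketch: does the domainCapabilities XML advertise an
    # emulated TPM backend? Mirrors the <tpm> block logged above.
    import xml.etree.ElementTree as ET

    def tpm_emulator_supported(domcaps_xml: str) -> bool:
        root = ET.fromstring(domcaps_xml)
        tpm = root.find("./devices/tpm")
        if tpm is None or tpm.get("supported") != "yes":
            return False
        backends = tpm.find("./enum[@name='backendModel']")
        if backends is None:
            return False
        return "emulator" in {v.text for v in backends.findall("value")}

    DOMCAPS = """<domainCapabilities><devices><tpm supported='yes'>
      <enum name='backendModel'><value>emulator</value><value>external</value></enum>
    </tpm></devices></domainCapabilities>"""
    print(tpm_emulator_supported(DOMCAPS))  # True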
Oct 02 19:31:26 compute-0 nova_compute[354838]: 2025-10-02 19:31:26.965 2 INFO nova.virt.node [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Determined node identity 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from /var/lib/nova/compute_id
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.003 2 WARNING nova.compute.manager [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Compute nodes ['9d5f6e5d-658d-4616-b5da-8b0a4093afb0'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.063 2 INFO nova.compute.manager [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.131 2 WARNING nova.compute.manager [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.131 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.131 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.132 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.132 2 DEBUG nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.132 2 DEBUG oslo_concurrency.processutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:27 compute-0 ceph-mon[191910]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:31:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1457035723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:27 compute-0 nova_compute[354838]: 2025-10-02 19:31:27.589 2 DEBUG oslo_concurrency.processutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
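The resource audit sizes RBD-backed storage by shelling out to ceph exactly as logged: `ceph df --format=json` under the openstack cephx identity, then parsing the JSON totals. A hedged reproduction of that call (the `stats` keys follow the usual `ceph df` JSON layout; treat them as an assumption for other Ceph releases):

    # Hedged sketch: rerun the `ceph df` command from the log and read
    # the cluster-wide totals the resource tracker is interested in.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print("total bytes:", stats["total_bytes"])
    print("avail bytes:", stats["total_avail_bytes"])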
Oct 02 19:31:27 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 19:31:27 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 19:31:27 compute-0 sudo[355735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufybqlbxslcfdkjyvsrzbmmyexwjsxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433487.5636487-1849-261065015335836/AnsiballZ_systemd.py'
Oct 02 19:31:27 compute-0 sudo[355735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.140 2 WARNING nova.virt.libvirt.driver [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.142 2 DEBUG nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.143 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.144 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.169 2 WARNING nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] No compute node record for compute-0.ctlplane.example.com:9d5f6e5d-658d-4616-b5da-8b0a4093afb0: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 could not be found.
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.199 2 INFO nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0
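The ComputeHostNotFound warnings above are the expected first-start path: nova reads its stable node identity from /var/lib/nova/compute_id, finds no matching compute node row in the database, and creates one. The identity file is a bare UUID; a trivial sketch of the read nova performed:

    # Hedged sketch: read the node identity the way nova.virt.node does
    # at startup (a single UUID in the state directory).
    import uuid
    from pathlib import Path

    raw = Path("/var/lib/nova/compute_id").read_text().strip()
    print(uuid.UUID(raw))  # 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 in this log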
Oct 02 19:31:28 compute-0 python3.9[355737]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:31:28 compute-0 systemd[1]: Stopping nova_compute container...
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.356 2 DEBUG nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.356 2 DEBUG nova.compute.resource_tracker [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:31:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1457035723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.440 2 DEBUG oslo_concurrency.lockutils [None req-bcaea3ec-f84c-44a3-89cc-78838f878f2e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.440 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.441 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:31:28 compute-0 nova_compute[354838]: 2025-10-02 19:31:28.441 2 DEBUG oslo_concurrency.lockutils [None req-2d6d4542-f600-4772-835e-acdacf88f314 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:31:28 compute-0 virtqemud[153606]: End of file while reading data: Input/output error
Oct 02 19:31:28 compute-0 systemd[1]: libpod-3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65.scope: Deactivated successfully.
Oct 02 19:31:28 compute-0 systemd[1]: libpod-3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65.scope: Consumed 4.074s CPU time.
Oct 02 19:31:28 compute-0 podman[355741]: 2025-10-02 19:31:28.865162782 +0000 UTC m=+0.511797577 container died 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65-userdata-shm.mount: Deactivated successfully.
Oct 02 19:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c-merged.mount: Deactivated successfully.
Oct 02 19:31:29 compute-0 podman[157186]: time="2025-10-02T19:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:31:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:29 compute-0 ceph-mon[191910]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:29 compute-0 podman[355741]: 2025-10-02 19:31:29.860821129 +0000 UTC m=+1.507455884 container cleanup 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:31:29 compute-0 podman[355741]: nova_compute
Oct 02 19:31:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45032 "" "Go-http-client/1.1"
Oct 02 19:31:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
Oct 02 19:31:29 compute-0 podman[355768]: nova_compute
Oct 02 19:31:29 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 02 19:31:29 compute-0 systemd[1]: Stopped nova_compute container.
Oct 02 19:31:29 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.119s CPU time, 17.2M memory peak, read 0B from disk, written 115.5K to disk.
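The stop/start pair around this point is the EDPM pattern: the nova_compute podman container is wrapped in the edpm_nova_compute.service systemd unit, and the Ansible systemd task invoked at 19:31:28 restarts the unit rather than the container directly. A hedged sketch of performing the same restart by hand (unit and container names taken from the log):

    # Hedged sketch: the restart the ansible systemd module drove,
    # plus a quick look at the container's output afterwards.
    import subprocess

    subprocess.run(["systemctl", "restart", "edpm_nova_compute.service"], check=True)
    subprocess.run(["podman", "logs", "--tail", "20", "nova_compute"], check=True)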
Oct 02 19:31:29 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 19:31:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade2f6ab70e91fcd23dcdc4a1db2e7288db4263ac9be3e3b4e10be7450bb505c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:30 compute-0 podman[355779]: 2025-10-02 19:31:30.271209165 +0000 UTC m=+0.267304944 container init 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:31:30 compute-0 podman[355779]: 2025-10-02 19:31:30.281921631 +0000 UTC m=+0.278017370 container start 3095acc13401ddbc319e94618a67ee24087993b62ef5596946e126cbcc610f65 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm)
Oct 02 19:31:30 compute-0 nova_compute[355794]: + sudo -E kolla_set_configs
Oct 02 19:31:30 compute-0 podman[355779]: nova_compute
Oct 02 19:31:30 compute-0 systemd[1]: Started nova_compute container.
Oct 02 19:31:30 compute-0 sudo[355735]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Validating config file
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying service configuration files
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /etc/ceph
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Creating directory /etc/ceph
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Writing out command to execute
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:30 compute-0 nova_compute[355794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:31:30 compute-0 nova_compute[355794]: ++ cat /run_command
Oct 02 19:31:30 compute-0 nova_compute[355794]: + CMD=nova-compute
Oct 02 19:31:30 compute-0 nova_compute[355794]: + ARGS=
Oct 02 19:31:30 compute-0 nova_compute[355794]: + sudo kolla_copy_cacerts
Oct 02 19:31:30 compute-0 nova_compute[355794]: + [[ ! -n '' ]]
Oct 02 19:31:30 compute-0 nova_compute[355794]: + . kolla_extend_start
Oct 02 19:31:30 compute-0 nova_compute[355794]: Running command: 'nova-compute'
Oct 02 19:31:30 compute-0 nova_compute[355794]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 19:31:30 compute-0 nova_compute[355794]: + umask 0022
Oct 02 19:31:30 compute-0 nova_compute[355794]: + exec nova-compute
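The kolla_start sequence traced above has two phases: kolla_set_configs walks /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy (delete the destination, copy the source in, reset permissions), then the command stored in /run_command is exec'd. A hedged, much-simplified sketch of that copy loop (field names follow kolla's config.json; directories such as /etc/ceph are handled separately by the real tool):

    # Hedged sketch of the COPY_ALWAYS loop behind the
    # "Deleting / Copying / Setting permission" lines above.
    # Simplified: plain files only, no owner handling, no recursion.
    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for entry in config.get("config_files", []):
        src, dest = entry["source"], entry["dest"]
        if os.path.isfile(dest):
            print(f"Deleting {dest}")
            os.remove(dest)
        print(f"Copying {src} to {dest}")
        shutil.copy(src, dest)
        print(f"Setting permission for {dest}")
        os.chmod(dest, int(entry.get("perm", "0600"), 8))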
Oct 02 19:31:31 compute-0 sudo[355955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajrpwzolcfnftzkjwtrooribjfcoyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433490.6829805-1858-59539791464312/AnsiballZ_podman_container.py'
Oct 02 19:31:31 compute-0 sudo[355955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:31 compute-0 openstack_network_exporter[159337]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:31 compute-0 openstack_network_exporter[159337]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:31 compute-0 openstack_network_exporter[159337]: ERROR   19:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:31:31 compute-0 python3.9[355958]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:31:31 compute-0 openstack_network_exporter[159337]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:31:31 compute-0 openstack_network_exporter[159337]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:31:31 compute-0 systemd[1]: Started libpod-conmon-72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab.scope.
Oct 02 19:31:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852a7ce71315401516ab6bfdd4653835234867b1906d423f9421523ef6a9fd6d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852a7ce71315401516ab6bfdd4653835234867b1906d423f9421523ef6a9fd6d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852a7ce71315401516ab6bfdd4653835234867b1906d423f9421523ef6a9fd6d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 02 19:31:32 compute-0 podman[355982]: 2025-10-02 19:31:32.056943473 +0000 UTC m=+0.448416473 container init 72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:31:32 compute-0 podman[355982]: 2025-10-02 19:31:32.069208621 +0000 UTC m=+0.460681601 container start 72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:31:32 compute-0 ceph-mon[191910]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Applying nova statedir ownership
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 02 19:31:32 compute-0 nova_compute_init[356003]: INFO:nova_statedir:Nova statedir ownership complete
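The init container's job, as the lines above show, is to walk /var/lib/nova, move ownership from the host-side uid (1000) to the kolla nova uid (42436), and re-apply the container_file_t SELinux context, skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP so compute_id survives untouched. A hedged, much-simplified sketch of the ownership pass (the real logic lives in nova_statedir_ownership.py and also handles SELinux contexts):

    # Hedged sketch of the statedir ownership walk logged above.
    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {"/var/lib/nova/compute_id"}  # NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk("/var/lib/nova"):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                print(f"Changing ownership of {path} from "
                      f"{st.st_uid}:{st.st_gid} to {TARGET_UID}:{TARGET_GID}")
                os.lchown(path, TARGET_UID, TARGET_GID)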
Oct 02 19:31:32 compute-0 systemd[1]: libpod-72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab.scope: Deactivated successfully.
Oct 02 19:31:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:32 compute-0 python3.9[355958]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 02 19:31:32 compute-0 podman[356004]: 2025-10-02 19:31:32.20802361 +0000 UTC m=+0.043335669 container died 72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:31:32.276 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:31:32.276 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:31:32.276 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.402 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.402 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.402 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.402 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 19:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab-userdata-shm.mount: Deactivated successfully.
Oct 02 19:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-852a7ce71315401516ab6bfdd4653835234867b1906d423f9421523ef6a9fd6d-merged.mount: Deactivated successfully.
Oct 02 19:31:32 compute-0 podman[356004]: 2025-10-02 19:31:32.504152223 +0000 UTC m=+0.339464193 container cleanup 72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3)
Oct 02 19:31:32 compute-0 systemd[1]: libpod-conmon-72103d433f9aa0b643abc86067e04da319dfabefd9bcb079ee3189d8c61cfbab.scope: Deactivated successfully.
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.534 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:32 compute-0 nova_compute[355794]: 2025-10-02 19:31:32.564 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
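The grep against /sbin/iscsiadm is a capability probe: if the binary contains the node.session.scan string, the installed iscsiadm supports manual scans and automatic LUN scanning can be avoided (this probe comes from the iSCSI connector code path nova uses; attribution hedged). Exit status 0, as logged, means supported:

    # Hedged sketch: the same probe; exit code 0 <=> manual scans supported.
    import subprocess

    rc = subprocess.call(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
    )
    print("manual iSCSI scan supported:", rc == 0)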
Oct 02 19:31:32 compute-0 sudo[355955]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.066 2 INFO nova.virt.driver [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 19:31:33 compute-0 ceph-mon[191910]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.194 2 INFO nova.compute.provider_config [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.215 2 DEBUG oslo_concurrency.lockutils [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.215 2 DEBUG oslo_concurrency.lockutils [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.215 2 DEBUG oslo_concurrency.lockutils [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.216 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.216 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.216 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.216 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.216 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.217 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.217 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.217 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.217 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.218 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.218 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.218 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.218 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.218 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.219 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.219 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.219 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.219 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.219 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.220 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.220 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.220 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.220 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.220 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.221 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.221 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.221 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.221 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.222 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.222 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.222 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.222 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.223 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.223 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.223 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.223 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.223 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.224 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.224 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.225 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.225 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.225 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.225 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.226 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.226 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.226 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.226 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.227 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.227 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.227 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.227 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.227 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.228 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.228 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.228 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.228 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.229 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.229 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.229 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.229 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.229 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.230 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.230 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.230 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.230 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.231 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.231 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.231 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.231 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.231 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.232 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.232 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.232 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.232 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.232 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.233 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.233 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.233 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.233 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.234 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.234 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.234 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.234 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.234 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.235 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.235 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.235 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.235 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.236 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.236 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.236 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.236 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.237 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.237 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.237 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.237 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.237 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.238 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.238 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.238 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.238 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.239 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.239 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.239 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.239 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.240 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.240 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.240 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.240 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.240 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.241 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.241 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.241 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.241 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.241 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.242 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.243 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.244 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.245 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.246 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.247 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.248 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.249 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.250 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.251 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.252 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.253 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.254 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.255 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.256 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.257 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.258 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.259 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.260 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.261 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.261 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.261 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.261 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.261 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.262 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.263 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.264 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.265 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.266 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.267 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.268 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.269 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.270 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.271 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.272 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.273 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.274 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.275 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.276 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.277 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.278 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.278 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.278 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.278 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.278 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.279 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.280 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.281 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.282 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.283 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.284 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.285 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.286 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.287 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.288 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.289 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.290 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.291 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.292 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.293 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.294 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.295 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.296 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.297 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.298 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.298 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.298 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.298 2 WARNING oslo_config.cfg [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 19:31:33 compute-0 nova_compute[355794]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 19:31:33 compute-0 nova_compute[355794]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 19:31:33 compute-0 nova_compute[355794]: and ``live_migration_inbound_addr`` respectively.
Oct 02 19:31:33 compute-0 nova_compute[355794]: ).  Its value may be silently ignored in the future.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.298 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
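The deprecation warning above names the two supported replacements, and both options appear in this same dump (libvirt.live_migration_scheme and libvirt.live_migration_inbound_addr). A minimal nova.conf sketch of the equivalent, non-deprecated configuration, assuming the qemu+tls scheme visible in the live_migration_uri value above; the inbound address is an illustrative placeholder, not a value taken from this log:

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # per-host migration target address (placeholder; set to this host's
    # address on the migration network)
    live_migration_inbound_addr = compute-0.internalapi.example.com

With these two options set, the libvirt driver assembles the qemu+tls://<addr>/system URI itself, so live_migration_uri can be dropped from the configuration before it is removed.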
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.299 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.300 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.301 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.301 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.301 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.301 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.301 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rbd_secret_uuid        = 6019f664-a1c2-5955-8391-692cb79a59f9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.302 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.303 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.304 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.305 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.306 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.307 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.308 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.309 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.309 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.309 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.309 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.310 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.311 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.312 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.313 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.314 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.315 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.316 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.317 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.318 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.318 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.318 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.318 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.318 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.319 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.320 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.321 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.322 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.323 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.324 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.325 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.326 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.327 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.328 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.329 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.329 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.329 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.329 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.329 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.330 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.331 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.331 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.331 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.331 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.331 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.332 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.333 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.334 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 sshd-session[315451]: Connection closed by 192.168.122.30 port 45648
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.335 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.336 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.337 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.337 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.337 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.337 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.337 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.338 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 sshd-session[315448]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.338 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.338 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.338 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.338 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.339 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.339 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.339 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.340 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.342 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.343 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.344 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.344 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.344 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.345 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.345 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.345 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.345 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.346 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.346 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 02 19:31:33 compute-0 systemd[1]: session-56.scope: Consumed 4min 37.846s CPU time.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.346 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.346 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.347 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.347 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.347 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.348 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.348 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.348 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.348 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.348 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.349 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.349 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.349 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 systemd-logind[793]: Session 56 logged out. Waiting for processes to exit.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.350 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.350 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.350 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.351 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.351 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.351 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.352 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.352 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.352 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.352 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.353 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.353 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.353 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.354 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.354 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.354 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.354 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.355 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.355 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.355 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.355 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.356 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 systemd-logind[793]: Removed session 56.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.356 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.356 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.357 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.357 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.357 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.357 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.358 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.358 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.358 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.358 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.359 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.359 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.359 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.359 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.360 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.360 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.360 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.360 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.361 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.361 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.361 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.361 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.362 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.362 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.362 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.362 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.363 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.363 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.363 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.363 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.363 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.364 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.364 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.364 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.364 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.365 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.365 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.365 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.365 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.366 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.366 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.366 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.366 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.367 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.367 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.367 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.368 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.368 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.368 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.368 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.369 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.369 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.369 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.369 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.370 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.370 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.370 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.370 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.371 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.371 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.371 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.371 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.371 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.372 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.372 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.372 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.372 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.373 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.373 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.373 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.373 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.373 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.374 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.374 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.374 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.374 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.375 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.375 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.375 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.375 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.375 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.376 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.376 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.376 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.376 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.377 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.377 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.377 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.377 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.377 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.378 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.378 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.378 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.378 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.379 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.379 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.379 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.379 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.380 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.380 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.380 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.380 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.381 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.381 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.381 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.381 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.381 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.382 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.382 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.382 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.382 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.383 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.383 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.383 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.383 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.384 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.384 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.384 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.384 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.385 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.385 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.385 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.385 2 DEBUG oslo_service.service [None req-e05554da-3bda-4bc7-bde1-47999909655b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.387 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.446 2 INFO nova.virt.node [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Determined node identity 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from /var/lib/nova/compute_id
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.447 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.448 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.448 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.449 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.470 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff2792bfa90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.475 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff2792bfa90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.476 2 INFO nova.virt.libvirt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Connection event '1' reason 'None'
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.485 2 INFO nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]: 
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <host>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <uuid>b440ca7f-ed29-4df1-9220-db3fd23c361a</uuid>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <arch>x86_64</arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model>EPYC-Rome-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <vendor>AMD</vendor>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <microcode version='16777317'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <signature family='23' model='49' stepping='0'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='x2apic'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='tsc-deadline'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='osxsave'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='hypervisor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='tsc_adjust'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='spec-ctrl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='stibp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='arch-capabilities'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='cmp_legacy'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='topoext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='virt-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='lbrv'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='tsc-scale'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='vmcb-clean'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='pause-filter'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='pfthreshold'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='svme-addr-chk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='rdctl-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='mds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature name='pschange-mc-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <pages unit='KiB' size='4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <pages unit='KiB' size='2048'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <pages unit='KiB' size='1048576'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <power_management>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <suspend_mem/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </power_management>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <iommu support='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <migration_features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <live/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <uri_transports>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <uri_transport>tcp</uri_transport>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <uri_transport>rdma</uri_transport>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </uri_transports>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </migration_features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <topology>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <cells num='1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <cell id='0'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <memory unit='KiB'>7864100</memory>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <pages unit='KiB' size='4'>1966025</pages>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <distances>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <sibling id='0' value='10'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           </distances>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           <cpus num='8'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:           </cpus>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         </cell>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </cells>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </topology>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <cache>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </cache>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <secmodel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model>selinux</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <doi>0</doi>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </secmodel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <secmodel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model>dac</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <doi>0</doi>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </secmodel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </host>
Oct 02 19:31:33 compute-0 nova_compute[355794]: 
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <guest>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <os_type>hvm</os_type>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <arch name='i686'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <wordsize>32</wordsize>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <domain type='qemu'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <domain type='kvm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <pae/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <nonpae/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <acpi default='on' toggle='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <apic default='on' toggle='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <cpuselection/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <deviceboot/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <externalSnapshot/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </guest>
Oct 02 19:31:33 compute-0 nova_compute[355794]: 
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <guest>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <os_type>hvm</os_type>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <arch name='x86_64'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <wordsize>64</wordsize>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <domain type='qemu'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <domain type='kvm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <acpi default='on' toggle='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <apic default='on' toggle='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <cpuselection/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <deviceboot/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <externalSnapshot/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </guest>
Oct 02 19:31:33 compute-0 nova_compute[355794]: 
Oct 02 19:31:33 compute-0 nova_compute[355794]: </capabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]: 
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.495 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.503 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 19:31:33 compute-0 nova_compute[355794]: <domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <domain>kvm</domain>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <arch>i686</arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <vcpu max='240'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <iothreads supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <os supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='firmware'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <loader supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>rom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pflash</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='readonly'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>yes</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='secure'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </loader>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </os>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='maximumMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <vendor>AMD</vendor>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='succor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='custom' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-128'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-256'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-512'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <memoryBacking supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='sourceType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>file</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>anonymous</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>memfd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </memoryBacking>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <disk supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='diskDevice'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>disk</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cdrom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>floppy</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>lun</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ide</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>fdc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>sata</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <graphics supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vnc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egl-headless</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>dbus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </graphics>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <video supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='modelType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vga</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cirrus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>none</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>bochs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ramfb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </video>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hostdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='mode'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>subsystem</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='startupPolicy'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>mandatory</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>requisite</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>optional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='subsysType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pci</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='capsType'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='pciBackend'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hostdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <rng supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>random</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <filesystem supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='driverType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>path</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>handle</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtiofs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </filesystem>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <tpm supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-tis</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-crb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emulator</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>external</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendVersion'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>2.0</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </tpm>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <redirdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </redirdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <channel supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pty</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>unix</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </channel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <crypto supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>qemu</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </crypto>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <interface supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>passt</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <panic supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>isa</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>hyperv</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </panic>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <gic supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <genid supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backup supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <async-teardown supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <ps2 supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sev supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sgx supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hyperv supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='features'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>relaxed</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vapic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>spinlocks</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vpindex</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>runtime</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>synic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>stimer</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reset</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vendor_id</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>frequencies</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reenlightenment</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tlbflush</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ipi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>avic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emsr_bitmap</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>xmm_input</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hyperv>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <launchSecurity supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]: </domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.508 2 DEBUG nova.virt.libvirt.volume.mount [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.513 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 19:31:33 compute-0 nova_compute[355794]: <domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <domain>kvm</domain>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <arch>i686</arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <vcpu max='4096'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <iothreads supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <os supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='firmware'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <loader supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>rom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pflash</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='readonly'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>yes</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='secure'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </loader>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </os>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='maximumMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <vendor>AMD</vendor>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='succor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='custom' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake'>
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-128'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-256'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-512'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <memoryBacking supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='sourceType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>file</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>anonymous</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>memfd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </memoryBacking>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <disk supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='diskDevice'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>disk</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cdrom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>floppy</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>lun</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>fdc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>sata</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <graphics supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vnc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egl-headless</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>dbus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </graphics>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <video supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='modelType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vga</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cirrus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>none</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>bochs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ramfb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </video>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hostdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='mode'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>subsystem</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='startupPolicy'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>mandatory</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>requisite</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>optional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='subsysType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pci</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='capsType'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='pciBackend'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hostdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <rng supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>random</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <filesystem supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='driverType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>path</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>handle</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtiofs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </filesystem>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <tpm supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-tis</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-crb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emulator</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>external</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendVersion'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>2.0</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </tpm>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <redirdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </redirdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <channel supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pty</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>unix</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </channel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <crypto supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>qemu</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </crypto>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <interface supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>passt</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <panic supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>isa</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>hyperv</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </panic>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <gic supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <genid supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backup supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <async-teardown supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <ps2 supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sev supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sgx supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hyperv supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='features'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>relaxed</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vapic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>spinlocks</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vpindex</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>runtime</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>synic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>stimer</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reset</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vendor_id</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>frequencies</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reenlightenment</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tlbflush</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ipi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>avic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emsr_bitmap</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>xmm_input</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hyperv>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <launchSecurity supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]: </domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.587 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.593 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 19:31:33 compute-0 nova_compute[355794]: <domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <domain>kvm</domain>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <arch>x86_64</arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <vcpu max='240'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <iothreads supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <os supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='firmware'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <loader supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>rom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pflash</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='readonly'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>yes</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='secure'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </loader>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </os>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='maximumMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <vendor>AMD</vendor>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='succor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='custom' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-128'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-256'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-512'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <memoryBacking supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='sourceType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>file</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>anonymous</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>memfd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </memoryBacking>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <disk supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='diskDevice'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>disk</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cdrom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>floppy</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>lun</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ide</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>fdc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>sata</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <graphics supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vnc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egl-headless</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>dbus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </graphics>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <video supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='modelType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vga</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cirrus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>none</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>bochs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ramfb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </video>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hostdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='mode'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>subsystem</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='startupPolicy'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>mandatory</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>requisite</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>optional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='subsysType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pci</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='capsType'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='pciBackend'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hostdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <rng supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>random</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <filesystem supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='driverType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>path</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>handle</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtiofs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </filesystem>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <tpm supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-tis</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-crb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emulator</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>external</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendVersion'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>2.0</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </tpm>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <redirdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </redirdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <channel supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pty</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>unix</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </channel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <crypto supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>qemu</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </crypto>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <interface supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>passt</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <panic supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>isa</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>hyperv</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </panic>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <gic supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <genid supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backup supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <async-teardown supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <ps2 supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sev supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sgx supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hyperv supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='features'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>relaxed</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vapic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>spinlocks</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vpindex</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>runtime</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>synic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>stimer</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reset</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vendor_id</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>frequencies</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reenlightenment</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tlbflush</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ipi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>avic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emsr_bitmap</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>xmm_input</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hyperv>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <launchSecurity supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]: </domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
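[Editor's note — not part of the captured log. The dump above is the verbatim libvirt domainCapabilities XML that nova-compute's _get_domain_capabilities logs at DEBUG level. As an illustrative aside, the same document can be fetched and summarized with libvirt-python's getDomainCapabilities (the binding for virConnectGetDomainCapabilities). This is a minimal sketch, assuming libvirt-python is installed and a local qemu:///system socket is reachable; the emulator path, arch, machine type, and virt type simply mirror the values seen in the log.]

    # Sketch: query libvirt for domain capabilities and summarize which
    # custom CPU models are usable on this host, mirroring the XML above.
    # Assumptions: libvirt-python installed; qemu:///system reachable;
    # emulator/arch/machine/virttype taken from the logged dump.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # The custom mode lists each <model> with usable='yes'/'no'; unusable
    # models carry a sibling <blockers> element naming the missing features.
    custom = root.find(".//cpu/mode[@name='custom']")
    blockers = {b.get('model'): [f.get('name') for f in b.findall('feature')]
                for b in custom.findall('blockers')}
    for model in custom.findall('model'):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"{name}: usable")
        else:
            print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")
    conn.close()

[On this host the output would mark the Westmere variants usable while the Skylake/SapphireRapids/SierraForest entries report their Intel-only feature blockers, consistent with the AMD EPYC-Rome host-model reported in the q35 dump that follows.]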
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.719 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 19:31:33 compute-0 nova_compute[355794]: <domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <domain>kvm</domain>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <arch>x86_64</arch>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <vcpu max='4096'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <iothreads supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <os supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='firmware'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>efi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <loader supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>rom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pflash</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='readonly'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>yes</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='secure'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>yes</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>no</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </loader>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </os>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='maximum' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='maximumMigratable'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>on</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>off</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='host-model' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <vendor>AMD</vendor>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='x2apic'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='stibp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='succor'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lbrv'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='mds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='gds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <mode name='custom' supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Broadwell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Cooperlake-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Denverton-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Dhyana-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='auto-ibrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amd-psfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='no-nested-data-bp'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='null-sel-clr-base'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='stibp-always-on'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='EPYC-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-128'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-256'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx10-512'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='prefetchiti'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Haswell-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='IvyBridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='KnightsMill-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4fmaps'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-4vnniw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512er'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512pf'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fma4'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tbm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xop'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='amx-tile'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-bf16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-fp16'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bitalg'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vbmi2'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrc'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fzrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='la57'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='taa-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='tsx-ldtrk'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xfd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='SierraForest-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ifma'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-ne-convert'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx-vnni-int8'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='bus-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cmpccxadd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fbsdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='fsrs'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ibrs-all'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mcdt-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pbrsb-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='psdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='serialize'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vaes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='vpclmulqdq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='hle'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='rtm'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512bw'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512cd'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512dq'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512f'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='avx512vl'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='invpcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pcid'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='pku'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='mpx'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v2'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v3'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='core-capability'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='split-lock-detect'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='Snowridge-v4'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='cldemote'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='erms'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='gfni'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdir64b'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='movdiri'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='xsaves'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='athlon-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='core2duo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='coreduo-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='n270-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='ss'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <blockers model='phenom-v1'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnow'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <feature name='3dnowext'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </blockers>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </mode>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <memoryBacking supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <enum name='sourceType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>file</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>anonymous</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <value>memfd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </memoryBacking>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <disk supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='diskDevice'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>disk</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cdrom</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>floppy</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>lun</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>fdc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>sata</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <graphics supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vnc</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egl-headless</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>dbus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </graphics>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <video supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='modelType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vga</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>cirrus</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>none</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>bochs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ramfb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </video>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hostdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='mode'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>subsystem</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='startupPolicy'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>mandatory</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>requisite</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>optional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='subsysType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pci</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>scsi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='capsType'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='pciBackend'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hostdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <rng supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtio-non-transitional</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>random</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>egd</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <filesystem supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='driverType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>path</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>handle</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>virtiofs</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </filesystem>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <tpm supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-tis</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tpm-crb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emulator</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>external</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendVersion'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>2.0</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </tpm>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <redirdev supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='bus'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>usb</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </redirdev>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <channel supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>pty</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>unix</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </channel>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <crypto supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='type'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>qemu</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendModel'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>builtin</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </crypto>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <interface supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='backendType'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>default</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>passt</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <panic supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='model'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>isa</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>hyperv</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </panic>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   <features>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <gic supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <vmcoreinfo supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <genid supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backingStoreInput supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <backup supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <async-teardown supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <ps2 supported='yes'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sev supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <sgx supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <hyperv supported='yes'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       <enum name='features'>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>relaxed</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vapic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>spinlocks</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vpindex</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>runtime</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>synic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>stimer</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reset</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>vendor_id</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>frequencies</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>reenlightenment</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>tlbflush</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>ipi</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>avic</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>emsr_bitmap</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:         <value>xmm_input</value>
Oct 02 19:31:33 compute-0 nova_compute[355794]:       </enum>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     </hyperv>
Oct 02 19:31:33 compute-0 nova_compute[355794]:     <launchSecurity supported='no'/>
Oct 02 19:31:33 compute-0 nova_compute[355794]:   </features>
Oct 02 19:31:33 compute-0 nova_compute[355794]: </domainCapabilities>
Oct 02 19:31:33 compute-0 nova_compute[355794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
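The XML dump above is the raw result of libvirt's domain-capabilities query, logged verbatim by nova. A minimal sketch of fetching the same document directly with libvirt-python follows; the connection URI and lookup parameters are assumptions, not values taken from this log.

import libvirt  # libvirt-python bindings

conn = libvirt.open("qemu:///system")  # assumption: local system connection
# Passing None lets libvirt pick host defaults for emulator and machine type;
# nova.virt.libvirt.host makes an equivalent call to produce the dump above.
caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
print(caps_xml)  # prints a <domainCapabilities> document like the one logged
conn.close()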
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.820 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.820 2 INFO nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Secure Boot support detected
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.823 2 INFO nova.virt.libvirt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.842 2 DEBUG nova.virt.libvirt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.879 2 INFO nova.virt.node [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Determined node identity 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from /var/lib/nova/compute_id
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.934 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Verified node 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 02 19:31:33 compute-0 nova_compute[355794]: 2025-10-02 19:31:33.976 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 19:31:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.350 2 ERROR nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Could not retrieve compute node resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '9d5f6e5d-658d-4616-b5da-8b0a4093afb0' not found: No resource provider with uuid 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 found  ", "request_id": "req-77bbb9c6-86df-48f6-85b0-e4e0ffa10304"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '9d5f6e5d-658d-4616-b5da-8b0a4093afb0' not found: No resource provider with uuid 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 found  ", "request_id": "req-77bbb9c6-86df-48f6-85b0-e4e0ffa10304"}]}
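The 404 above is expected on a freshly enrolled node: nova asks placement for the provider's allocations before the provider record exists (it is created a moment later, at 19:31:35.901 below). A sketch of the underlying REST call; the placement endpoint and token are placeholders, only the URL path and UUID come from this log.

import requests

PLACEMENT = "http://placement.example.com"  # assumption: service endpoint
TOKEN = "gAAAA..."  # assumption: a valid keystone token
uuid = "9d5f6e5d-658d-4616-b5da-8b0a4093afb0"

resp = requests.get(
    f"{PLACEMENT}/resource_providers/{uuid}/allocations",
    headers={"X-Auth-Token": TOKEN,
             "OpenStack-API-Version": "placement 1.28"},
)
print(resp.status_code)  # 404 here, until the provider record is created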
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.390 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.391 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.391 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.392 2 DEBUG nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.392 2 DEBUG oslo_concurrency.processutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:31:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683241976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:34 compute-0 nova_compute[355794]: 2025-10-02 19:31:34.853 2 DEBUG oslo_concurrency.processutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
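The resource audit shells out to `ceph df --format=json` (the exact command logged above) to size the RBD-backed disk pool. A minimal sketch of reproducing that probe; the pool name is an assumption, since the log does not show which pool nova is configured to use, and stats field names vary slightly across Ceph releases.

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],  # same flags as the logged command
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)
for pool in stats["pools"]:
    if pool["name"] == "vms":  # assumption: hypothetical pool name
        print(pool["stats"]["max_avail"], pool["stats"]["bytes_used"])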
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.218 2 WARNING nova.virt.libvirt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.219 2 DEBUG nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4545MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.220 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.220 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:35 compute-0 ceph-mon[191910]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3683241976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.412 2 ERROR nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '9d5f6e5d-658d-4616-b5da-8b0a4093afb0' not found: No resource provider with uuid 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 found  ", "request_id": "req-ca3f5788-132d-4ed4-ad70-b01e4f3b1d9e"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '9d5f6e5d-658d-4616-b5da-8b0a4093afb0' not found: No resource provider with uuid 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 found  ", "request_id": "req-ca3f5788-132d-4ed4-ad70-b01e4f3b1d9e"}]}
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.413 2 DEBUG nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.413 2 DEBUG nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.901 2 INFO nova.scheduler.client.report [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [req-7f74d88c-a8e6-4132-ac56-86945a56d1c2] Created resource provider record via placement API for resource provider with UUID 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 and name compute-0.ctlplane.example.com.
Oct 02 19:31:35 compute-0 nova_compute[355794]: 2025-10-02 19:31:35.939 2 DEBUG oslo_concurrency.processutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:31:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983297761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.477 2 DEBUG oslo_concurrency.processutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.489 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.489 2 INFO nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] kernel doesn't support AMD SEV
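The SEV probe is just a sysfs read: the module parameter file contained "N", hence the "kernel doesn't support AMD SEV" conclusion. A sketch equivalent in spirit to nova's _kernel_supports_amd_sev:

from pathlib import Path

SEV_PARAM = Path("/sys/module/kvm_amd/parameters/sev")

def kernel_supports_amd_sev() -> bool:
    # The file is absent when kvm_amd is not loaded; it read "N" in the
    # log above. Accepted truthy spellings differ across kernel versions.
    if not SEV_PARAM.exists():
        return False
    return SEV_PARAM.read_text().strip().lower() in ("1", "y")

print(kernel_supports_amd_sev())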
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.492 2 DEBUG nova.compute.provider_tree [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.493 2 DEBUG nova.virt.libvirt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.633 2 DEBUG nova.scheduler.client.report [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Updated inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.633 2 DEBUG nova.compute.provider_tree [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Updating resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.634 2 DEBUG nova.compute.provider_tree [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.793 2 DEBUG nova.compute.provider_tree [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Updating resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
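The two generation bumps above (0 to 1 for inventory, 1 to 2 for traits) reflect placement's optimistic concurrency control: each write must carry the current provider generation. A sketch of the inventory write behind the "Updated inventory ... with generation 0" line; endpoint and token are assumptions, while the payload mirrors the logged inventory exactly.

import requests

PLACEMENT = "http://placement.example.com"  # assumption: service endpoint
TOKEN = "gAAAA..."  # assumption: a valid keystone token
uuid = "9d5f6e5d-658d-4616-b5da-8b0a4093afb0"

payload = {
    "resource_provider_generation": 0,  # must match current gen, else 409
    "inventories": {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0,
                      "min_unit": 1, "max_unit": 7679, "step_size": 1},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0,
                 "min_unit": 1, "max_unit": 8, "step_size": 1},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9,
                    "min_unit": 1, "max_unit": 59, "step_size": 1},
    },
}
resp = requests.put(
    f"{PLACEMENT}/resource_providers/{uuid}/inventories",
    json=payload,
    headers={"X-Auth-Token": TOKEN,
             "OpenStack-API-Version": "placement 1.28"},
)
# On success placement bumps the provider generation, matching the
# "generation from 0 to 1" line in the log.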
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.818 2 DEBUG nova.compute.resource_tracker [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.818 2 DEBUG oslo_concurrency.lockutils [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.819 2 DEBUG nova.service [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.971 2 DEBUG nova.service [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 02 19:31:36 compute-0 nova_compute[355794]: 2025-10-02 19:31:36.972 2 DEBUG nova.servicegroup.drivers.db [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 02 19:31:37 compute-0 ceph-mon[191910]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2983297761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:31:37 compute-0 podman[356148]: 2025-10-02 19:31:37.696039611 +0000 UTC m=+0.088678411 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:31:37 compute-0 podman[356133]: 2025-10-02 19:31:37.696246136 +0000 UTC m=+0.118833306 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:31:37 compute-0 podman[356135]: 2025-10-02 19:31:37.702575536 +0000 UTC m=+0.107347970 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:31:37 compute-0 podman[356137]: 2025-10-02 19:31:37.71658208 +0000 UTC m=+0.114066759 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:31:37 compute-0 podman[356138]: 2025-10-02 19:31:37.716767885 +0000 UTC m=+0.110081473 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:31:37 compute-0 podman[356136]: 2025-10-02 19:31:37.726145895 +0000 UTC m=+0.110969896 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Oct 02 19:31:37 compute-0 podman[356134]: 2025-10-02 19:31:37.729914386 +0000 UTC m=+0.147470542 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Oct 02 19:31:37 compute-0 podman[356152]: 2025-10-02 19:31:37.739518863 +0000 UTC m=+0.131752352 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:31:37 compute-0 podman[356286]: 2025-10-02 19:31:37.809555614 +0000 UTC m=+0.061102294 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:31:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:39 compute-0 ceph-mon[191910]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:40 compute-0 sshd-session[356310]: Accepted publickey for zuul from 192.168.122.30 port 47316 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:31:40 compute-0 systemd-logind[793]: New session 59 of user zuul.
Oct 02 19:31:40 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct 02 19:31:40 compute-0 sshd-session[356310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:31:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:41 compute-0 ceph-mon[191910]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:41 compute-0 python3.9[356463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:31:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:42 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:31:43 compute-0 sudo[356618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqpxaqtiqoyhyjanprzfgxhlhqtxjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433502.210373-36-122861783722893/AnsiballZ_systemd_service.py'
Oct 02 19:31:43 compute-0 sudo[356618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:43 compute-0 ceph-mon[191910]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:43 compute-0 python3.9[356620]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:31:43 compute-0 systemd[1]: Reloading.
Oct 02 19:31:43 compute-0 systemd-sysv-generator[356650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:31:43 compute-0 systemd-rc-local-generator[356640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:31:44 compute-0 sudo[356618]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:45 compute-0 python3.9[356806]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:31:45 compute-0 network[356823]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:31:45 compute-0 network[356824]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:31:45 compute-0 network[356825]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:31:45 compute-0 ceph-mon[191910]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:47 compute-0 ceph-mon[191910]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:49 compute-0 ceph-mon[191910]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:51 compute-0 ceph-mon[191910]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:51 compute-0 sudo[357101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpgaqhjmmqqjpkisvlhmdwnylapgdckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433511.3124902-55-249465452118134/AnsiballZ_systemd_service.py'
Oct 02 19:31:51 compute-0 sudo[357101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:52 compute-0 python3.9[357103]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:31:52 compute-0 sudo[357101]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:53 compute-0 ceph-mon[191910]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:53 compute-0 podman[357199]: 2025-10-02 19:31:53.700295282 +0000 UTC m=+0.122596257 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:31:53 compute-0 sudo[357274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgisjfcmsneaglvpzdcyuvwjneaspxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433513.205962-65-232499827155500/AnsiballZ_file.py'
Oct 02 19:31:53 compute-0 sudo[357274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:54 compute-0 python3.9[357276]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:31:54 compute-0 sudo[357274]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:54 compute-0 sudo[357427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiibjtwuxnelfrjjwzaqbpnghvwofagn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433514.2813122-73-32334945961226/AnsiballZ_file.py'
Oct 02 19:31:54 compute-0 sudo[357427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:31:55 compute-0 python3.9[357429]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:31:55 compute-0 sudo[357427]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:55 compute-0 ceph-mon[191910]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:31:56 compute-0 sudo[357579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwwamllwgjnznbbodjjrsovuhyjozqse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433515.4143832-82-232896470662421/AnsiballZ_command.py'
Oct 02 19:31:56 compute-0 sudo[357579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Oct 02 19:31:56 compute-0 python3.9[357581]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:31:56 compute-0 sudo[357579]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:57 compute-0 ceph-mon[191910]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Oct 02 19:31:57 compute-0 python3.9[357733]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:31:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 02 19:31:58 compute-0 sudo[357883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzkdsbqotlbirvtdnkezusjnountmuby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433517.957972-100-67329520268999/AnsiballZ_systemd_service.py'
Oct 02 19:31:58 compute-0 sudo[357883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:31:58 compute-0 python3.9[357885]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:31:58 compute-0 systemd[1]: Reloading.
Oct 02 19:31:59 compute-0 systemd-rc-local-generator[357904]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:31:59 compute-0 systemd-sysv-generator[357910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:31:59 compute-0 sudo[357883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:31:59 compute-0 ceph-mon[191910]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 02 19:31:59 compute-0 podman[157186]: time="2025-10-02T19:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:31:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:31:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8531 "" "Go-http-client/1.1"
Oct 02 19:31:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Oct 02 19:32:00 compute-0 sudo[358070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvjykikrdnphfbhxhbqjoqpahgkelwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433519.7417336-108-197138536798634/AnsiballZ_command.py'
Oct 02 19:32:00 compute-0 sudo[358070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:00 compute-0 python3.9[358072]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:32:00 compute-0 sudo[358070]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:01 compute-0 sudo[358223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjqihirggneojuobfymvucdpswlvquql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433520.833747-117-124914073393423/AnsiballZ_file.py'
Oct 02 19:32:01 compute-0 sudo[358223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: ERROR   19:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:32:01 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:32:01 compute-0 python3.9[358225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:32:01 compute-0 ceph-mon[191910]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Oct 02 19:32:01 compute-0 sudo[358223]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:03 compute-0 python3.9[358375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:32:03
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:03 compute-0 ceph-mon[191910]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:32:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:32:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:04 compute-0 python3.9[358527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:05 compute-0 ceph-mon[191910]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:05 compute-0 python3.9[358603]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:32:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:06 compute-0 sudo[358753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pirvhmnaklwvrsjcmmrklpykqrwnqpkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433525.9559243-145-208859251897585/AnsiballZ_group.py'
Oct 02 19:32:06 compute-0 sudo[358753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:06 compute-0 python3.9[358755]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 02 19:32:06 compute-0 sudo[358753]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:07 compute-0 ceph-mon[191910]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:32:07 compute-0 sudo[358991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hueoitkisgmcxileqwwdyjobvmzvgcro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433527.2570853-156-20466590006364/AnsiballZ_getent.py'
Oct 02 19:32:07 compute-0 sudo[358991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:07 compute-0 podman[358889]: 2025-10-02 19:32:07.981030267 +0000 UTC m=+0.113578316 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:32:08 compute-0 podman[358881]: 2025-10-02 19:32:08.002860821 +0000 UTC m=+0.122983088 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:32:08 compute-0 podman[358879]: 2025-10-02 19:32:08.004073603 +0000 UTC m=+0.143919577 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:32:08 compute-0 podman[358909]: 2025-10-02 19:32:08.011530803 +0000 UTC m=+0.128859315 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler)
Oct 02 19:32:08 compute-0 podman[358882]: 2025-10-02 19:32:08.014969474 +0000 UTC m=+0.163316065 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:32:08 compute-0 podman[358884]: 2025-10-02 19:32:08.029501433 +0000 UTC m=+0.159182015 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:32:08 compute-0 podman[358880]: 2025-10-02 19:32:08.032928184 +0000 UTC m=+0.184805839 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter)
Oct 02 19:32:08 compute-0 podman[358883]: 2025-10-02 19:32:08.055866437 +0000 UTC m=+0.181996404 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 02 19:32:08 compute-0 podman[358902]: 2025-10-02 19:32:08.076948261 +0000 UTC m=+0.193002779 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:32:08 compute-0 python3.9[359019]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:32:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 19:32:08 compute-0 sudo[358991]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:09 compute-0 ceph-mon[191910]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct 02 19:32:09 compute-0 python3.9[359233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Oct 02 19:32:10 compute-0 python3.9[359309]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:11 compute-0 python3.9[359459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:11 compute-0 ceph-mon[191910]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Oct 02 19:32:12 compute-0 python3.9[359535]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:32:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:32:13 compute-0 python3.9[359685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:13 compute-0 ceph-mon[191910]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 02 19:32:13 compute-0 python3.9[359761]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:13 compute-0 nova_compute[355794]: 2025-10-02 19:32:13.976 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:14 compute-0 nova_compute[355794]: 2025-10-02 19:32:14.015 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:14 compute-0 python3.9[359911]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:32:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:15 compute-0 python3.9[360063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:32:15 compute-0 ceph-mon[191910]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:17 compute-0 python3.9[360215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:17 compute-0 ceph-mon[191910]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:17 compute-0 python3.9[360291]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:18 compute-0 sudo[360292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:18 compute-0 sudo[360292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:18 compute-0 sudo[360292]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:18 compute-0 sudo[360317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:32:18 compute-0 sudo[360317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:18 compute-0 sudo[360317]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:18 compute-0 sudo[360342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:18 compute-0 sudo[360342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:18 compute-0 sudo[360342]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:18 compute-0 sudo[360367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:32:18 compute-0 sudo[360367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:19 compute-0 sudo[360367]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6f59cf6e-41a4-4180-bad9-9257dd2dfb8e does not exist
Oct 02 19:32:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e9d10f41-31b2-4b14-b439-fe32bba7d9df does not exist
Oct 02 19:32:19 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 86877c0b-d920-4732-b396-bc406202ffa1 does not exist
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:32:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:32:19 compute-0 sudo[360573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:19 compute-0 sudo[360573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:19 compute-0 python3.9[360571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:19 compute-0 sudo[360573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:19 compute-0 sudo[360599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:32:19 compute-0 sudo[360599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:19 compute-0 sudo[360599]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:19 compute-0 sudo[360646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:19 compute-0 ceph-mon[191910]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:32:19 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:32:19 compute-0 sudo[360646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:19 compute-0 sudo[360646]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:19 compute-0 sudo[360696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:32:19 compute-0 sudo[360696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:20 compute-0 python3.9[360748]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.421591282 +0000 UTC m=+0.086442521 container create 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.377613136 +0000 UTC m=+0.042464485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:20 compute-0 systemd[1]: Started libpod-conmon-99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671.scope.
Oct 02 19:32:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.576579543 +0000 UTC m=+0.241430812 container init 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.589338294 +0000 UTC m=+0.254189563 container start 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.59703752 +0000 UTC m=+0.261888839 container attach 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:32:20 compute-0 brave_lovelace[360881]: 167 167
Oct 02 19:32:20 compute-0 systemd[1]: libpod-99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671.scope: Deactivated successfully.
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.6180089 +0000 UTC m=+0.282860179 container died 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-162a8122d77621196cc73b5ae31a8278a3657768b145ad67ce056b7e19621212-merged.mount: Deactivated successfully.
Oct 02 19:32:20 compute-0 podman[360833]: 2025-10-02 19:32:20.711235141 +0000 UTC m=+0.376086410 container remove 99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lovelace, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:32:20 compute-0 systemd[1]: libpod-conmon-99a89963f15d4e4ddcc2e1148dbe1d40da181850bf7d602fb0675c6627460671.scope: Deactivated successfully.
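[editor's note] The eight lines above are one complete short-lived podman container: create/start/attach, a single line of output ("167 167"), then died/remove and teardown of the libpod and conmon scopes, all within roughly 0.3 s. This is consistent with cephadm probing the ceph UID/GID inside the image (167 is the ceph user/group in upstream Ceph containers). A minimal sketch of that kind of probe, assuming podman is on PATH; the container's actual command is not shown in the log, so the stat'ed path /var/lib/ceph is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def probe_ceph_ids(image: str = IMAGE) -> tuple[int, int]:
        # Throwaway container (the podman-run --rm pattern seen above) whose
        # only job is to print the uid/gid owning /var/lib/ceph in the image.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    if __name__ == "__main__":
        uid, gid = probe_ceph_ids()
        print(uid, gid)  # expected: 167 167, matching the container output above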
Oct 02 19:32:20 compute-0 podman[360971]: 2025-10-02 19:32:20.940168829 +0000 UTC m=+0.091011223 container create c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 19:32:21 compute-0 systemd[1]: Started libpod-conmon-c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e.scope.
Oct 02 19:32:21 compute-0 podman[360971]: 2025-10-02 19:32:20.918090059 +0000 UTC m=+0.068932493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
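[editor's note] The "supports timestamps until 2038 (0x7fffffff)" messages above are the kernel noting, at each bind-mount into the container, that these XFS filesystems were made without big timestamps, so inode times are capped at the 32-bit signed epoch limit. The cap is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second the filesystem can store.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00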
Oct 02 19:32:21 compute-0 python3.9[360987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:21 compute-0 podman[360971]: 2025-10-02 19:32:21.113444779 +0000 UTC m=+0.264287193 container init c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:32:21 compute-0 podman[360971]: 2025-10-02 19:32:21.142184107 +0000 UTC m=+0.293026491 container start c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:32:21 compute-0 podman[360971]: 2025-10-02 19:32:21.147267203 +0000 UTC m=+0.298109617 container attach c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:32:21 compute-0 python3.9[361075]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
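[editor's note] The ansible-ansible.legacy.stat / ansible-ansible.legacy.file pair above is the usual template-deployment pattern: stat the destination with a SHA-1 checksum, then enforce state and mode. Note that mode=420 is ansible logging the decimal form of 0o644. A rough local equivalent of what the stat step computes (path taken from the log; chunked hashing is an implementation choice of mine):

    import hashlib
    import os
    import stat

    path = "/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json"

    # SHA-1 checksum, as requested by checksum_algorithm=sha1 in the log line.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)

    st = os.stat(path)
    print(h.hexdigest(), oct(stat.S_IMODE(st.st_mode)))  # 0o644 == decimal 420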
Oct 02 19:32:21 compute-0 ceph-mon[191910]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
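[editor's note] The mon/mgr lines above repeat a compact pgmap summary (here 321 PGs, all active+clean, on a 60 GiB cluster). The summary has a stable shape, so it can be picked apart with a regex when tailing the journal; in this log all PGs share one state, so a single capture suffices:

    import re

    line = ("pgmap v811: 321 pgs: 321 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+);", line)
    if m:
        version, total, count, states = m.groups()
        print(version, total, count, states)  # 811 321 321 active+clean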
Oct 02 19:32:22 compute-0 quizzical_jackson[360995]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:32:22 compute-0 quizzical_jackson[360995]: --> relative data size: 1.0
Oct 02 19:32:22 compute-0 quizzical_jackson[360995]: --> All data devices are unavailable
Oct 02 19:32:22 compute-0 systemd[1]: libpod-c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e.scope: Deactivated successfully.
Oct 02 19:32:22 compute-0 systemd[1]: libpod-c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e.scope: Consumed 1.088s CPU time.
Oct 02 19:32:22 compute-0 podman[361223]: 2025-10-02 19:32:22.367779937 +0000 UTC m=+0.039312322 container died c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 02 19:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3af9a375e307864f7a8efa9a71aadd6a4f5782217327b4fd171c803e5a9d12b-merged.mount: Deactivated successfully.
Oct 02 19:32:22 compute-0 podman[361223]: 2025-10-02 19:32:22.436644887 +0000 UTC m=+0.108177272 container remove c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:32:22 compute-0 systemd[1]: libpod-conmon-c437bb869a51272dbf5f95c5aa508c040e91a44ca311466e81673ecc303e753e.scope: Deactivated successfully.
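[editor's note] The quizzical_jackson run above is a ceph-volume probe: "passed data devices: 0 physical, 3 LVM" plus "All data devices are unavailable" means every candidate device already carries an LVM-prepared OSD, so there is nothing new to provision. One hedged way to see the same availability decision is ceph-volume's inventory report; the exact subcommand behind this particular container is not shown in the log, so this is a sketch modelled on the cephadm ceph-volume call that appears further down:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"  # fsid from the cephadm call below

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for dev in json.loads(out):
        print(dev["path"],
              "available" if dev["available"] else dev["rejected_reasons"])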
Oct 02 19:32:22 compute-0 sudo[360696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:22 compute-0 python3.9[361261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:22 compute-0 sudo[361262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:22 compute-0 sudo[361262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:22 compute-0 sudo[361262]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:22 compute-0 sudo[361289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:32:22 compute-0 sudo[361289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:22 compute-0 sudo[361289]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:22 compute-0 sudo[361337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:22 compute-0 sudo[361337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:22 compute-0 sudo[361337]: pam_unix(sudo:session): session closed for user root
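[editor's note] The sudo bursts above (COMMAND=/bin/true, COMMAND=/bin/which python3) are connection probes: confirm that ceph-admin has working root via sudo and locate a python interpreter before running the real payload. The same checks, sketched locally; the -n flag (fail instead of prompting) is my choice, the probed commands are taken from the log:

    import subprocess

    def probe() -> str:
        # "can we sudo at all?" -- mirrors COMMAND=/bin/true above
        subprocess.run(["sudo", "-n", "true"], check=True)
        # "where is python3?" -- mirrors COMMAND=/bin/which python3 above
        return subprocess.run(
            ["sudo", "-n", "which", "python3"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    print(probe())  # e.g. /usr/bin/python3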
Oct 02 19:32:22 compute-0 sudo[361386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:32:23 compute-0 sudo[361386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
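[editor's note] This is the payload the probes were leading up to: the node-local cephadm copy runs ceph-volume inside the Ceph image with "lvm list --format json", a machine-readable listing of the OSD logical volumes on this host. Consuming that JSON looks roughly like this; I use the packaged cephadm CLI rather than the copied script from the log, and ceph-volume's osd-id-keyed output schema is assumed:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    # ceph-volume keys the report by OSD id, one entry per logical volume.
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get("lv_path"), lv.get("type"))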
Oct 02 19:32:23 compute-0 python3.9[361437]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.515955858 +0000 UTC m=+0.060117637 container create 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:32:23 compute-0 systemd[1]: Started libpod-conmon-10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753.scope.
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.496831747 +0000 UTC m=+0.040993576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.62567024 +0000 UTC m=+0.169832039 container init 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.637473226 +0000 UTC m=+0.181635005 container start 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.641974776 +0000 UTC m=+0.186136555 container attach 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:32:23 compute-0 heuristic_lewin[361562]: 167 167
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.647555705 +0000 UTC m=+0.191717504 container died 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:32:23 compute-0 systemd[1]: libpod-10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753.scope: Deactivated successfully.
Oct 02 19:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0758ba0ef58efd4133b797d2780ea681c2e73f3c96564a2deb5e912b780cadd4-merged.mount: Deactivated successfully.
Oct 02 19:32:23 compute-0 podman[361506]: 2025-10-02 19:32:23.707290881 +0000 UTC m=+0.251452680 container remove 10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:32:23 compute-0 systemd[1]: libpod-conmon-10291634a248362b051e4e5d69c7fcc1151ac346fe08d700dff03d2b6dd5f753.scope: Deactivated successfully.
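[editor's note] Each of these throwaway containers leaves the same systemd footprint: a transient libpod-<id>.scope for the container itself and a libpod-conmon-<id>.scope for its conmon monitor, both "Deactivated successfully" once the container exits. Listing the transient scopes is enough to watch them appear and disappear; the name pattern is taken from the log:

    import subprocess

    # Transient per-container scopes created by podman/conmon, as in the log above.
    print(subprocess.run(
        ["systemctl", "list-units", "--type=scope", "--all", "libpod*"],
        capture_output=True, text=True,
    ).stdout)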
Oct 02 19:32:23 compute-0 ceph-mon[191910]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:23 compute-0 podman[361610]: 2025-10-02 19:32:23.853872188 +0000 UTC m=+0.098885633 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
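[editor's note] The multipathd line above is a podman health check event: health_status=healthy with health_failing_streak=0, produced by the '/openstack/healthcheck' test declared in config_data. The status can be re-queried on demand; "podman healthcheck run" exits non-zero when unhealthy, and the inspect format string below is the podman 4.x field name (older releases exposed it as .State.Healthcheck):

    import subprocess

    name = "multipathd"

    # Trigger the configured test once; exit code 0 means healthy.
    subprocess.run(["podman", "healthcheck", "run", name], check=False)

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)  # "healthy", matching health_status=healthy above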
Oct 02 19:32:23 compute-0 podman[361654]: 2025-10-02 19:32:23.933033864 +0000 UTC m=+0.069092738 container create 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:32:23 compute-0 systemd[1]: Started libpod-conmon-1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f.scope.
Oct 02 19:32:24 compute-0 podman[361654]: 2025-10-02 19:32:23.910656166 +0000 UTC m=+0.046715090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ead4bde0814bb5fca6be7d85af5c0fa54c062e695acb07ae8faf2e8b4c7fe74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ead4bde0814bb5fca6be7d85af5c0fa54c062e695acb07ae8faf2e8b4c7fe74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ead4bde0814bb5fca6be7d85af5c0fa54c062e695acb07ae8faf2e8b4c7fe74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ead4bde0814bb5fca6be7d85af5c0fa54c062e695acb07ae8faf2e8b4c7fe74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:24 compute-0 podman[361654]: 2025-10-02 19:32:24.073025454 +0000 UTC m=+0.209084358 container init 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:32:24 compute-0 podman[361654]: 2025-10-02 19:32:24.099669836 +0000 UTC m=+0.235728710 container start 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:32:24 compute-0 podman[361654]: 2025-10-02 19:32:24.104102265 +0000 UTC m=+0.240161149 container attach 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:32:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:24 compute-0 python3.9[361699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.441 17 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.442 17 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.442 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a00e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.443 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feec38a00b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.444 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.445 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec5f86ab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.446 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38272c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.447 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec6d242f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.448 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.449 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38a03b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.450 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38273e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.451 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.452 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.453 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec38274d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.453 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.446 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.454 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feec72bb7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.455 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.454 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec4ac1ee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.456 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.457 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.457 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3825730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.457 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.458 17 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feec3827fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feec4a1a8a0>] with cache [{}], pollster history [{'network.outgoing.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.455 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feec38264b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.459 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feec3827c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.460 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feec3827230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.460 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feec3825700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feec3827290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.461 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.461 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feec38277a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feec38272f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.462 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.462 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feec3827350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.463 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.463 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feec38a0380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.464 17 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.465 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feec38273b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.465 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.466 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feec49646e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.466 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.467 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feec3827440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.468 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.469 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feec3827c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.469 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.470 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feec38274a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.471 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.471 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feec3827ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.472 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.473 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feec3827d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.474 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.474 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feec3827dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.475 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.476 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feec3827e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.476 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.477 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feec38a0050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.478 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.478 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feec38276e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.479 17 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.480 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feec3827ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.481 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.481 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feec49641d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.482 17 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.483 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feec3827740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.483 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.484 17 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feec3827f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feec49bfaa0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.485 17 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.486 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.486 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.486 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.486 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.487 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.487 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.487 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.487 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.488 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.488 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.488 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.488 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.488 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.489 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.489 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.489 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.489 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.490 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.490 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.491 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.492 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.492 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.493 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.493 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.494 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:32:24 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:32:24.494 17 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
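The ceilometer DEBUG lines above trace one polling cycle: each pollster first gets a discovery pass, pollsters whose discovery returns nothing are skipped, and the remainder are processed to completion. A minimal sketch of that skip-if-empty loop, with hypothetical stand-ins for the pollster objects (this is not ceilometer's actual AgentManager code):

    # Sketch of the discover-then-poll cycle implied by the log lines above.
    # `pollsters` and `discover` are illustrative stand-ins, not ceilometer API.
    def run_polling_cycle(pollsters, discover):
        samples = []
        for pollster in pollsters:
            resources = discover(pollster.discovery_method)
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                continue
            samples.extend(pollster.get_samples(resources))
            print(f"Finished processing pollster [{pollster.name}].")
        return samples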
Oct 02 19:32:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:24 compute-0 python3.9[361781]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
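The ansible file module reports mode=420 because it logs the permission bits in decimal; 420 is the familiar octal 0644. A one-line check:

    # 420 (decimal) == 0o644 (octal), i.e. rw-r--r--
    assert 420 == 0o644
    print(oct(420))  # -> '0o644'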
Oct 02 19:32:24 compute-0 zealous_robinson[361700]: {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     "0": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "devices": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "/dev/loop3"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             ],
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_name": "ceph_lv0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_size": "21470642176",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "name": "ceph_lv0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "tags": {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_name": "ceph",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.crush_device_class": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.encrypted": "0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_id": "0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.vdo": "0"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             },
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "vg_name": "ceph_vg0"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         }
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     ],
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     "1": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "devices": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "/dev/loop4"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             ],
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_name": "ceph_lv1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_size": "21470642176",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "name": "ceph_lv1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "tags": {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_name": "ceph",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.crush_device_class": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.encrypted": "0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_id": "1",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.vdo": "0"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             },
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "vg_name": "ceph_vg1"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         }
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     ],
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     "2": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "devices": [
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "/dev/loop5"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             ],
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_name": "ceph_lv2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_size": "21470642176",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "name": "ceph_lv2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "tags": {
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.cluster_name": "ceph",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.crush_device_class": "",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.encrypted": "0",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osd_id": "2",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:                 "ceph.vdo": "0"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             },
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "type": "block",
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:             "vg_name": "ceph_vg2"
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:         }
Oct 02 19:32:24 compute-0 zealous_robinson[361700]:     ]
Oct 02 19:32:24 compute-0 zealous_robinson[361700]: }
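The JSON that zealous_robinson just printed has the shape of ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the ceph.* metadata repeated as LVM tags. A short sketch that pulls out the OSD-to-device mapping (osd_json is assumed to hold the payload above):

    import json

    report = json.loads(osd_json)  # {"0": [...], "1": [...], "2": [...]}
    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")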
Oct 02 19:32:24 compute-0 systemd[1]: libpod-1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f.scope: Deactivated successfully.
Oct 02 19:32:24 compute-0 podman[361654]: 2025-10-02 19:32:24.97323049 +0000 UTC m=+1.109289374 container died 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ead4bde0814bb5fca6be7d85af5c0fa54c062e695acb07ae8faf2e8b4c7fe74-merged.mount: Deactivated successfully.
Oct 02 19:32:25 compute-0 podman[361654]: 2025-10-02 19:32:25.072672277 +0000 UTC m=+1.208731161 container remove 1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:32:25 compute-0 systemd[1]: libpod-conmon-1fb723f7dd24fe362a13ddacf9caae088919e15c046e540db2e93ac9a0e81a8f.scope: Deactivated successfully.
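Each create/init/start/attach/died/remove run in the podman log is a short-lived helper container that cephadm launches, harvests output from, and discards; the back-to-back died/remove pair is what --rm produces. An illustrative one-shot invocation via subprocess (image digest taken from the log; in practice cephadm also bind-mounts /dev and the cluster config, which this sketch omits):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm reproduces the immediate "container died" / "container remove" events.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout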
Oct 02 19:32:25 compute-0 sudo[361386]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:25 compute-0 sudo[361850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:25 compute-0 sudo[361850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:25 compute-0 sudo[361850]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:25 compute-0 sudo[361897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:32:25 compute-0 sudo[361897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:25 compute-0 sudo[361897]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:25 compute-0 sudo[361945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:25 compute-0 sudo[361945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:25 compute-0 sudo[361945]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:25 compute-0 sudo[361995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:32:25 compute-0 sudo[361995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:25 compute-0 ceph-mon[191910]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:25 compute-0 python3.9[362043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.106641936 +0000 UTC m=+0.069271002 container create 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:32:26 compute-0 systemd[1]: Started libpod-conmon-7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201.scope.
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.072995857 +0000 UTC m=+0.035624973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.224325021 +0000 UTC m=+0.186954137 container init 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.236344272 +0000 UTC m=+0.198973358 container start 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.242041754 +0000 UTC m=+0.204670870 container attach 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:32:26 compute-0 quizzical_pike[362147]: 167 167
Oct 02 19:32:26 compute-0 systemd[1]: libpod-7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201.scope: Deactivated successfully.
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.247418468 +0000 UTC m=+0.210047534 container died 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e014552bc89dd32a1f6e2948ca344bdf5be09760a66656b3a6eed13d54c5613-merged.mount: Deactivated successfully.
Oct 02 19:32:26 compute-0 podman[362106]: 2025-10-02 19:32:26.296950761 +0000 UTC m=+0.259579827 container remove 7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:32:26 compute-0 systemd[1]: libpod-conmon-7a6fee33aec5e9f045d49c387f1d8bb2a5f033fb5248a01c2626177f13333201.scope: Deactivated successfully.
Oct 02 19:32:26 compute-0 python3.9[362178]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:26 compute-0 podman[362195]: 2025-10-02 19:32:26.563103773 +0000 UTC m=+0.081094978 container create db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:32:26 compute-0 podman[362195]: 2025-10-02 19:32:26.532109165 +0000 UTC m=+0.050100420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:32:26 compute-0 systemd[1]: Started libpod-conmon-db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068.scope.
Oct 02 19:32:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2dade5c4d8afa22179ddb3a8d85f09a303bc120de477f9f8a877463bb876e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2dade5c4d8afa22179ddb3a8d85f09a303bc120de477f9f8a877463bb876e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2dade5c4d8afa22179ddb3a8d85f09a303bc120de477f9f8a877463bb876e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2dade5c4d8afa22179ddb3a8d85f09a303bc120de477f9f8a877463bb876e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
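The kernel's note that the remounted XFS "supports timestamps until 2038 (0x7fffffff)" refers to the signed 32-bit time_t ceiling; the exact cutoff falls out directly:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00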
Oct 02 19:32:26 compute-0 podman[362195]: 2025-10-02 19:32:26.706130535 +0000 UTC m=+0.224121750 container init db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:32:26 compute-0 podman[362195]: 2025-10-02 19:32:26.716973535 +0000 UTC m=+0.234964720 container start db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:32:26 compute-0 podman[362195]: 2025-10-02 19:32:26.722189654 +0000 UTC m=+0.240180869 container attach db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:32:27 compute-0 python3.9[362366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:27 compute-0 blissful_leakey[362256]: {
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_id": 1,
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "type": "bluestore"
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     },
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_id": 2,
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "type": "bluestore"
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     },
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_id": 0,
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:         "type": "bluestore"
Oct 02 19:32:27 compute-0 blissful_leakey[362256]:     }
Oct 02 19:32:27 compute-0 blissful_leakey[362256]: }
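blissful_leakey prints the companion ceph-volume raw list view, keyed by osd_uuid rather than OSD id. Joining it to the earlier lvm listing on ceph.osd_fsid confirms both reports describe the same three bluestore OSDs (raw_json and lvm_json are assumed to hold the two payloads):

    import json

    raw = json.loads(raw_json)   # osd_uuid -> {"osd_id": ..., "device": ...}
    lvm = json.loads(lvm_json)   # osd_id   -> [{"tags": {"ceph.osd_fsid": ...}}]
    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        assert raw[fsid]["osd_id"] == int(osd_id)
        print(f"osd.{osd_id}: {raw[fsid]['device']} ({raw[fsid]['type']})")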
Oct 02 19:32:27 compute-0 systemd[1]: libpod-db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068.scope: Deactivated successfully.
Oct 02 19:32:27 compute-0 systemd[1]: libpod-db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068.scope: Consumed 1.069s CPU time.
Oct 02 19:32:27 compute-0 podman[362195]: 2025-10-02 19:32:27.798120395 +0000 UTC m=+1.316111630 container died db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:32:27 compute-0 ceph-mon[191910]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba2dade5c4d8afa22179ddb3a8d85f09a303bc120de477f9f8a877463bb876e-merged.mount: Deactivated successfully.
Oct 02 19:32:27 compute-0 podman[362195]: 2025-10-02 19:32:27.90980892 +0000 UTC m=+1.427800125 container remove db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:32:27 compute-0 systemd[1]: libpod-conmon-db7c99dfbb6d1aed22bd1c3e4cd7981be480c6670b29d84255cad9fcb3b90068.scope: Deactivated successfully.
Oct 02 19:32:27 compute-0 sudo[361995]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:32:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:32:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev df05ec0b-6be2-4e7f-8fbd-977aac65d946 does not exist
Oct 02 19:32:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev bca631e0-c384-4d04-9f35-d5f877e90e33 does not exist
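The config-key set commands show the mgr caching the device inventory it just gathered under mgr/cephadm/host.compute-0.devices.0 in the monitors' config-key store. It can be read back with ceph config-key get; a sketch (key name taken from the log, an admin keyring assumed, and the stored value assumed to be JSON as cephadm writes it):

    import json, subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.run(["ceph", "config-key", "get", KEY],
                         capture_output=True, text=True, check=True).stdout
    inventory = json.loads(raw)  # cephadm's cached per-host device report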
Oct 02 19:32:28 compute-0 python3.9[362472]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:28 compute-0 sudo[362483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:32:28 compute-0 sudo[362483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:28 compute-0 sudo[362483]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:28 compute-0 sudo[362508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:32:28 compute-0 sudo[362508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:32:28 compute-0 sudo[362508]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:32:28 compute-0 ceph-mon[191910]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:29 compute-0 python3.9[362682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:29 compute-0 podman[157186]: time="2025-10-02T19:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:32:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:32:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8549 "" "Go-http-client/1.1"
Oct 02 19:32:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:30 compute-0 python3.9[362758]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:31 compute-0 ceph-mon[191910]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: ERROR   19:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:32:31 compute-0 openstack_network_exporter[159337]: 
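The exporter errors are all one underlying condition: ovs-appctl-style calls need a daemon control socket (named <daemon>.<pid>.ctl in the daemon's run directory), and neither ovn-northd nor ovsdb-server runs on this compute node, which hosts only the controller side of OVN. A quick existence check along the same lines (run directories are conventional defaults and may differ; OVN daemons often use /var/run/ovn instead):

    import glob

    RUN_DIRS = ("/var/run/openvswitch", "/var/run/ovn")  # assumed locations
    for daemon in ("ovn-northd", "ovsdb-server", "ovs-vswitchd"):
        socks = [s for d in RUN_DIRS for s in glob.glob(f"{d}/{daemon}.*.ctl")]
        print(daemon, "->", socks or "no control socket files found")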
Oct 02 19:32:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:32 compute-0 python3.9[362908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:32:32.277 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:32:32.278 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:32:32.279 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
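The Acquiring/acquired/released triplet is oslo.concurrency's standard DEBUG trace for a named critical section. In application code the pattern looks like this (lockutils.synchronized is the real oslo API; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named in-process lock held; oslo logs the
        # "Acquiring" / "acquired" / "released" lines seen above.
        pass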
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.580 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.580 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.580 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.602 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.602 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.603 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.605 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.605 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.606 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
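The burst of "Running periodic task ComputeManager._*" lines is oslo.service iterating over every method the manager registered with the periodic_task decorator. A minimal registration sketch (periodic_task and PeriodicTasks are the real oslo_service API; the manager class and spacing are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # each registered task shows up as "Running periodic task ..."

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)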
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.644 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.645 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.645 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.645 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:32:32 compute-0 nova_compute[355794]: 2025-10-02 19:32:32.647 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:32 compute-0 python3.9[362985]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:32:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470999255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.159 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:33 compute-0 ceph-mon[191910]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1470999255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.533 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.535 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4546MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.536 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.536 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.606 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.607 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:32:33 compute-0 nova_compute[355794]: 2025-10-02 19:32:33.629 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:33 compute-0 python3.9[363156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:32:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517739960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:32:34 compute-0 nova_compute[355794]: 2025-10-02 19:32:34.088 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:34 compute-0 nova_compute[355794]: 2025-10-02 19:32:34.097 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:32:34 compute-0 nova_compute[355794]: 2025-10-02 19:32:34.120 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:32:34 compute-0 nova_compute[355794]: 2025-10-02 19:32:34.122 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:32:34 compute-0 nova_compute[355794]: 2025-10-02 19:32:34.122 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:34 compute-0 python3.9[363254]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/517739960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:32:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:35 compute-0 ceph-mon[191910]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:35 compute-0 python3.9[363404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:36 compute-0 python3.9[363480]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:37 compute-0 python3.9[363630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:37 compute-0 ceph-mon[191910]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:37 compute-0 python3.9[363706]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:38 compute-0 podman[363841]: 2025-10-02 19:32:38.720612944 +0000 UTC m=+0.127386666 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:32:38 compute-0 podman[363828]: 2025-10-02 19:32:38.728949936 +0000 UTC m=+0.142924200 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public)
Oct 02 19:32:38 compute-0 podman[363859]: 2025-10-02 19:32:38.737978378 +0000 UTC m=+0.126270576 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler)
Oct 02 19:32:38 compute-0 podman[363831]: 2025-10-02 19:32:38.744658166 +0000 UTC m=+0.142037717 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 19:32:38 compute-0 podman[363824]: 2025-10-02 19:32:38.744978245 +0000 UTC m=+0.166632874 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:32:38 compute-0 podman[363834]: 2025-10-02 19:32:38.758083245 +0000 UTC m=+0.161742533 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 19:32:38 compute-0 podman[363832]: 2025-10-02 19:32:38.767227099 +0000 UTC m=+0.156974456 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:32:38 compute-0 podman[363845]: 2025-10-02 19:32:38.768729179 +0000 UTC m=+0.151416717 container health_status d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:32:38 compute-0 podman[363849]: 2025-10-02 19:32:38.771310498 +0000 UTC m=+0.163461259 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:32:38 compute-0 python3.9[363951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:39 compute-0 python3.9[364102]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:39 compute-0 ceph-mon[191910]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:40 compute-0 python3.9[364252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:41 compute-0 python3.9[364328]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:41 compute-0 ceph-mon[191910]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:42 compute-0 sudo[364478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgkpotwtpzvczrzudazfsgjkcpynqksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433561.8364854-393-149875168674491/AnsiballZ_file.py'
Oct 02 19:32:42 compute-0 sudo[364478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:42 compute-0 python3.9[364480]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:42 compute-0 sudo[364478]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:43 compute-0 ceph-mon[191910]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:44 compute-0 sudo[364630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wilgbftnipghksnxzmpgsarkconevehd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433562.9087255-401-232119329824695/AnsiballZ_file.py'
Oct 02 19:32:44 compute-0 sudo[364630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:44 compute-0 python3.9[364632]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:44 compute-0 sudo[364630]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:45 compute-0 sudo[364782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sanilvbcoanpbwvugscfwptvhhmglzhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433564.7700589-409-96884584379558/AnsiballZ_file.py'
Oct 02 19:32:45 compute-0 sudo[364782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:45 compute-0 python3.9[364784]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:32:45 compute-0 sudo[364782]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:45 compute-0 ceph-mon[191910]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:46 compute-0 sudo[364934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdzdflqhanahrlrvzwupfftgtzpttpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433565.838366-417-102637671364114/AnsiballZ_systemd_service.py'
Oct 02 19:32:46 compute-0 sudo[364934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:46 compute-0 python3.9[364936]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:32:46 compute-0 sudo[364934]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:47 compute-0 ceph-mon[191910]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:47 compute-0 sudo[365088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szcedtlwluunizsspervrubbhhsuqxbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433567.2283015-426-218731899055638/AnsiballZ_stat.py'
Oct 02 19:32:47 compute-0 sudo[365088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:47 compute-0 python3.9[365090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:48 compute-0 sudo[365088]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:48 compute-0 sudo[365166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxylwzuarnsbvavitnyxyazblpkszml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433567.2283015-426-218731899055638/AnsiballZ_file.py'
Oct 02 19:32:48 compute-0 sudo[365166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:48 compute-0 python3.9[365168]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:32:48 compute-0 sudo[365166]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:49 compute-0 sudo[365242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrvaceiydduyojwbagfvouquwincwqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433567.2283015-426-218731899055638/AnsiballZ_stat.py'
Oct 02 19:32:49 compute-0 sudo[365242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:49 compute-0 python3.9[365244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:32:49 compute-0 sudo[365242]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:49 compute-0 ceph-mon[191910]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:49 compute-0 sudo[365321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiemzgmfrwzcwtpnhwvwxpxhbtrziqmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433567.2283015-426-218731899055638/AnsiballZ_file.py'
Oct 02 19:32:49 compute-0 sudo[365321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:50 compute-0 python3.9[365323]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:32:50 compute-0 sudo[365321]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:51 compute-0 sudo[365473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwavwzcfummaulugphiihyenbcwfgxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433570.522289-448-40651235668684/AnsiballZ_container_config_data.py'
Oct 02 19:32:51 compute-0 sudo[365473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:51 compute-0 python3.9[365475]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct 02 19:32:51 compute-0 sudo[365473]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:51 compute-0 ceph-mon[191910]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:52 compute-0 sudo[365625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womjtxsmulfidnfusxixxrjdelljzgoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433571.8163915-457-57162024184884/AnsiballZ_container_config_hash.py'
Oct 02 19:32:52 compute-0 sudo[365625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:52 compute-0 python3.9[365627]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:32:52 compute-0 sudo[365625]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:53 compute-0 ceph-mon[191910]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:54 compute-0 sudo[365786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzuxmnrwpkxcvjptywzhmnauhtukoesw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433573.6871502-467-97427196529850/AnsiballZ_edpm_container_manage.py'
Oct 02 19:32:54 compute-0 sudo[365786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:54 compute-0 podman[365751]: 2025-10-02 19:32:54.459765503 +0000 UTC m=+0.139854729 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:32:54 compute-0 python3[365793]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:32:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:32:54 compute-0 python3[365793]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "af55c482fa6ac3c7068a40d60290d5ada8b2ec948be38389742c3fe61801742f",
                                                     "Digest": "sha256:4c180f93e2b3735d04bf1d2c500a5db03500067fcb41b9f0d74e9dddef3d36fe",
                                                     "RepoTags": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:4c180f93e2b3735d04bf1d2c500a5db03500067fcb41b9f0d74e9dddef3d36fe"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-10-02T05:12:20.810442253Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.4",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20250930",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "874b68da40aaccacbf39bda6727f8345",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 601350857,
                                                     "VirtualSize": 601350857,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/e89d8222c81a239ed0303cc681202bfa30965bd1a8f2d53e7528c8f014e5f31c/diff:/var/lib/containers/storage/overlay/1fc8465d2e54cb15890bb3c0f90aadda9818ead619be3e58f148b539a70e054c/diff:/var/lib/containers/storage/overlay/c5ff9952b0367dd059bad363e1585e7d0355ed6ba52c10956cb78fbceffa1162/diff:/var/lib/containers/storage/overlay/3d7063f4eea69b495feea125e59ae13f34b53a14073a991c7a4a030171385d0a/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/3a8f8d8d58c1832b103026df3db1c351aabf0b50d8b670fb49565ba7ad4076ad/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/3a8f8d8d58c1832b103026df3db1c351aabf0b50d8b670fb49565ba7ad4076ad/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:3d7063f4eea69b495feea125e59ae13f34b53a14073a991c7a4a030171385d0a",
                                                               "sha256:891920e069eac8b1a93226e153512918f0bcacb06f1e430e31639cb839ec70f5",
                                                               "sha256:a993d622596218646f9e29bdd5b24f6164d81546ed80b75dc2ad0ffa7bdfd10e",
                                                               "sha256:42ccd7bcdefb4835270b80e199060952d2212f1c180d5b96bf9718b6476f8179",
                                                               "sha256:b4e771cfbd7c83ffd4b0e082bd4de403350831dada617eed970772f8a7e8d41a"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.4",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20250930",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "874b68da40aaccacbf39bda6727f8345",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-09-30T01:01:28.661608703Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:418939e7e2ccbc31d43c6839107e54b74f045789b3e6192d8110e4430180b37e in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-09-30T01:01:28.661706565Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20250930\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-09-30T01:01:31.805474404Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484125183Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream10",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484140434Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484154594Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484171305Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484186015Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.484197865Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:16.896849212Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:17.507329258Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:27.560716717Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:30.529675395Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:30.880220349Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/pki/tls/cert.pem\" ]; then ln -s /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/pki/tls/cert.pem; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:31.215750559Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:31.575059134Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:32.20818526Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:32.563512503Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:32.923594383Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:33.27877381Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:33.626211518Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:33.97179191Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:34.316485613Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:34.67909803Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:35.024176715Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:35.371256353Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:35.717810084Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:36.06548627Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:36.416832229Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:38.931008787Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:39.279316862Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:39.628415782Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:40.716685178Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:42.012270439Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:42.012325891Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:42.012345171Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:42.012389803Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:04:43.596819209Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"874b68da40aaccacbf39bda6727f8345\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:05:17.973460708Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-base:874b68da40aaccacbf39bda6727f8345",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:05:31.020276305Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:05:35.251151479Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"874b68da40aaccacbf39bda6727f8345\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:08:04.769544928Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-os:874b68da40aaccacbf39bda6727f8345",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:08:10.894029523Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:08:25.988851741Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:08:38.688670644Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"874b68da40aaccacbf39bda6727f8345\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:12:08.547379977Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-base:874b68da40aaccacbf39bda6727f8345",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:12:20.807245184Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T05:12:29.824024404Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"874b68da40aaccacbf39bda6727f8345\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ]
                                                }
                                           ]
                                           : quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
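The `podman image inspect` output ends above. In that metadata, the History array distinguishes real filesystem layers from metadata-only build steps (LABEL, ENV, USER, and so on) by the empty_layer flag. A minimal sketch of walking it, assuming the JSON above was saved to inspect.json (a hypothetical filename):

    import json

    # "podman image inspect <image>" prints a JSON array with one object per image.
    with open("inspect.json") as f:          # hypothetical dump of the output above
        image = json.load(f)[0]

    for step in image["History"]:
        # empty_layer steps changed only metadata and added no filesystem layer.
        kind = "metadata" if step.get("empty_layer") else "layer"
        print(f'{step["created"]}  [{kind:8}]  {step["created_by"][:80]}')
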
Oct 02 19:32:55 compute-0 sudo[365786]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:55 compute-0 ceph-mon[191910]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:56 compute-0 sudo[366004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknseudkyvvitftxtyrmcqdjxjssjpfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433575.7409024-475-159185439046813/AnsiballZ_stat.py'
Oct 02 19:32:56 compute-0 sudo[366004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:56 compute-0 python3.9[366006]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:32:56 compute-0 sudo[366004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:57 compute-0 sudo[366158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcccvuewuwsdxzyxsijjqxojafwveqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433576.8692071-484-28006639616156/AnsiballZ_file.py'
Oct 02 19:32:57 compute-0 sudo[366158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:57 compute-0 python3.9[366160]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:57 compute-0 sudo[366158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:57 compute-0 ceph-mon[191910]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:58 compute-0 sudo[366309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvtfqvbnmnwupspgxiyuaeslsvficfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433577.7544482-484-191493785716811/AnsiballZ_copy.py'
Oct 02 19:32:58 compute-0 sudo[366309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:58 compute-0 python3.9[366311]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433577.7544482-484-191493785716811/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:32:58 compute-0 sudo[366309]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:59 compute-0 podman[157186]: time="2025-10-02T19:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:32:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:32:59 compute-0 sudo[366385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqpfdhraikyxnzictrtteucbjepbtbvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433577.7544482-484-191493785716811/AnsiballZ_systemd.py'
Oct 02 19:32:59 compute-0 sudo[366385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8535 "" "Go-http-client/1.1"
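The two GET requests above are the libpod REST API answering over Podman's local Unix socket; the `@` in the access-log field is the socket peer. A minimal sketch of issuing the same container-list query from Python, assuming the default rootful socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix-domain socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")   # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for ctr in containers:
        print(ctr["Id"][:12], ctr["Names"])
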
Oct 02 19:32:59 compute-0 ceph-mon[191910]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:32:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:00 compute-0 python3.9[366387]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:33:00 compute-0 sudo[366385]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:00 compute-0 sudo[366539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlhzklabgdctxipwfroysuaodocisui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433580.4017549-504-232641282038820/AnsiballZ_systemd.py'
Oct 02 19:33:00 compute-0 sudo[366539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:01 compute-0 python3.9[366541]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:33:01 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.284 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.386 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.386 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.387 17 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [17]
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.387 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Oct 02 19:33:01 compute-0 virtqemud[153606]: End of file while reading data: Input/output error
Oct 02 19:33:01 compute-0 virtqemud[153606]: End of file while reading data: Input/output error
Oct 02 19:33:01 compute-0 ceilometer_agent_compute[153674]: 2025-10-02 19:33:01.403 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
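The SIGTERM trace above is cotyledon's normal graceful shutdown: the master process catches the signal, forwards SIGTERM to each worker (AgentManager, AgentHeartBeatManager), waits for them, and logs "Shutdown finish". A minimal sketch of a service wired up the same way; the Poller class is invented for illustration and says nothing about ceilometer's real AgentManager:

    import time
    import cotyledon

    class Poller(cotyledon.Service):
        """Invented worker; cotyledon delivers SIGTERM as a terminate() call."""
        def __init__(self, worker_id):
            super().__init__(worker_id)
            self._running = True

        def run(self):
            while self._running:
                time.sleep(1)

        def terminate(self):
            # Logged upstream as "Caught SIGTERM signal, graceful exiting ..."
            self._running = False

    sm = cotyledon.ServiceManager()
    sm.add(Poller, workers=1)
    sm.run()   # master: forwards signals, waits, then logs "Shutdown finish"
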
Oct 02 19:33:01 compute-0 openstack_network_exporter[159337]: ERROR   19:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:33:01 compute-0 openstack_network_exporter[159337]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:01 compute-0 openstack_network_exporter[159337]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:01 compute-0 openstack_network_exporter[159337]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:33:01 compute-0 openstack_network_exporter[159337]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:33:01 compute-0 systemd[1]: libpod-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:33:01 compute-0 podman[366545]: 2025-10-02 19:33:01.625239077 +0000 UTC m=+0.400615235 container died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:33:01 compute-0 systemd[1]: libpod-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Consumed 4.107s CPU time.
Oct 02 19:33:01 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.timer: Deactivated successfully.
Oct 02 19:33:01 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.
Oct 02 19:33:01 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: Failed to open /run/systemd/transient/b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-929a170e456c76d.service: No such file or directory
Oct 02 19:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de-merged.mount: Deactivated successfully.
Oct 02 19:33:01 compute-0 podman[366545]: 2025-10-02 19:33:01.711490992 +0000 UTC m=+0.486867130 container cleanup b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:33:01 compute-0 podman[366545]: ceilometer_agent_compute
Oct 02 19:33:01 compute-0 podman[366572]: ceilometer_agent_compute
Oct 02 19:33:01 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct 02 19:33:01 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Oct 02 19:33:01 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct 02 19:33:01 compute-0 ceph-mon[191910]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c137816fdf11fec8535861d16e376952678cff0dca35dc89ae89fcade482c6de/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.
Oct 02 19:33:02 compute-0 podman[366584]: 2025-10-02 19:33:02.008024236 +0000 UTC m=+0.153711359 container init b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + sudo -E kolla_set_configs
Oct 02 19:33:02 compute-0 podman[366584]: 2025-10-02 19:33:02.039762924 +0000 UTC m=+0.185450027 container start b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 19:33:02 compute-0 podman[366584]: ceilometer_agent_compute
Oct 02 19:33:02 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct 02 19:33:02 compute-0 sudo[366604]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:33:02 compute-0 sudo[366539]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:02 compute-0 sudo[366604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:33:02 compute-0 podman[366605]: 2025-10-02 19:33:02.145030667 +0000 UTC m=+0.090942251 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:33:02 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-6061c4c565629b81.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:33:02 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-6061c4c565629b81.service: Failed with result 'exit-code'.
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Validating config file
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Copying service configuration files
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: INFO:__main__:Writing out command to execute
Oct 02 19:33:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:02 compute-0 sudo[366604]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: ++ cat /run_command
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + ARGS=
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + sudo kolla_copy_cacerts
Oct 02 19:33:02 compute-0 sudo[366649]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:33:02 compute-0 sudo[366649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:33:02 compute-0 sudo[366649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + [[ ! -n '' ]]
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + . kolla_extend_start
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + umask 0022
Oct 02 19:33:02 compute-0 ceilometer_agent_compute[366598]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
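The startup trace above is kolla's usual sequence: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies each source to its dest under the COPY_ALWAYS strategy, writes the service command to /run_command, and kolla_start execs it. An illustrative sketch of that copy step, not the actual set_configs.py (it skips ownership, globbing, and error handling):

    import json
    import os
    import shutil

    # Field names follow the kolla config.json format seen in this log.
    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    # COPY_ALWAYS: delete and re-copy every target on each container start,
    # matching the Deleting/Copying/Setting permission lines above.
    for item in cfg.get("config_files", []):
        if os.path.exists(item["dest"]):
            os.remove(item["dest"])
        shutil.copy(item["source"], item["dest"])
        os.chmod(item["dest"], int(item.get("perm", "0600"), 8))

    # kolla_start later does: CMD=$(cat /run_command); exec $CMD
    with open("/run_command", "w") as f:
        f.write(cfg["command"])
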
Oct 02 19:33:02 compute-0 sudo[366777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glkdqnlslamtxklipnhkcsayakvcmeow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433582.3356538-512-36800046584650/AnsiballZ_stat.py'
Oct 02 19:33:02 compute-0 sudo[366777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:02 compute-0 python3.9[366779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:33:02 compute-0 sudo[366777]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:03 compute-0 sudo[366855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qchubcdqzakkfhjjtszpifowhyfoatom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433582.3356538-512-36800046584650/AnsiballZ_file.py'
Oct 02 19:33:03 compute-0 sudo[366855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:03 compute-0 python3.9[366857]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:33:03
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.meta', 'vms']
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:33:03 compute-0 sudo[366855]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:03 compute-0 ceph-mon[191910]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:33:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.880 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.881 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.881 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.882 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.882 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.883 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.883 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.883 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.884 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.884 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.884 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.884 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.885 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.886 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.886 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.886 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.886 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.887 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.887 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.887 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.887 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.888 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.888 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.888 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.889 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.889 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.889 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.890 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.890 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.890 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.891 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.891 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.891 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.891 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.892 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.892 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.892 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.893 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.893 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.893 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.893 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.894 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.894 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.894 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.894 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.895 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.895 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.895 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.895 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.896 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.896 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.896 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.896 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.896 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.897 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.897 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.897 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.897 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.898 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.898 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.898 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.899 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.899 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.899 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.900 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.900 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.900 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.900 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.900 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.901 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.901 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.901 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.901 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.902 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.902 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.902 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.902 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.902 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.903 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.903 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.903 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.904 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.904 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.904 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.904 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.905 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.905 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.905 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.906 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.907 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.908 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.908 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.908 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.908 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.909 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.909 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.909 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.909 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.909 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.910 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.910 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.910 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.911 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.912 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.912 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.912 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.912 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.912 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.913 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.917 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.917 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.917 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.917 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.941 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.942 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.943 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.944 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.945 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.946 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.947 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.948 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.949 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.950 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.951 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.952 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.953 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.954 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.955 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.956 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.957 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.957 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.957 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.959 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.961 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.963 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Oct 02 19:33:03 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:03.989 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.003 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollster configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.004 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.004 14 INFO ceilometer.polling.manager [-] No dynamic pollster files found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:33:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.224 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.225 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.226 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.226 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.226 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.226 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.226 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.227 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.228 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.229 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.230 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.231 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.232 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.233 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.234 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.235 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.236 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.237 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.238 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.239 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.240 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.241 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.242 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.243 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.244 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.245 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.246 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.251 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
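The dict logged by load_config is the parsed polling configuration; it corresponds to a polling.yaml of the following shape. A round-trip sketch with PyYAML, the file contents inferred from the parsed dict above:

import yaml

POLLING_YAML = """\
sources:
  - name: pollsters
    interval: 120
    meters:
      - power.state
      - cpu
      - memory.usage
      - 'disk.*'
      - 'network.*'
"""

assert yaml.safe_load(POLLING_YAML) == {
    'sources': [{'name': 'pollsters', 'interval': 120,
                 'meters': ['power.state', 'cpu', 'memory.usage',
                            'disk.*', 'network.*']}]}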
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.291 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
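Processing with [1] threads means each registered pollster is submitted to a single-worker ThreadPoolExecutor that shares a per-cycle discovery cache, which is the registration pattern logged below. A sketch of that fan-out under illustrative names (ceilometer's own manager carries more state per pollster):

from concurrent.futures import ThreadPoolExecutor

def run_pollster(name, discovery_cache):
    # Discovery fills discovery_cache once per cycle; with no local
    # instances the pollster is skipped, as in the journal lines below.
    resources = discovery_cache.get('local_instances', [])
    if not resources:
        return f'skip {name}: no resources found this cycle'
    return f'polled {name} on {len(resources)} resource(s)'

discovery_cache = {}
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(run_pollster, name, discovery_cache)
               for name in ('cpu', 'memory.usage', 'power.state')]
    for future in futures:
        print(future.result())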
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.294 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34334a70b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:33:04.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
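Every pollster in this cycle was skipped for the same reason: the shared local_instances discovery, run over the read-only libvirt connection to qemu:///system opened above, returned no domains because no guests are running on this node yet. A sketch of that discovery step with the libvirt-python binding:

import libvirt

# Read-only connection, matching "Connecting to libvirt: qemu:///system".
conn = libvirt.openReadOnly('qemu:///system')
try:
    domains = conn.listAllDomains()
    print(f'{len(domains)} local instance(s) discovered')
    for dom in domains:
        print(dom.UUIDString(), dom.name())
finally:
    conn.close()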
Oct 02 19:33:04 compute-0 rsyslogd[187702]: imjournal from <compute-0:ceilometer_agent_compute>: begin to drop messages due to rate-limiting
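The imjournal rate limiting kicks in here because of the per-pollster DEBUG burst above. A sketch that quantifies such a burst with the python-systemd journal bindings, counting messages per syslog identifier over the last minute:

from collections import Counter
from datetime import datetime, timedelta

from systemd import journal

reader = journal.Reader()
reader.seek_realtime(datetime.now() - timedelta(minutes=1))
# Each entry is a dict of journal fields; tally by sending identifier.
counts = Counter(entry.get('SYSLOG_IDENTIFIER', '?') for entry in reader)
print(counts.most_common(5))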
Oct 02 19:33:04 compute-0 sudo[367020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjgnlghtunynsadcqikxznwbnryrazk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433584.032008-526-34818392941721/AnsiballZ_container_config_data.py'
Oct 02 19:33:04 compute-0 sudo[367020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:04 compute-0 python3.9[367022]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct 02 19:33:04 compute-0 sudo[367020]: pam_unix(sudo:session): session closed for user root
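ansible-container_config_data collects the JSON container definitions matching config_pattern under config_path. A hedged approximation of that collection step (the real module lives in edpm-ansible; the paths below mirror the invocation logged above):

import glob
import json
import os

config_path = '/var/lib/openstack/config/telemetry'
config_pattern = 'node_exporter.json'

# Read every matching definition into a name -> parsed-JSON mapping.
configs = {}
for path in glob.glob(os.path.join(config_path, config_pattern)):
    with open(path) as handle:
        configs[os.path.basename(path)] = json.load(handle)
print(json.dumps(configs, indent=2))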
Oct 02 19:33:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:05 compute-0 ceph-mon[191910]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:06 compute-0 sudo[367172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koekhjnkidolfkwhcdllgkprjkvhufku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433585.7905664-535-3960024741458/AnsiballZ_container_config_hash.py'
Oct 02 19:33:06 compute-0 sudo[367172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:06 compute-0 python3.9[367174]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:33:06 compute-0 sudo[367172]: pam_unix(sudo:session): session closed for user root
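ansible-container_config_hash derives a digest for each config tree under config_vol_prefix so that changed configuration forces a container restart. The module's exact scheme is its own; a sketch of the idea:

import hashlib
import os

def tree_hash(root):
    # Walk the tree deterministically and fold paths plus contents
    # into one digest, so any changed file changes the hash.
    digest = hashlib.sha256()
    for dirpath, _dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(path.encode())
            with open(path, 'rb') as handle:
                digest.update(handle.read())
    return digest.hexdigest()

print(tree_hash('/var/lib/config-data'))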
Oct 02 19:33:07 compute-0 sudo[367324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqdzseppuxwvxhclduuddbwibyotzmbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433586.9148803-545-84972449092559/AnsiballZ_edpm_container_manage.py'
Oct 02 19:33:07 compute-0 sudo[367324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:07 compute-0 python3[367326]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:33:07 compute-0 ceph-mon[191910]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:08 compute-0 python3[367326]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",
                                                     "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",
                                                     "RepoTags": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",
                                                          "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2022-11-29T19:06:14.987394068Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9100/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/node_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          }
                                                     },
                                                     "Version": "19.03.8",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 23851788,
                                                     "VirtualSize": 23851788,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",
                                                               "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",
                                                               "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2022-10-26T06:30:33.700079457Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "
                                                          },
                                                          {
                                                               "created": "2022-10-26T06:30:33.794221299Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:54.845364304Z",
                                                               "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:55.54866664Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.622645057Z",
                                                               "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.810765105Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.990897895Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG OS=linux",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.358293759Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.630644274Z",
                                                               "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.79596292Z",
                                                               "created_by": "/bin/sh -c #(nop)  USER nobody",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.987394068Z",
                                                               "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ]
                                                }
                                           ]
                                           : quay.io/prometheus/node-exporter:v1.5.0
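The JSON above is podman's image-inspect record for node-exporter v1.5.0, captured by the module's PODMAN-CONTAINER-DEBUG logging. The equivalent standalone query, sketched:

import json
import subprocess

output = subprocess.run(
    ['podman', 'image', 'inspect',
     'quay.io/prometheus/node-exporter:v1.5.0'],
    check=True, capture_output=True, text=True).stdout
# inspect returns a list with one record per image reference.
image = json.loads(output)[0]
print(image['Id'][:12], image['RepoTags'], image['Config']['Entrypoint'])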
Oct 02 19:33:08 compute-0 systemd[1]: libpod-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Deactivated successfully.
Oct 02 19:33:08 compute-0 systemd[1]: libpod-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.scope: Consumed 6.125s CPU time.
Oct 02 19:33:08 compute-0 podman[367370]: 2025-10-02 19:33:08.213705833 +0000 UTC m=+0.095140993 container died d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:08 compute-0 systemd[1]: d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.timer: Deactivated successfully.
Oct 02 19:33:08 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1.
Oct 02 19:33:08 compute-0 systemd[1]: d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.service: Failed to open /run/systemd/transient/d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.service: No such file or directory
Oct 02 19:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d013b77d80e4ab5eced721a5416ff9e029a62bd82f80e1313035fb8538e14c1b-merged.mount: Deactivated successfully.
Oct 02 19:33:08 compute-0 podman[367370]: 2025-10-02 19:33:08.301752406 +0000 UTC m=+0.183187526 container cleanup d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:08 compute-0 python3[367326]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Oct 02 19:33:08 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:08 compute-0 podman[367400]: node_exporter
Oct 02 19:33:08 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:08 compute-0 systemd[1]: d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.timer: Failed to open /run/systemd/transient/d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.timer: No such file or directory
Oct 02 19:33:08 compute-0 systemd[1]: d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.service: Failed to open /run/systemd/transient/d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1-291a2ea7be92348a.service: No such file or directory
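
The "<container-id>-<suffix>.timer" and ".service" units in these messages are the transient systemd units podman creates to drive the container healthcheck. Once the container dies, podman tears those units down, so the "Failed to open /run/systemd/transient/..." lines appear to be a harmless race with the unit removal rather than a deployment error by themselves. A sketch that lists any such healthcheck timers still present on a host (assumes systemd and a systemctl binary):

    # List transient podman healthcheck timers, if any are currently active.
    import re
    import subprocess

    out = subprocess.run(
        ["systemctl", "list-timers", "--all", "--no-legend", "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Podman healthcheck timers are named "<64-hex-container-id>-<hash>.timer".
    pattern = re.compile(r"[0-9a-f]{64}-[0-9a-f]+\.timer")
    for line in out.splitlines():
        if pattern.search(line):
            print(line)
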
Oct 02 19:33:08 compute-0 podman[367399]: 2025-10-02 19:33:08.460105768 +0000 UTC m=+0.116175886 container remove d074e14e6223cb13ad9ac57713da150c053735edecc603eb9ac3c601cfa0bbb1 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:33:08 compute-0 python3[367326]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
Oct 02 19:33:08 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Oct 02 19:33:08 compute-0 systemd[1]: Stopped node_exporter container.
Oct 02 19:33:08 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:33:08 compute-0 podman[367425]: 2025-10-02 19:33:08.600666113 +0000 UTC m=+0.096349875 container create fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:33:08 compute-0 podman[367425]: 2025-10-02 19:33:08.556120052 +0000 UTC m=+0.051803874 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 02 19:33:08 compute-0 python3[367326]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
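
The create command above shows the full flag set handed to node_exporter. Its systemd collector keeps only units that match --collector.systemd.unit-include and do not match the default unit-exclude pattern logged a few lines below; both patterns are effectively anchored against the whole unit name. A small sketch of that filtering, with the two patterns copied from the log and the unit names purely illustrative:

    # Sketch of the unit filtering configured by the flags above.
    import re

    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    exclude = re.compile(r".+\.(automount|device|mount|scope|slice)")

    # Illustrative unit names, not taken from this host's unit list.
    units = [
        "edpm_node_exporter.service",
        "ovs-vswitchd.service",
        "virtqemud.service",
        "rsyslog.service",
        "sshd.service",     # not in the include list
        "run-netns.mount",  # .mount suffix is excluded (and not included)
    ]

    for unit in units:
        keep = bool(include.fullmatch(unit)) and not exclude.fullmatch(unit)
        print(f"{unit:32} -> {'scraped' if keep else 'skipped'}")
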
Oct 02 19:33:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d2bd3e0ee3902af053aa12bd6d99d95b49b3ecd0e1439e42793ab6b93b685d/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d2bd3e0ee3902af053aa12bd6d99d95b49b3ecd0e1439e42793ab6b93b685d/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.
Oct 02 19:33:08 compute-0 podman[367434]: 2025-10-02 19:33:08.859043647 +0000 UTC m=+0.289695732 container init fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.893Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.893Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.893Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:33:08 compute-0 podman[367434]: 2025-10-02 19:33:08.893498198 +0000 UTC m=+0.324150243 container start fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.893Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
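
Two details are visible here: the diskstats collector excludes partitions and pseudo devices so only whole disks are reported, and the udev lookup fails because /run/udev/data is not mounted into the container, so the collector drops udev-provided device properties and carries on. A quick check of the logged device-exclude pattern (device names are illustrative):

    # Check the --collector.diskstats.device-exclude pattern logged above:
    # it drops partitions and pseudo devices, keeping whole disks.
    import re

    device_exclude = re.compile(r"^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$")

    for dev in ["sda", "sda1", "vda", "vda2", "nvme0n1",
                "nvme0n1p1", "ram0", "loop3", "fd0"]:
        print(f"{dev:10} -> {'excluded' if device_exclude.match(dev) else 'reported'}")
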
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.894Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.895Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:33:08 compute-0 node_exporter[367460]: ts=2025-10-02T19:33:08.896Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
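
At this point the exporter is serving metrics over HTTPS on port 9100, with TLS settings taken from the mounted node_exporter.yaml web config. A minimal smoke test, assuming the exporter is reachable on localhost and the web config does not require client certificates; verification is disabled here only because the deployment uses an internal CA:

    # Local smoke test of the TLS listener reported above.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # acceptable for a local smoke test only

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx, timeout=5) as resp:
        body = resp.read().decode()
        print(resp.status, "-", len(body.splitlines()), "metric lines")
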
Oct 02 19:33:08 compute-0 podman[367434]: node_exporter
Oct 02 19:33:08 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:33:08 compute-0 python3[367326]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Oct 02 19:33:08 compute-0 podman[367466]: 2025-10-02 19:33:08.961919186 +0000 UTC m=+0.171795122 container health_status 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:33:08 compute-0 podman[367480]: 2025-10-02 19:33:08.968529563 +0000 UTC m=+0.149560068 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, vcs-type=git)
Oct 02 19:33:08 compute-0 podman[367467]: 2025-10-02 19:33:08.968601505 +0000 UTC m=+0.163481250 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 02 19:33:08 compute-0 podman[367468]: 2025-10-02 19:33:08.97590991 +0000 UTC m=+0.142289953 container health_status ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:33:08 compute-0 podman[367465]: 2025-10-02 19:33:08.980333358 +0000 UTC m=+0.187966224 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:33:08 compute-0 podman[367488]: 2025-10-02 19:33:08.991046584 +0000 UTC m=+0.162819271 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:33:08 compute-0 podman[367481]: 2025-10-02 19:33:08.993635094 +0000 UTC m=+0.153271097 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:33:09 compute-0 podman[367532]: 2025-10-02 19:33:09.016468664 +0000 UTC m=+0.110260778 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:33:09 compute-0 sudo[367324]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:09 compute-0 ceph-mon[191910]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:09 compute-0 sudo[367797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haqkizznaudoqkaltfsroawkwyimyqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433589.4136877-553-82838864509278/AnsiballZ_stat.py'
Oct 02 19:33:10 compute-0 sudo[367797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:10 compute-0 python3.9[367799]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:33:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:10 compute-0 sudo[367797]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:11 compute-0 sudo[367951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpnavkcgybcloehwmijptlbbdhkklsul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433590.644704-562-11235526097301/AnsiballZ_file.py'
Oct 02 19:33:11 compute-0 sudo[367951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:11 compute-0 python3.9[367953]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:11 compute-0 sudo[367951]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:11 compute-0 ceph-mon[191910]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:12 compute-0 sudo[368102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hodqkposvwzadbauvlrtsjgojkajepsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433591.5928373-562-27223082753289/AnsiballZ_copy.py'
Oct 02 19:33:12 compute-0 sudo[368102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:33:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
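
The pg_autoscaler lines above all follow the same arithmetic: pg target = capacity ratio x bias x a per-cluster base, then quantization to a power of two subject to per-pool floors. The logged numbers are consistent with a base of 300 (for example, 7.185749983720779e-06 x 1.0 x 300 ~= 0.0021557). A small reproduction of that arithmetic; the base of 300 is inferred from the ratios in this log, and the quantization step is deliberately omitted because the full autoscaler logic also applies per-pool minimums:

    # Reproduce the pg_autoscaler targets logged above from ratio and bias.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }

    BASE = 300  # inferred: target / (ratio * bias) ~= 300 for every pool here

    for name, (ratio, bias) in pools.items():
        target = ratio * bias * BASE
        print(f"{name:20} ratio={ratio:.3e} bias={bias} -> pg target {target:.6f}")
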
Oct 02 19:33:12 compute-0 python3.9[368104]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433591.5928373-562-27223082753289/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:12 compute-0 sudo[368102]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:13 compute-0 sudo[368178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgovwvpcweuvmuxbgjkfzjfexvwjkzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433591.5928373-562-27223082753289/AnsiballZ_systemd.py'
Oct 02 19:33:13 compute-0 sudo[368178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:13 compute-0 python3.9[368180]: ansible-systemd Invoked with state=started name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:33:13 compute-0 sudo[368178]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:13 compute-0 ceph-mon[191910]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:14 compute-0 sudo[368332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jotoavrniyfxuitgtsaamalwqqfzkpiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433593.7299206-582-66607372150990/AnsiballZ_systemd.py'
Oct 02 19:33:14 compute-0 sudo[368332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:14 compute-0 python3.9[368334]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:33:14 compute-0 systemd[1]: Stopping node_exporter container...
Oct 02 19:33:14 compute-0 systemd[1]: libpod-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope: Deactivated successfully.
Oct 02 19:33:14 compute-0 podman[368338]: 2025-10-02 19:33:14.753864608 +0000 UTC m=+0.083390989 container died fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:14 compute-0 systemd[1]: fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2-254c8647a3952934.timer: Deactivated successfully.
Oct 02 19:33:14 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.
Oct 02 19:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d2bd3e0ee3902af053aa12bd6d99d95b49b3ecd0e1439e42793ab6b93b685d-merged.mount: Deactivated successfully.
Oct 02 19:33:14 compute-0 podman[368338]: 2025-10-02 19:33:14.802642832 +0000 UTC m=+0.132169263 container cleanup fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:33:14 compute-0 podman[368338]: node_exporter
Oct 02 19:33:14 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:14 compute-0 podman[368366]: node_exporter
Oct 02 19:33:14 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:14 compute-0 systemd[1]: Stopped node_exporter container.
Oct 02 19:33:14 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:33:14 compute-0 ceph-mon[191910]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d2bd3e0ee3902af053aa12bd6d99d95b49b3ecd0e1439e42793ab6b93b685d/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d2bd3e0ee3902af053aa12bd6d99d95b49b3ecd0e1439e42793ab6b93b685d/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.
Oct 02 19:33:15 compute-0 podman[368378]: 2025-10-02 19:33:15.13600217 +0000 UTC m=+0.195338231 container init fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.159Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.160Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.160Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.169Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.169Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.169Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.170Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.170Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.170Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.171Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.172Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:33:15 compute-0 podman[368378]: 2025-10-02 19:33:15.172890295 +0000 UTC m=+0.232226346 container start fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.173Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:33:15 compute-0 node_exporter[368393]: ts=2025-10-02T19:33:15.174Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct 02 19:33:15 compute-0 podman[368378]: node_exporter
Oct 02 19:33:15 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:33:15 compute-0 sudo[368332]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:15 compute-0 podman[368403]: 2025-10-02 19:33:15.278008754 +0000 UTC m=+0.093475859 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:33:16 compute-0 sudo[368573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrdxjhvtrggiyixyrtdojgzukkhfclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433595.4876966-590-1343630476303/AnsiballZ_stat.py'
Oct 02 19:33:16 compute-0 sudo[368573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:16 compute-0 python3.9[368575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:33:16 compute-0 sudo[368573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:17 compute-0 ceph-mon[191910]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:17 compute-0 sudo[368651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whshtynhwzqciklcplbyxhyvaocyiqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433595.4876966-590-1343630476303/AnsiballZ_file.py'
Oct 02 19:33:17 compute-0 sudo[368651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:18 compute-0 python3.9[368653]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/podman_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/podman_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:33:18 compute-0 sudo[368651]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:19 compute-0 ceph-mon[191910]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:19 compute-0 sudo[368803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozlsldhddpeljkychajvpwqkmtqmuaik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433598.5940828-604-189522042848830/AnsiballZ_container_config_data.py'
Oct 02 19:33:19 compute-0 sudo[368803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:19 compute-0 python3.9[368805]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 02 19:33:19 compute-0 sudo[368803]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:20 compute-0 sudo[368956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jerxxxkacsebraqsgljvrspqoosiuxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433600.1071494-613-164300338449052/AnsiballZ_container_config_hash.py'
Oct 02 19:33:20 compute-0 sudo[368956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:20 compute-0 python3.9[368958]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:33:20 compute-0 sudo[368956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:21 compute-0 ceph-mon[191910]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:21 compute-0 sudo[369108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djynxgqmwodiphojlphtgimwnaeczbmx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433601.237191-623-32203501062359/AnsiballZ_edpm_container_manage.py'
Oct 02 19:33:21 compute-0 sudo[369108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:22 compute-0 python3[369110]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:33:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:22 compute-0 python3[369110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815",
                                                     "Digest": "sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                     "RepoTags": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-03-17T01:45:00.251170784Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9882/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/podman_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 33863535,
                                                     "VirtualSize": 33863535,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1/diff:/var/lib/containers/storage/overlay/1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed",
                                                               "sha256:6b83872188a9e8912bee1d43add5e9bc518601b02a14a364c0da43b0d59acf33",
                                                               "sha256:7a73cdcd46b4e3c3a632bae42ad152935f204b50dd02f0a46070f81446516318"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2023-12-05T20:23:06.467739954Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ee9bb8755ccbdd689b434d9b4ac7518e972699604ecda33e4ddc2a15d2831443 in / "
                                                          },
                                                          {
                                                               "created": "2023-12-05T20:23:06.550971969Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "COPY /rootfs / # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "LABEL maintainer=Navid Yaghoobi <navidys@fedoraproject.org>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETPLATFORM",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETOS",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETARCH",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "COPY ./bin/remote/prometheus-podman-exporter-amd64 /bin/podman_exporter # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "EXPOSE map[9882/tcp:{}]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "USER nobody",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ENTRYPOINT [\"/bin/podman_exporter\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ]
                                                }
                                           ]
                                           : quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:33:22 compute-0 podman[157186]: @ - - [02/Oct/2025:19:02:49 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3188427 "" "Go-http-client/1.1"
Oct 02 19:33:22 compute-0 systemd[1]: libpod-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Deactivated successfully.
Oct 02 19:33:22 compute-0 systemd[1]: libpod-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.scope: Consumed 4.114s CPU time.
Oct 02 19:33:22 compute-0 podman[369154]: 2025-10-02 19:33:22.59096163 +0000 UTC m=+0.089444831 container died ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:22 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.timer: Deactivated successfully.
Oct 02 19:33:22 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f.
Oct 02 19:33:22 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.service: Failed to open /run/systemd/transient/ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.service: No such file or directory
Oct 02 19:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01cdb2b5d837c9752b51f05dcf62960f0de0a536f40fb083d0036db55f0da90-merged.mount: Deactivated successfully.
Oct 02 19:33:22 compute-0 podman[369154]: 2025-10-02 19:33:22.676804814 +0000 UTC m=+0.175287995 container cleanup ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:33:22 compute-0 python3[369110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop podman_exporter
Oct 02 19:33:22 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:22 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.timer: Failed to open /run/systemd/transient/ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.timer: No such file or directory
Oct 02 19:33:22 compute-0 systemd[1]: ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.service: Failed to open /run/systemd/transient/ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f-5a628c5f05488a2f.service: No such file or directory
Oct 02 19:33:22 compute-0 podman[369181]: 2025-10-02 19:33:22.833811519 +0000 UTC m=+0.112202049 container remove ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:33:22 compute-0 podman[369182]: Error: no container with ID ce6c93c093e2aaccb5944d9b72ac62c44e8b5b56851d2547e708114b62cb806f found in database: no such container
Oct 02 19:33:22 compute-0 systemd[1]: edpm_podman_exporter.service: Control process exited, code=exited, status=125/n/a
Oct 02 19:33:22 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:22 compute-0 python3[369110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force podman_exporter
Oct 02 19:33:22 compute-0 podman[369204]: 2025-10-02 19:33:22.960153565 +0000 UTC m=+0.094958879 container create 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter)
Oct 02 19:33:22 compute-0 podman[369204]: 2025-10-02 19:33:22.910549739 +0000 UTC m=+0.045355063 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:33:22 compute-0 python3[369110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 02 19:33:23 compute-0 systemd[1]: edpm_podman_exporter.service: Scheduled restart job, restart counter is at 1.
Oct 02 19:33:23 compute-0 systemd[1]: Stopped podman_exporter container.
Oct 02 19:33:23 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:33:23 compute-0 systemd[1]: Started libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope.
Oct 02 19:33:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac65a8b43e42480e30c538ff41b8a7be8ae2d155e84a87eab14e602cc4f5095/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac65a8b43e42480e30c538ff41b8a7be8ae2d155e84a87eab14e602cc4f5095/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.
Oct 02 19:33:23 compute-0 podman[369217]: 2025-10-02 19:33:23.255317232 +0000 UTC m=+0.253748762 container init 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.289Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.290Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:33:23 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:23 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.290Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.290Z caller=handler.go:105 level=info collector=container
Oct 02 19:33:23 compute-0 podman[157186]: time="2025-10-02T19:33:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:33:23 compute-0 podman[369217]: 2025-10-02 19:33:23.300667524 +0000 UTC m=+0.299099024 container start 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:23 compute-0 python3[369110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start podman_exporter
Oct 02 19:33:23 compute-0 ceph-mon[191910]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:23 compute-0 podman[369226]: podman_exporter
Oct 02 19:33:23 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:33:23 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 45757 "" "Go-http-client/1.1"
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.418Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:33:23 compute-0 podman[369249]: 2025-10-02 19:33:23.41690186 +0000 UTC m=+0.102475269 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.419Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:33:23 compute-0 podman_exporter[369240]: ts=2025-10-02T19:33:23.419Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 02 19:33:23 compute-0 systemd[1]: 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431-22b02e066b8f56dd.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:33:23 compute-0 systemd[1]: 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431-22b02e066b8f56dd.service: Failed with result 'exit-code'.
Oct 02 19:33:23 compute-0 sudo[369108]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:24 compute-0 sudo[369444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzusevsobzdalxjlegyqegdevejrlvun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433603.868308-631-157807072520724/AnsiballZ_stat.py'
Oct 02 19:33:24 compute-0 sudo[369444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:24 compute-0 python3.9[369446]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:33:24 compute-0 sudo[369444]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:24 compute-0 podman[369447]: 2025-10-02 19:33:24.708595587 +0000 UTC m=+0.131082374 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:33:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:25 compute-0 ceph-mon[191910]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:25 compute-0 sudo[369615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyisxntyhjvdlonnysyrjcpmciffwgde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433605.0546207-640-167812849531541/AnsiballZ_file.py'
Oct 02 19:33:25 compute-0 sudo[369615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:25 compute-0 python3.9[369617]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:25 compute-0 sudo[369615]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:26 compute-0 sudo[369766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njvmnlymwbqxgpfbrlatzsbpkltmgngm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433606.030251-640-406766814832/AnsiballZ_copy.py'
Oct 02 19:33:26 compute-0 sudo[369766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:27 compute-0 python3.9[369768]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433606.030251-640-406766814832/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:27 compute-0 sudo[369766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:27 compute-0 ceph-mon[191910]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:27 compute-0 sudo[369842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrgazlgnygsqgihrvtdqqrpejgeefpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433606.030251-640-406766814832/AnsiballZ_systemd.py'
Oct 02 19:33:27 compute-0 sudo[369842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:27 compute-0 python3.9[369844]: ansible-systemd Invoked with state=started name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:33:27 compute-0 sudo[369842]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:28 compute-0 sudo[369905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:28 compute-0 sudo[369905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:28 compute-0 sudo[369905]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:28 compute-0 sudo[369956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:33:28 compute-0 sudo[369956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:28 compute-0 sudo[369956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:28 compute-0 sudo[370003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:28 compute-0 sudo[370003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:28 compute-0 sudo[370003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:28 compute-0 sudo[370054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:33:28 compute-0 sudo[370054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:28 compute-0 sudo[370092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpfusktelzwnujrjfpuvuvqbgzonbiku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433608.2594328-660-105162579232799/AnsiballZ_systemd.py'
Oct 02 19:33:28 compute-0 sudo[370092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:29 compute-0 python3.9[370098]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:33:29 compute-0 systemd[1]: Stopping podman_exporter container...
Oct 02 19:33:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:23 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3549 "" "Go-http-client/1.1"
Oct 02 19:33:29 compute-0 systemd[1]: libpod-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:33:29 compute-0 podman[370119]: 2025-10-02 19:33:29.322611673 +0000 UTC m=+0.075705594 container died 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:29 compute-0 systemd[1]: 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431-22b02e066b8f56dd.timer: Deactivated successfully.
Oct 02 19:33:29 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.
Oct 02 19:33:29 compute-0 ceph-mon[191910]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac65a8b43e42480e30c538ff41b8a7be8ae2d155e84a87eab14e602cc4f5095-merged.mount: Deactivated successfully.
Oct 02 19:33:29 compute-0 podman[370119]: 2025-10-02 19:33:29.397651138 +0000 UTC m=+0.150745029 container cleanup 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:29 compute-0 podman[370119]: podman_exporter
Oct 02 19:33:29 compute-0 sudo[370054]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:29 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:29 compute-0 systemd[1]: libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8813f2d2-4aaa-4c9a-aaab-bb93095b6b6b does not exist
Oct 02 19:33:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev acc2d45d-6969-48b6-b067-cceb544115ea does not exist
Oct 02 19:33:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ff287527-794f-4b41-a410-d1931af61174 does not exist
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:33:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:33:29 compute-0 podman[370164]: podman_exporter
Oct 02 19:33:29 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:29 compute-0 systemd[1]: Stopped podman_exporter container.
Oct 02 19:33:29 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:33:29 compute-0 sudo[370177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:29 compute-0 sudo[370177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:29 compute-0 sudo[370177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac65a8b43e42480e30c538ff41b8a7be8ae2d155e84a87eab14e602cc4f5095/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac65a8b43e42480e30c538ff41b8a7be8ae2d155e84a87eab14e602cc4f5095/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:29 compute-0 sudo[370218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:33:29 compute-0 sudo[370218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:29 compute-0 sudo[370218]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.
Oct 02 19:33:29 compute-0 podman[370181]: 2025-10-02 19:33:29.763010991 +0000 UTC m=+0.213241399 container init 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.796Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.796Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.797Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.797Z caller=handler.go:105 level=info collector=container
Oct 02 19:33:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:29 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:33:29 compute-0 podman[157186]: time="2025-10-02T19:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:33:29 compute-0 podman[370181]: 2025-10-02 19:33:29.814534608 +0000 UTC m=+0.264765036 container start 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:33:29 compute-0 podman[370181]: podman_exporter
Oct 02 19:33:29 compute-0 sudo[370247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:29 compute-0 sudo[370247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:29 compute-0 sudo[370247]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:29 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:33:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 45753 "" "Go-http-client/1.1"
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.869Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.870Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:33:29 compute-0 podman_exporter[370219]: ts=2025-10-02T19:33:29.870Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 02 19:33:29 compute-0 sudo[370092]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:29 compute-0 podman[370274]: 2025-10-02 19:33:29.920222272 +0000 UTC m=+0.094973768 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:33:29 compute-0 sudo[370286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:33:29 compute-0 sudo[370286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:33:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.446463774 +0000 UTC m=+0.060218220 container create 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:33:30 compute-0 systemd[1]: Started libpod-conmon-82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce.scope.
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.42163959 +0000 UTC m=+0.035394036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.577505625 +0000 UTC m=+0.191260061 container init 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.597281494 +0000 UTC m=+0.211035910 container start 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.602045911 +0000 UTC m=+0.215800327 container attach 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:33:30 compute-0 thirsty_bouman[370502]: 167 167
Oct 02 19:33:30 compute-0 systemd[1]: libpod-82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce.scope: Deactivated successfully.
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.60574028 +0000 UTC m=+0.219494736 container died 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e1483d817f3af68a29787f2837c714ea15314c1893e27611ea2852f42f9f8c3-merged.mount: Deactivated successfully.
Oct 02 19:33:30 compute-0 podman[370452]: 2025-10-02 19:33:30.665414914 +0000 UTC m=+0.279169330 container remove 82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:33:30 compute-0 sudo[370540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqyucnhdauqvhceisvuzzelmyzkycmii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433610.1532154-668-34717665119448/AnsiballZ_stat.py'
Oct 02 19:33:30 compute-0 sudo[370540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:30 compute-0 systemd[1]: libpod-conmon-82620f2ec5ad40ad354a54df111705a1faadcc0043e79c3087adb1f4a82cefce.scope: Deactivated successfully.
Oct 02 19:33:30 compute-0 python3.9[370546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:33:30 compute-0 sudo[370540]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:30 compute-0 podman[370554]: 2025-10-02 19:33:30.93392125 +0000 UTC m=+0.074044360 container create 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:33:30 compute-0 systemd[1]: Started libpod-conmon-60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf.scope.
Oct 02 19:33:31 compute-0 podman[370554]: 2025-10-02 19:33:30.912839946 +0000 UTC m=+0.052963076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:31 compute-0 podman[370554]: 2025-10-02 19:33:31.04320793 +0000 UTC m=+0.183331070 container init 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:33:31 compute-0 podman[370554]: 2025-10-02 19:33:31.060780069 +0000 UTC m=+0.200903179 container start 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:33:31 compute-0 podman[370554]: 2025-10-02 19:33:31.065162647 +0000 UTC m=+0.205285847 container attach 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 02 19:33:31 compute-0 sudo[370650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsxarvzaduoxwgurvlixosmgnbrkmmjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433610.1532154-668-34717665119448/AnsiballZ_file.py'
Oct 02 19:33:31 compute-0 sudo[370650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: ERROR   19:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:33:31 compute-0 openstack_network_exporter[159337]: 
Oct 02 19:33:31 compute-0 python3.9[370652]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/openstack_network_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:33:31 compute-0 sudo[370650]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:31 compute-0 ceph-mon[191910]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:33:32.278 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:33:32.279 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:33:32.279 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:32 compute-0 infallible_zhukovsky[370590]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:33:32 compute-0 infallible_zhukovsky[370590]: --> relative data size: 1.0
Oct 02 19:33:32 compute-0 infallible_zhukovsky[370590]: --> All data devices are unavailable
Oct 02 19:33:32 compute-0 systemd[1]: libpod-60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf.scope: Deactivated successfully.
Oct 02 19:33:32 compute-0 systemd[1]: libpod-60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf.scope: Consumed 1.352s CPU time.
Oct 02 19:33:32 compute-0 podman[370554]: 2025-10-02 19:33:32.506543303 +0000 UTC m=+1.646666473 container died 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-53fef15d900fa41a10d9d7c4c3400d66a8751889851a96bcc8e64499b93ca655-merged.mount: Deactivated successfully.
Oct 02 19:33:32 compute-0 podman[370554]: 2025-10-02 19:33:32.606935446 +0000 UTC m=+1.747058556 container remove 60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:33:32 compute-0 systemd[1]: libpod-conmon-60cd85dcb176ee6776662ffbda181b19b391298bae23e185032dbc27630080bf.scope: Deactivated successfully.
Oct 02 19:33:32 compute-0 sudo[370286]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:32 compute-0 podman[370762]: 2025-10-02 19:33:32.661143184 +0000 UTC m=+0.103694812 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:33:32 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-6061c4c565629b81.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:33:32 compute-0 systemd[1]: b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca-6061c4c565629b81.service: Failed with result 'exit-code'.
Oct 02 19:33:32 compute-0 sudo[370814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:32 compute-0 sudo[370814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:32 compute-0 sudo[370814]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:32 compute-0 sudo[370889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmhyrbyhasunulsuqgbldcokupmckpdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433612.319248-682-169366396598229/AnsiballZ_container_config_data.py'
Oct 02 19:33:32 compute-0 sudo[370889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:32 compute-0 sudo[370870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:33:32 compute-0 sudo[370870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:32 compute-0 sudo[370870]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:33 compute-0 sudo[370909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:33 compute-0 sudo[370909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:33 compute-0 sudo[370909]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:33 compute-0 python3.9[370901]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 02 19:33:33 compute-0 sudo[370889]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:33 compute-0 sudo[370934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:33:33 compute-0 sudo[370934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:33 compute-0 ceph-mon[191910]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:33:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.789922917 +0000 UTC m=+0.101776321 container create 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.7346621 +0000 UTC m=+0.046515544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:33 compute-0 systemd[1]: Started libpod-conmon-933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40.scope.
Oct 02 19:33:33 compute-0 sudo[371163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpqtftlkozfbcklezngzplpzcljyniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433613.3698847-691-162432132006119/AnsiballZ_container_config_hash.py'
Oct 02 19:33:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:33 compute-0 sudo[371163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.934280404 +0000 UTC m=+0.246133818 container init 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.945789722 +0000 UTC m=+0.257643096 container start 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.955149692 +0000 UTC m=+0.267003066 container attach 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:33:33 compute-0 lucid_shaw[371162]: 167 167
Oct 02 19:33:33 compute-0 systemd[1]: libpod-933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40.scope: Deactivated successfully.
Oct 02 19:33:33 compute-0 podman[371101]: 2025-10-02 19:33:33.957094744 +0000 UTC m=+0.268948118 container died 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a267b192902f285061387bc71437eb09c2be1bb1978ea2fc9b4bd46c2d8500c4-merged.mount: Deactivated successfully.
Oct 02 19:33:34 compute-0 podman[371101]: 2025-10-02 19:33:34.013028068 +0000 UTC m=+0.324881442 container remove 933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shaw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:33:34 compute-0 systemd[1]: libpod-conmon-933cdcd801441421f937070aabe20843a0432cf04b22100d10ce2cba6c325d40.scope: Deactivated successfully.
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.111 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.112 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 python3.9[371167]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.182 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.182 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.182 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.182 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.183 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.183 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.183 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:33:34 compute-0 sudo[371163]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:34 compute-0 podman[371188]: 2025-10-02 19:33:34.213204988 +0000 UTC m=+0.061599147 container create 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:33:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:34 compute-0 systemd[1]: Started libpod-conmon-863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01.scope.
Oct 02 19:33:34 compute-0 podman[371188]: 2025-10-02 19:33:34.194356314 +0000 UTC m=+0.042750503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63404de523e4621e31d3e7e7455227b0a401a6283b0dcae6eee6392c53333652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63404de523e4621e31d3e7e7455227b0a401a6283b0dcae6eee6392c53333652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63404de523e4621e31d3e7e7455227b0a401a6283b0dcae6eee6392c53333652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63404de523e4621e31d3e7e7455227b0a401a6283b0dcae6eee6392c53333652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:34 compute-0 podman[371188]: 2025-10-02 19:33:34.334583371 +0000 UTC m=+0.182977580 container init 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:33:34 compute-0 podman[371188]: 2025-10-02 19:33:34.355585812 +0000 UTC m=+0.203979971 container start 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:33:34 compute-0 podman[371188]: 2025-10-02 19:33:34.360533335 +0000 UTC m=+0.208927504 container attach 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.591 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.592 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.632 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.634 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.635 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.637 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:33:34 compute-0 nova_compute[355794]: 2025-10-02 19:33:34.638 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:35 compute-0 sudo[371379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprfnexmbhfsytxsuxpgpnlgwfdghecv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433614.57272-701-84732699019748/AnsiballZ_edpm_container_manage.py'
Oct 02 19:33:35 compute-0 sudo[371379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]: {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     "0": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "devices": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "/dev/loop3"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             ],
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_name": "ceph_lv0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_size": "21470642176",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "name": "ceph_lv0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "tags": {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_name": "ceph",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.crush_device_class": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.encrypted": "0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_id": "0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.vdo": "0"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             },
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "vg_name": "ceph_vg0"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         }
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     ],
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     "1": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "devices": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "/dev/loop4"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             ],
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_name": "ceph_lv1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_size": "21470642176",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "name": "ceph_lv1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "tags": {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_name": "ceph",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.crush_device_class": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.encrypted": "0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_id": "1",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.vdo": "0"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             },
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "vg_name": "ceph_vg1"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         }
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     ],
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     "2": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "devices": [
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "/dev/loop5"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             ],
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_name": "ceph_lv2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_size": "21470642176",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "name": "ceph_lv2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "tags": {
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.cluster_name": "ceph",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.crush_device_class": "",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.encrypted": "0",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osd_id": "2",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:                 "ceph.vdo": "0"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             },
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "type": "block",
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:             "vg_name": "ceph_vg2"
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:         }
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]:     ]
Oct 02 19:33:35 compute-0 goofy_heisenberg[371218]: }
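[editor's note] The JSON dump above appears to be `ceph-volume lvm list --format json` output captured from the cephadm helper container (name=goofy_heisenberg): a map of OSD id to a list of logical volumes, each carrying its `ceph.*` tags. A minimal Python sketch for consuming it, assuming `report` holds that JSON as a string:

```python
import json

# Sketch: reduce the ceph-volume report shown above
# ({"<osd_id>": [<lv>, ...]}) to an OSD-id -> device map.
def osd_devices(report: str) -> dict:
    out = {}
    for osd_id, lvs in json.loads(report).items():
        for lv in lvs:
            out[osd_id] = {
                "lv_path": lv["lv_path"],                 # e.g. /dev/ceph_vg1/ceph_lv1
                "devices": lv["devices"],                 # e.g. ["/dev/loop4"]
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
    return out
```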
Oct 02 19:33:35 compute-0 systemd[1]: libpod-863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01.scope: Deactivated successfully.
Oct 02 19:33:35 compute-0 podman[371188]: 2025-10-02 19:33:35.165219697 +0000 UTC m=+1.013613896 container died 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-63404de523e4621e31d3e7e7455227b0a401a6283b0dcae6eee6392c53333652-merged.mount: Deactivated successfully.
Oct 02 19:33:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:33:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890408672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.261 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:33:35 compute-0 podman[371188]: 2025-10-02 19:33:35.294956234 +0000 UTC m=+1.143350393 container remove 863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:33:35 compute-0 systemd[1]: libpod-conmon-863ae12b16049f7a54935d505c3ecf42946136985970fd11069147ec30166e01.scope: Deactivated successfully.
Oct 02 19:33:35 compute-0 sudo[370934]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:35 compute-0 python3[371382]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:33:35 compute-0 sudo[371400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:35 compute-0 sudo[371400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:35 compute-0 sudo[371400]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:35 compute-0 ceph-mon[191910]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1890408672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:33:35 compute-0 sudo[371441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:33:35 compute-0 sudo[371441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:35 compute-0 sudo[371441]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.613 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.614 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4548MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.615 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.615 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:35 compute-0 sudo[371479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:35 compute-0 sudo[371479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:35 compute-0 sudo[371479]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:35 compute-0 python3[371382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1",
                                                     "Digest": "sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7",
                                                     "RepoTags": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-08-26T15:52:54.446618393Z",
                                                     "Config": {
                                                          "ExposedPorts": {
                                                               "1981/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci"
                                                          ],
                                                          "Cmd": [
                                                               "/app/openstack-network-exporter"
                                                          ],
                                                          "WorkingDir": "/",
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2025-08-20T13:12:41",
                                                               "com.redhat.component": "ubi9-minimal-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.33.7",
                                                               "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "minimal rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9-minimal",
                                                               "release": "1755695350",
                                                               "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                               "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                               "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.6"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "Red Hat",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 142088877,
                                                     "VirtualSize": 142088877,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/157961e3a1fe369d02893b19044a0e08e15689974ef810b235cb5ec194c7142c/diff:/var/lib/containers/storage/overlay/778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9",
                                                               "sha256:60984b2898b5b4ad1680d36433001b7e2bebb1073775d06b4c2ff80f985caccb",
                                                               "sha256:866ed9f0f685cc1d741f560227443a94926fc22494aa7808be751e7247cda421"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2025-08-20T13:12:41",
                                                          "com.redhat.component": "ubi9-minimal-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.33.7",
                                                          "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "minimal rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9-minimal",
                                                          "release": "1755695350",
                                                          "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                          "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                          "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.6"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2025-08-20T13:14:24.836114247Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.907067406Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL vendor=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.953912498Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL url=\"https://catalog.redhat.com/en/search?searchType=containers\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.99202543Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-minimal-container\"       name=\"ubi9-minimal\"       version=\"9.6\"       distribution-scope=\"public\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.033232759Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.116880439Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of the minimal Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.167988017Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.205286235Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.239930205Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9 Minimal\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.298417937Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.346108994Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"minimal rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.381850293Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.998561869Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:e1f22eafd6489859288910ef7585f9d694693aa84a31ba9d54dea9e7a451abe6 in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.169088157Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:b37d593713ee21ad52a4cd1424dc019a24f7966f85df0ac4b86d234302695328 in /etc/yum.repos.d/. ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.222750062Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.44502305Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /usr/share/buildinfo/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.581849716Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /root/buildinfo/content_manifests/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.902035614Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"build-date\"=\"2025-08-20T13:12:41\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"f4b088292653bbf5ca8188a5e59ffd06a8671d4b\" \"release\"=\"1755695350\""
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:52.889456996Z",
                                                               "created_by": "/bin/sh -c microdnf update -y && rm -rf /var/cache/yum",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.116955892Z",
                                                               "created_by": "/bin/sh -c microdnf install -y iproute && microdnf clean all",
                                                               "comment": "FROM registry.access.redhat.com/ubi9/ubi-minimal:latest"
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.314008349Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:fab61bc60c39fae33dbfa4e382d473ceab94ebaf876018d5034ba62f04740767 in /etc/openstack-network-exporter.yaml ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.407547534Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:be836064c1a23a46d9411cf2aafe0d43f5d498cf2fd92e788160ae2e0f30bb86 in /app/openstack-network-exporter ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.420490087Z",
                                                               "created_by": "/bin/sh -c #(nop) MAINTAINER Red Hat",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.432520013Z",
                                                               "created_by": "/bin/sh -c #(nop) EXPOSE 1981",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.48363818Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/app/openstack-network-exporter\"]",
                                                               "author": "Red Hat",
                                                               "comment": "FROM 688666ea38a8"
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
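[editor's note] The indented block above is the edpm_container_manage debug dump of `podman image inspect` for the exporter image. A short sketch (assuming podman is on PATH) that re-runs the inspect and pulls out the fields visible in the dump:

```python
import json
import subprocess

IMAGE = "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
out = subprocess.run(
    ["podman", "image", "inspect", IMAGE],
    check=True, capture_output=True, text=True,
).stdout
info = json.loads(out)[0]      # inspect prints a one-element JSON array
print(info["Digest"])          # sha256:ecd56e67...
print(info["Labels"]["name"], info["Labels"]["version"])   # ubi9-minimal 9.6
```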
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.743 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.744 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:33:35 compute-0 systemd[1]: libpod-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Deactivated successfully.
Oct 02 19:33:35 compute-0 systemd[1]: libpod-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.scope: Consumed 5.180s CPU time.
Oct 02 19:33:35 compute-0 nova_compute[355794]: 2025-10-02 19:33:35.762 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:35 compute-0 sudo[371520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:33:35 compute-0 podman[371521]: 2025-10-02 19:33:35.7685794 +0000 UTC m=+0.067122954 container died 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Oct 02 19:33:35 compute-0 sudo[371520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:35 compute-0 systemd[1]: 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.timer: Deactivated successfully.
Oct 02 19:33:35 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38.
Oct 02 19:33:35 compute-0 systemd[1]: 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.service: Failed to open /run/systemd/transient/2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.service: No such file or directory
Oct 02 19:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-89429ad713d311412b587f61436e5af086a987cd352971cdba0bbe58ade0f3f1-merged.mount: Deactivated successfully.
Oct 02 19:33:35 compute-0 podman[371521]: 2025-10-02 19:33:35.827120745 +0000 UTC m=+0.125664299 container cleanup 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:33:35 compute-0 python3[371382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop openstack_network_exporter
Oct 02 19:33:35 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:35 compute-0 systemd[1]: 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.timer: Failed to open /run/systemd/transient/2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.timer: No such file or directory
Oct 02 19:33:35 compute-0 systemd[1]: 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.service: Failed to open /run/systemd/transient/2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38-21152a751d75a4be.service: No such file or directory
Oct 02 19:33:35 compute-0 podman[371575]: 2025-10-02 19:33:35.945226091 +0000 UTC m=+0.081616082 container remove 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350)
Oct 02 19:33:35 compute-0 podman[371576]: Error: no container with ID 2648ce379dfd716acc4dd1ccf47c64e348034d98dd19dbe518e505bf512e1f38 found in database: no such container
Oct 02 19:33:35 compute-0 python3[371382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force openstack_network_exporter
Oct 02 19:33:35 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Control process exited, code=exited, status=125/n/a
Oct 02 19:33:35 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:36 compute-0 podman[371628]: 2025-10-02 19:33:36.053663398 +0000 UTC m=+0.083008469 container create c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Oct 02 19:33:36 compute-0 podman[371628]: 2025-10-02 19:33:36.011644626 +0000 UTC m=+0.040989677 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:33:36 compute-0 python3[371382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
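[editor's note] The PODMAN-CONTAINER-DEBUG lines above trace the role's recreate cycle for this container: `podman stop`, then `podman rm --force`, then `podman create`. A condensed sketch of that sequence, with the flags trimmed to ones visible in the logged create command (the real role also passes the full env/volume/healthcheck set):

```python
import subprocess

NAME = "openstack_network_exporter"
IMAGE = "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"

# The old container may already be gone, so the first two steps tolerate failure.
subprocess.run(["podman", "stop", NAME], check=False)
subprocess.run(["podman", "rm", "--force", NAME], check=False)
subprocess.run(
    ["podman", "create", "--name", NAME, "--network", "host",
     "--privileged", "--publish", "9105:9105", IMAGE],
    check=True,
)
```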
Oct 02 19:33:36 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Scheduled restart job, restart counter is at 1.
Oct 02 19:33:36 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 02 19:33:36 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:33:36 compute-0 systemd[1]: Started libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope.
Oct 02 19:33:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:33:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349552256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:33:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:36 compute-0 nova_compute[355794]: 2025-10-02 19:33:36.246 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
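[editor's note] Nova's periodic resource audit shells out to the exact `ceph df` command shown in the CMD lines above to size the RBD-backed disk pool. A sketch of that probe; the `stats` keys follow the usual `ceph df --format json` layout and are an assumption to verify against your release:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]                 # assumed key layout
free_gb = stats["total_avail_bytes"] / 1024**3
print(f"free: {free_gb:.2f} GiB")
```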
Oct 02 19:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:36 compute-0 nova_compute[355794]: 2025-10-02 19:33:36.279 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:33:36 compute-0 nova_compute[355794]: 2025-10-02 19:33:36.303 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:33:36 compute-0 nova_compute[355794]: 2025-10-02 19:33:36.308 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:33:36 compute-0 nova_compute[355794]: 2025-10-02 19:33:36.309 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
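[editor's note] The lockutils DEBUG lines above report two durations for the "compute_resources" lock: "waited" (time blocked on acquire, 0.000s here) and "held" (time inside the critical section, 0.694s here). A generic illustration of that measurement pattern, with threading.Lock standing in for oslo.concurrency's named lock:

```python
import threading
import time

_lock = threading.Lock()

def update_available_resource():
    t0 = time.monotonic()
    with _lock:
        waited = time.monotonic() - t0   # time spent blocking on acquire
        t1 = time.monotonic()
        # ... resource tracker work would run here ...
        held = time.monotonic() - t1     # time spent holding the lock
    print(f"waited {waited:.3f}s, held {held:.3f}s")
```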
Oct 02 19:33:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.
Oct 02 19:33:36 compute-0 podman[371655]: 2025-10-02 19:33:36.331186044 +0000 UTC m=+0.250789772 container init c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct 02 19:33:36 compute-0 podman[371655]: 2025-10-02 19:33:36.369049156 +0000 UTC m=+0.288652854 container start c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *bridge.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *coverage.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *datapath.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *iface.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *memory.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *ovn.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *pmd_perf.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: INFO    19:33:36 main.go:48: registering *vswitch.Collector
Oct 02 19:33:36 compute-0 openstack_network_exporter[371688]: NOTICE  19:33:36 main.go:76: listening on https://:9105/metrics
Oct 02 19:33:36 compute-0 podman[371675]: openstack_network_exporter
Oct 02 19:33:36 compute-0 python3[371382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start openstack_network_exporter
Oct 02 19:33:36 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 02 19:33:36 compute-0 podman[371701]: 2025-10-02 19:33:36.484270915 +0000 UTC m=+0.100072645 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.504219908 +0000 UTC m=+0.068194113 container create 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:33:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3349552256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:33:36 compute-0 systemd[1]: Started libpod-conmon-18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2.scope.
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.479429186 +0000 UTC m=+0.043403401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:36 compute-0 sudo[371379]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.619949881 +0000 UTC m=+0.183924106 container init 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.632040104 +0000 UTC m=+0.196014309 container start 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.637477209 +0000 UTC m=+0.201451434 container attach 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:33:36 compute-0 condescending_lehmann[371761]: 167 167
Oct 02 19:33:36 compute-0 systemd[1]: libpod-18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2.scope: Deactivated successfully.
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.640524101 +0000 UTC m=+0.204498336 container died 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c79312a66cc3dd4996eb74c00a666535ae0c318917bf7e6cce517aa61802201b-merged.mount: Deactivated successfully.
Oct 02 19:33:36 compute-0 podman[371712]: 2025-10-02 19:33:36.70786476 +0000 UTC m=+0.271838965 container remove 18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:33:36 compute-0 systemd[1]: libpod-conmon-18ae3036df816ac3509ce950853aabc2c76282e0352e2e966d7f1470c31a17d2.scope: Deactivated successfully.
Oct 02 19:33:36 compute-0 podman[371830]: 2025-10-02 19:33:36.954749577 +0000 UTC m=+0.068525892 container create ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:33:37 compute-0 systemd[1]: Started libpod-conmon-ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e.scope.
Oct 02 19:33:37 compute-0 podman[371830]: 2025-10-02 19:33:36.934812714 +0000 UTC m=+0.048589049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:33:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc17c6446092f0bba70c882a0eb024ae7d6ef82182e07846d4038879c74b0bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc17c6446092f0bba70c882a0eb024ae7d6ef82182e07846d4038879c74b0bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc17c6446092f0bba70c882a0eb024ae7d6ef82182e07846d4038879c74b0bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc17c6446092f0bba70c882a0eb024ae7d6ef82182e07846d4038879c74b0bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:37 compute-0 podman[371830]: 2025-10-02 19:33:37.071106047 +0000 UTC m=+0.184882402 container init ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:33:37 compute-0 podman[371830]: 2025-10-02 19:33:37.08919596 +0000 UTC m=+0.202972305 container start ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:33:37 compute-0 podman[371830]: 2025-10-02 19:33:37.095773376 +0000 UTC m=+0.209549721 container attach ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:33:37 compute-0 sudo[371953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajqvqponiffjwdjanbmentavtmlbpfev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433616.900727-709-205343287351514/AnsiballZ_stat.py'
Oct 02 19:33:37 compute-0 sudo[371953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:37 compute-0 ceph-mon[191910]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:37 compute-0 python3.9[371955]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:33:37 compute-0 sudo[371953]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]: {
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_id": 1,
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "type": "bluestore"
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     },
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_id": 2,
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "type": "bluestore"
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     },
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_id": 0,
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:         "type": "bluestore"
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]:     }
Oct 02 19:33:38 compute-0 exciting_dubinsky[371867]: }
Oct 02 19:33:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:38 compute-0 systemd[1]: libpod-ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e.scope: Deactivated successfully.
Oct 02 19:33:38 compute-0 systemd[1]: libpod-ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e.scope: Consumed 1.150s CPU time.
Oct 02 19:33:38 compute-0 podman[371830]: 2025-10-02 19:33:38.243552496 +0000 UTC m=+1.357328851 container died ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-edc17c6446092f0bba70c882a0eb024ae7d6ef82182e07846d4038879c74b0bb-merged.mount: Deactivated successfully.
Oct 02 19:33:38 compute-0 podman[371830]: 2025-10-02 19:33:38.337900677 +0000 UTC m=+1.451677052 container remove ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:33:38 compute-0 systemd[1]: libpod-conmon-ceba45c0b11ebd8e05612e751feb1011b51f5b1a89b31f14ba2592adb2c2527e.scope: Deactivated successfully.
Oct 02 19:33:38 compute-0 sudo[371520]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:33:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:33:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev dc725077-c6e5-410d-bec0-1f6ade976e9b does not exist
Oct 02 19:33:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 665d8ee3-8a09-45fb-9180-25332388489f does not exist
Oct 02 19:33:38 compute-0 sudo[372118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:33:38 compute-0 sudo[372118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:38 compute-0 sudo[372118]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:38 compute-0 sudo[372174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpinrvyddbfqpsurftkojxnneaumaqjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433618.1067817-718-41709931268694/AnsiballZ_file.py'
Oct 02 19:33:38 compute-0 sudo[372174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:38 compute-0 sudo[372173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:33:38 compute-0 sudo[372173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:33:38 compute-0 sudo[372173]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:38 compute-0 python3.9[372180]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:38 compute-0 sudo[372174]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:39 compute-0 ceph-mon[191910]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:33:39 compute-0 sudo[372420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqusujhnhinezxhzwgjsaeyeucanqxol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433618.947716-718-168262390588142/AnsiballZ_copy.py'
Oct 02 19:33:39 compute-0 podman[372319]: 2025-10-02 19:33:39.708100581 +0000 UTC m=+0.129307756 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:33:39 compute-0 sudo[372420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:39 compute-0 podman[372324]: 2025-10-02 19:33:39.71330704 +0000 UTC m=+0.117939502 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:33:39 compute-0 podman[372323]: 2025-10-02 19:33:39.72600849 +0000 UTC m=+0.139706784 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 19:33:39 compute-0 podman[372332]: 2025-10-02 19:33:39.745339166 +0000 UTC m=+0.150146663 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Oct 02 19:33:39 compute-0 podman[372325]: 2025-10-02 19:33:39.766168643 +0000 UTC m=+0.158089325 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 19:33:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:39 compute-0 python3.9[372441]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433618.947716-718-168262390588142/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:39 compute-0 sudo[372420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:41 compute-0 sudo[372522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwothldntbkhqvuebhrotreqcsaxnnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433618.947716-718-168262390588142/AnsiballZ_systemd.py'
Oct 02 19:33:41 compute-0 sudo[372522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:41 compute-0 python3.9[372524]: ansible-systemd Invoked with state=started name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:33:41 compute-0 ceph-mon[191910]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:41 compute-0 sudo[372522]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:42 compute-0 sudo[372676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obldasazmldhuntvxcnsuyvenpnwuhir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433621.7706442-738-207087615314136/AnsiballZ_systemd.py'
Oct 02 19:33:42 compute-0 sudo[372676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:43 compute-0 python3.9[372678]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:33:43 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct 02 19:33:43 compute-0 systemd[1]: libpod-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:33:43 compute-0 podman[372682]: 2025-10-02 19:33:43.325355041 +0000 UTC m=+0.098857043 container died c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:33:43 compute-0 systemd[1]: c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc-4a589f81ff3d6c91.timer: Deactivated successfully.
Oct 02 19:33:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.
Oct 02 19:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc-userdata-shm.mount: Deactivated successfully.
Oct 02 19:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88-merged.mount: Deactivated successfully.
Oct 02 19:33:43 compute-0 podman[372682]: 2025-10-02 19:33:43.416171167 +0000 UTC m=+0.189673139 container cleanup c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:33:43 compute-0 podman[372682]: openstack_network_exporter
Oct 02 19:33:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:33:43 compute-0 ceph-mon[191910]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:43 compute-0 systemd[1]: libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:33:43 compute-0 podman[372711]: openstack_network_exporter
Oct 02 19:33:43 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 02 19:33:43 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 02 19:33:43 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:33:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19e210ffeb7e901fba74f018fcf4dc89a63164ef7a78022886ea9ac39aadc88/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:33:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.
Oct 02 19:33:43 compute-0 podman[372722]: 2025-10-02 19:33:43.742710773 +0000 UTC m=+0.162967356 container init c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41)
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *bridge.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *coverage.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *datapath.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *iface.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *memory.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *ovn.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *pmd_perf.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: INFO    19:33:43 main.go:48: registering *vswitch.Collector
Oct 02 19:33:43 compute-0 openstack_network_exporter[372736]: NOTICE  19:33:43 main.go:76: listening on https://:9105/metrics
Oct 02 19:33:43 compute-0 podman[372722]: 2025-10-02 19:33:43.778681285 +0000 UTC m=+0.198937858 container start c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Oct 02 19:33:43 compute-0 podman[372722]: openstack_network_exporter
Oct 02 19:33:43 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 02 19:33:43 compute-0 sudo[372676]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:43 compute-0 podman[372746]: 2025-10-02 19:33:43.90052487 +0000 UTC m=+0.099033067 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container)
Oct 02 19:33:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:44 compute-0 sudo[372916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izeghlvdbuxhmdmfsrdqxhwbvhpaoabp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433624.0980892-746-244203387629465/AnsiballZ_find.py'
Oct 02 19:33:44 compute-0 sudo[372916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:44 compute-0 python3.9[372918]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:33:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:44 compute-0 sudo[372916]: pam_unix(sudo:session): session closed for user root
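[editor's note] The ansible-ansible.builtin.find invocation above lists only the immediate, non-hidden subdirectories of the healthchecks root (file_type=directory, recurse=False, hidden=False). A rough Python equivalent of that probe:

```python
# Rough stand-in for the logged ansible.builtin.find call: top-level,
# non-hidden directories under the healthchecks root.
import os

path = "/var/lib/openstack/healthchecks/"
matches = [
    os.path.join(path, name)
    for name in os.listdir(path)
    if os.path.isdir(os.path.join(path, name)) and not name.startswith(".")
]
print(matches)  # one entry per per-container healthcheck directory
```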
Oct 02 19:33:45 compute-0 ceph-mon[191910]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:45 compute-0 podman[372966]: 2025-10-02 19:33:45.720498033 +0000 UTC m=+0.139170380 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
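[editor's note] The node_exporter above listens on 9100 with TLS enabled through --web.config.file, its certificates bind-mounted from /var/lib/openstack/certs/telemetry/default. A hedged probe of its /metrics endpoint; the ca.crt filename under that directory is an assumption based on the volume mounts in config_data:

```python
# Hedged probe of the TLS-enabled node_exporter endpoint logged above.
import ssl
import urllib.request

ctx = ssl.create_default_context(
    cafile="/var/lib/openstack/certs/telemetry/default/ca.crt"  # assumed name
)
ctx.check_hostname = False  # cert is issued for the service name, not localhost

with urllib.request.urlopen("https://localhost:9100/metrics", context=ctx) as resp:
    print(resp.read(300).decode())  # first few metric lines
```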
Oct 02 19:33:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:46 compute-0 sudo[373091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxazjoqrcjmvgfxgpudtzpzpqwseser ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433625.4744468-756-242887300387061/AnsiballZ_podman_container_info.py'
Oct 02 19:33:46 compute-0 sudo[373091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:46 compute-0 python3.9[373093]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:33:46 compute-0 sudo[373091]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:47 compute-0 ceph-mon[191910]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:47 compute-0 sudo[373256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staimdmipzflumhatjxbouzirfotljix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433627.0408714-764-97028218031975/AnsiballZ_podman_container_exec.py'
Oct 02 19:33:47 compute-0 sudo[373256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:48 compute-0 python3.9[373258]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:33:48 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:33:48 compute-0 podman[373259]: 2025-10-02 19:33:48.200940004 +0000 UTC m=+0.140786373 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:33:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:48 compute-0 podman[373259]: 2025-10-02 19:33:48.242232608 +0000 UTC m=+0.182078937 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 02 19:33:48 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:33:48 compute-0 sudo[373256]: pam_unix(sudo:session): session closed for user root
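[editor's note] The podman_container_exec tasks above run `id -u` (and next, `id -g`) inside ovn_controller to discover which UID/GID owns the container's processes; the result feeds the ansible.builtin.file task that fixes ownership of the healthcheck directory. A sketch of the same probe, assuming podman on PATH and an existing container:

```python
# Sketch of the UID/GID probe performed by the logged podman_container_exec tasks.
import subprocess

def container_id(name: str, flag: str) -> int:
    out = subprocess.run(
        ["podman", "exec", name, "id", flag],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())

uid = container_id("ovn_controller", "-u")
gid = container_id("ovn_controller", "-g")
print(uid, gid)  # consumed by the ansible.builtin.file task that follows
```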
Oct 02 19:33:49 compute-0 sudo[373436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnupqzmompmbuntkxassstcdcnievetr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433628.5708017-772-78276975221347/AnsiballZ_podman_container_exec.py'
Oct 02 19:33:49 compute-0 sudo[373436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:49 compute-0 python3.9[373438]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:33:49 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:33:49 compute-0 podman[373439]: 2025-10-02 19:33:49.411260386 +0000 UTC m=+0.113994387 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:33:49 compute-0 podman[373439]: 2025-10-02 19:33:49.445486141 +0000 UTC m=+0.148220202 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:33:49 compute-0 ceph-mon[191910]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:49 compute-0 sudo[373436]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:49 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:33:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:50 compute-0 sudo[373618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozzwhcnptqeumfblgreywbkeutlvaav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433629.8077078-780-177071780744915/AnsiballZ_file.py'
Oct 02 19:33:50 compute-0 sudo[373618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:50 compute-0 python3.9[373620]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:50 compute-0 sudo[373618]: pam_unix(sudo:session): session closed for user root
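[editor's note] The ansible.builtin.file task above (owner=0, group=0, mode=0700, recurse=True) resets the whole healthcheck tree to the UID/GID discovered a moment earlier. A minimal stand-in for what that task does on disk; the parameter values come straight from the logged invocation:

```python
# Minimal stand-in for the logged ansible.builtin.file task: recursively set
# owner/group/mode on the per-container healthcheck directory (needs root).
import os

def own_tree(path: str, uid: int, gid: int, mode: int) -> None:
    os.chown(path, uid, gid)
    os.chmod(path, mode)
    for root, dirs, files in os.walk(path):
        for entry in dirs + files:
            full = os.path.join(root, entry)
            os.chown(full, uid, gid)
            os.chmod(full, mode)

own_tree("/var/lib/openstack/healthchecks/ovn_controller", 0, 0, 0o700)
```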
Oct 02 19:33:51 compute-0 ceph-mon[191910]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:52 compute-0 sudo[373770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rigpxuhhcwhqokdvtcxrbqvesoklwgbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433631.1084006-789-239896476668444/AnsiballZ_podman_container_info.py'
Oct 02 19:33:52 compute-0 sudo[373770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:52 compute-0 python3.9[373772]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:33:52 compute-0 sudo[373770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:53 compute-0 ceph-mon[191910]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:54 compute-0 sudo[373935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktokxoqjqtcdcwbvhvmghroqyxsnqqje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433633.0268047-797-227803655547081/AnsiballZ_podman_container_exec.py'
Oct 02 19:33:54 compute-0 sudo[373935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:54 compute-0 python3.9[373937]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:33:54 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:33:54 compute-0 podman[373938]: 2025-10-02 19:33:54.551243985 +0000 UTC m=+0.130400450 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20250930, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 02 19:33:54 compute-0 podman[373938]: 2025-10-02 19:33:54.585841775 +0000 UTC m=+0.164998250 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 02 19:33:54 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:33:54 compute-0 sudo[373935]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:33:55 compute-0 sudo[374132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxevfbwtxvxfrmeeuaywmwmljtiuyiqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433634.91865-805-107923732021487/AnsiballZ_podman_container_exec.py'
Oct 02 19:33:55 compute-0 sudo[374132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:55 compute-0 podman[374091]: 2025-10-02 19:33:55.505809726 +0000 UTC m=+0.120764664 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:33:55 compute-0 ceph-mon[191910]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:55 compute-0 python3.9[374138]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:33:55 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:33:55 compute-0 podman[374139]: 2025-10-02 19:33:55.893876448 +0000 UTC m=+0.154911391 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:33:55 compute-0 podman[374139]: 2025-10-02 19:33:55.931228442 +0000 UTC m=+0.192263325 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:33:55 compute-0 sudo[374132]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:56 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:33:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:56 compute-0 sudo[374317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-connuedbzwykouhquyymhimcxtlsbuut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433636.2879615-813-189974346711517/AnsiballZ_file.py'
Oct 02 19:33:56 compute-0 sudo[374317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:57 compute-0 python3.9[374319]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:33:57 compute-0 sudo[374317]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:57 compute-0 ceph-mon[191910]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:57 compute-0 sudo[374469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhtddcsphfcxcmglkmeoyynzptvlrcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433637.4085088-822-31355411008354/AnsiballZ_podman_container_info.py'
Oct 02 19:33:57 compute-0 sudo[374469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:58 compute-0 python3.9[374471]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:33:58 compute-0 sudo[374469]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:59 compute-0 sudo[374631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aggqonicmzqtzsuqbxypyvtfzilfexay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433638.5197604-830-9756805696958/AnsiballZ_podman_container_exec.py'
Oct 02 19:33:59 compute-0 sudo[374631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:59 compute-0 python3.9[374633]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:33:59 compute-0 ceph-mon[191910]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:33:59 compute-0 systemd[1]: Started libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope.
Oct 02 19:33:59 compute-0 podman[374634]: 2025-10-02 19:33:59.599781356 +0000 UTC m=+0.149018025 container exec fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:33:59 compute-0 podman[374634]: 2025-10-02 19:33:59.63189161 +0000 UTC m=+0.181128319 container exec_died fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:33:59 compute-0 sudo[374631]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:59 compute-0 systemd[1]: libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope: Deactivated successfully.
Oct 02 19:33:59 compute-0 podman[157186]: time="2025-10-02T19:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:33:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct 02 19:33:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8501 "" "Go-http-client/1.1"
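[editor's note] The two GET requests above are podman_exporter talking to the libpod REST API over /run/podman/podman.sock (the socket it mounts per its config_data). A minimal client for the same containers/json endpoint; needs root to reach the socket:

```python
# Minimal libpod REST client over the unix socket used by podman_exporter.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self._sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```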
Oct 02 19:33:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:00 compute-0 podman[374790]: 2025-10-02 19:34:00.493030327 +0000 UTC m=+0.113405928 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:34:00 compute-0 sudo[374829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buqbwlfgggcpheuiubhzqxwagtbpsjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433639.9794936-838-238272584308146/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:00 compute-0 sudo[374829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:00 compute-0 python3.9[374838]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:00 compute-0 systemd[1]: Started libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope.
Oct 02 19:34:00 compute-0 podman[374839]: 2025-10-02 19:34:00.852755936 +0000 UTC m=+0.129302371 container exec fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:34:00 compute-0 podman[374839]: 2025-10-02 19:34:00.887242483 +0000 UTC m=+0.163788948 container exec_died fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:34:00 compute-0 sudo[374829]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:00 compute-0 systemd[1]: libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope: Deactivated successfully.
Oct 02 19:34:01 compute-0 openstack_network_exporter[372736]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:01 compute-0 openstack_network_exporter[372736]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:01 compute-0 openstack_network_exporter[372736]: ERROR   19:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:34:01 compute-0 openstack_network_exporter[372736]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:34:01 compute-0 openstack_network_exporter[372736]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
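[editor's note] The appctl errors above mean the exporter found no *.ctl control socket for ovn-northd or ovsdb-server; on a compute node that runs only ovn-controller this is expected noise rather than a fault. A quick check mirroring that lookup, using the /run paths bind-mounted into the exporter per its config_data (socket names follow the usual <daemon>.<pid>.ctl convention):

```python
# Check for the OVN/OVS control sockets the exporter failed to find.
import glob

for pattern in ("/run/ovn/ovn-northd.*.ctl",
                "/run/openvswitch/ovsdb-server.*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "missing (expected on a compute-only node)")
```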
Oct 02 19:34:01 compute-0 ceph-mon[191910]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:01 compute-0 sudo[375021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byininaacciqpyxqqcvgapiltnxsqoyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433641.2328217-846-124029801065529/AnsiballZ_file.py'
Oct 02 19:34:01 compute-0 sudo[375021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:02 compute-0 python3.9[375023]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:02 compute-0 sudo[375021]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:34:03
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
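[editor's note] The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0/10 changes, i.e. PG placement across the listed pools is already even. A hedged way to inspect the same state from the CLI, assuming a ceph client and admin keyring on the node:

```python
# Inspect balancer state and the pools it considered (see the log lines above).
import subprocess

for cmd in (["ceph", "balancer", "status"],
            ["ceph", "osd", "pool", "ls"]):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```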
Oct 02 19:34:03 compute-0 ceph-mon[191910]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:03 compute-0 sudo[375185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfrarkytulszkrxyzhmzvwimtuhnavsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433642.8534412-855-163560157555610/AnsiballZ_podman_container_info.py'
Oct 02 19:34:03 compute-0 sudo[375185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:03 compute-0 podman[375147]: 2025-10-02 19:34:03.635269771 +0000 UTC m=+0.165774360 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:34:03 compute-0 python3.9[375192]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:34:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:34:03 compute-0 sudo[375185]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:05 compute-0 sudo[375355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evcrgneegfripvmttrsngfknwnxmsplm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433644.2675-863-93398183068191/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:05 compute-0 sudo[375355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:05 compute-0 ceph-mon[191910]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:05 compute-0 python3.9[375357]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:05 compute-0 systemd[1]: Started libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope.
Oct 02 19:34:05 compute-0 podman[375358]: 2025-10-02 19:34:05.869766699 +0000 UTC m=+0.166176730 container exec 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:34:05 compute-0 podman[375358]: 2025-10-02 19:34:05.904877293 +0000 UTC m=+0.201287294 container exec_died 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:34:05 compute-0 systemd[1]: libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:34:05 compute-0 sudo[375355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:06 compute-0 sudo[375535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogpdihdmfppiqhigovzwtjibztreetb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433646.2757087-871-34795471671599/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:06 compute-0 sudo[375535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:07 compute-0 python3.9[375537]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:07 compute-0 systemd[1]: Started libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope.
Oct 02 19:34:07 compute-0 podman[375538]: 2025-10-02 19:34:07.172697637 +0000 UTC m=+0.131382615 container exec 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:34:07 compute-0 podman[375538]: 2025-10-02 19:34:07.204663038 +0000 UTC m=+0.163348036 container exec_died 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:34:07 compute-0 sudo[375535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:07 compute-0 systemd[1]: libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:34:07 compute-0 ceph-mon[191910]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:08 compute-0 sudo[375717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwyttodosyhfcpewwupgiobsraynzfww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433647.6131237-879-191478768461743/AnsiballZ_file.py'
Oct 02 19:34:08 compute-0 sudo[375717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:08 compute-0 python3.9[375719]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:08 compute-0 sudo[375717]: pam_unix(sudo:session): session closed for user root
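Each privileged task in this run follows Ansible's become pattern visible in the sudo lines: the module payload is wrapped in /bin/sh -c 'echo BECOME-SUCCESS-<marker> ; /usr/bin/python3.9 .../AnsiballZ_*.py', and the echoed 32-character marker tells the controller that privilege escalation succeeded before module output begins. A sketch of the wrapper's shape (Ansible builds this internally; the marker alphabet and length here just mirror the logged ones):

    import secrets
    import shlex
    import string

    def become_wrapper(module_path: str) -> str:
        # 32 random lowercase letters, like "uwyttodosyhfcpewwupgiobsraynzfww" above.
        marker = "".join(secrets.choice(string.ascii_lowercase) for _ in range(32))
        inner = f"echo BECOME-SUCCESS-{marker} ; /usr/bin/python3.9 {module_path}"
        return f"/bin/sh -c {shlex.quote(inner)}"

    # Path elided as in the log; the tmp directory name is generated per task.
    print(become_wrapper("/home/zuul/.ansible/tmp/ansible-tmp-.../AnsiballZ_file.py"))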
Oct 02 19:34:09 compute-0 sudo[375869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvfelbbspufivejqeekqplybdlcmczjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433648.8675747-888-236147658198008/AnsiballZ_podman_container_info.py'
Oct 02 19:34:09 compute-0 sudo[375869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:09 compute-0 ceph-mon[191910]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:09 compute-0 python3.9[375871]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:34:09 compute-0 sudo[375869]: pam_unix(sudo:session): session closed for user root
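The podman_container_info task above is essentially a wrapper over podman container inspect, returning its JSON to the controller. A minimal stand-in under that assumption (not the collection's implementation):

    import json
    import subprocess

    # Rough equivalent of podman_container_info for one container.
    out = subprocess.check_output(
        ["podman", "container", "inspect", "openstack_network_exporter"])
    info = json.loads(out)[0]
    print(info["State"]["Status"], info["ImageName"])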
Oct 02 19:34:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
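The _set_new_cache_sizes line is the mon's periodic cache auto-tuner. Sanity-checking its numbers: the three allocations (incremental osdmap, full osdmap, and rocksdb kv caches, which is my reading of the field names, not spelled out in the log) nearly exhaust the cache budget:

    cache_size = 1020054731          # bytes, from the log line above
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 322961408

    spent = inc_alloc + full_alloc + kv_alloc
    print(spent, cache_size - spent)        # 1019215872, 838859 bytes of headroom
    print(f"{cache_size / 2**30:.2f} GiB")  # 0.95 GiB budget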
Oct 02 19:34:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:10 compute-0 sudo[376099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqfxwgqcexsbjtrrcdeojovvrvtxfvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433650.1235259-896-104780768909505/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:10 compute-0 sudo[376099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:10 compute-0 podman[376012]: 2025-10-02 19:34:10.716447521 +0000 UTC m=+0.111750653 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Oct 02 19:34:10 compute-0 podman[376010]: 2025-10-02 19:34:10.72129478 +0000 UTC m=+0.122160120 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 19:34:10 compute-0 podman[376009]: 2025-10-02 19:34:10.728757999 +0000 UTC m=+0.130401200 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:34:10 compute-0 podman[376008]: 2025-10-02 19:34:10.736237428 +0000 UTC m=+0.142410129 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:34:10 compute-0 podman[376011]: 2025-10-02 19:34:10.766689378 +0000 UTC m=+0.156218236 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
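The burst of health_status=healthy events above is podman's per-container healthcheck timer firing: each run executes the configured 'test' command inside the container and records the result plus the failing streak (health_failing_streak=0 throughout this window). Below is a rough, illustrative mapping from the logged config_data dicts to podman run flags. This is not the edpm_ansible implementation; the edpm-specific 'mount' healthcheck key is carried through the volumes list, and 'privileged' appears both as True and as 'true' in the log, so both are handled:

    def podman_run_argv(name: str, cfg: dict) -> list[str]:
        """Sketch: translate a config_data dict (as logged above) to podman run flags."""
        argv = ["podman", "run", "-d", "--name", name]
        if cfg.get("net"):
            argv += ["--network", cfg["net"]]
        if cfg.get("privileged") in (True, "true"):
            argv += ["--privileged"]
        if cfg.get("user"):
            argv += ["--user", str(cfg["user"])]
        argv += ["--restart", cfg.get("restart", "no")]
        for port in cfg.get("ports", []):
            argv += ["--publish", port]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        if "healthcheck" in cfg:
            argv += ["--health-cmd", cfg["healthcheck"]["test"]]
        argv += [cfg["image"]]
        cmd = cfg.get("command", [])
        argv += cmd if isinstance(cmd, list) else [cmd]
        return argv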
Oct 02 19:34:10 compute-0 python3.9[376127]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:11 compute-0 systemd[1]: Started libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope.
Oct 02 19:34:11 compute-0 podman[376134]: 2025-10-02 19:34:11.05700213 +0000 UTC m=+0.138736631 container exec c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Oct 02 19:34:11 compute-0 podman[376134]: 2025-10-02 19:34:11.097018385 +0000 UTC m=+0.178752836 container exec_died c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:34:11 compute-0 systemd[1]: libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:34:11 compute-0 sudo[376099]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:11 compute-0 PackageKit[338154]: daemon quit
Oct 02 19:34:11 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 19:34:11 compute-0 ceph-mon[191910]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:12 compute-0 sudo[376312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmkrtrbolqohapffcljwlbagmuqsptvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433651.4781444-904-273710426263956/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:12 compute-0 sudo[376312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:12 compute-0 python3.9[376314]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:12 compute-0 systemd[1]: Started libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope.
Oct 02 19:34:12 compute-0 podman[376315]: 2025-10-02 19:34:12.455871931 +0000 UTC m=+0.159640088 container exec c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public)
Oct 02 19:34:12 compute-0 podman[376315]: 2025-10-02 19:34:12.48968012 +0000 UTC m=+0.193448197 container exec_died c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:34:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
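Each "pg target" above is capacity_ratio x bias x an overall PG budget. Dividing every logged target by its ratio and bias yields 300 in all cases, consistent with the default mon_target_pg_per_osd = 100 on a 3-OSD cluster; the OSD count is an inference from this excerpt, though 64411926528 bytes in the effective_target_ratio lines is the same ~60 GiB the pgmap reports. The target is then quantized to a power of two, subject to per-pool minimums and the autoscaler's no-change threshold, which is my reading of why cephfs.cephfs.meta shows "quantized to 16 (current 32)" with no adjustment applied:

    # Re-deriving the logged pg targets: target = capacity_ratio * bias * PG_BUDGET,
    # where PG_BUDGET = 300 is inferred (default mon_target_pg_per_osd=100 x 3 OSDs).
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    PG_BUDGET = 100 * 3
    for pool, (ratio, bias) in pools.items():
        print(f"{pool:20s} {ratio * bias * PG_BUDGET:.19f}")
    # .mgr               -> 0.0021557249951162337  (matches the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (matches; quantized to 16)
    # ... the remaining pools reproduce the logged values the same way.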
Oct 02 19:34:12 compute-0 sudo[376312]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:12 compute-0 systemd[1]: libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:34:13 compute-0 sudo[376494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dufokwkdfcejyruredtpjzumjqbhvcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433652.8517754-912-15700329626657/AnsiballZ_file.py'
Oct 02 19:34:13 compute-0 sudo[376494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:13 compute-0 ceph-mon[191910]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:13 compute-0 python3.9[376496]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:13 compute-0 sudo[376494]: pam_unix(sudo:session): session closed for user root
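The cycle just completed for openstack_network_exporter (podman_container_info, then id -u and id -g execs, then ansible.builtin.file with the resulting owner and group) is the role discovering which uid/gid the container runs as and handing it the host-side healthcheck directory; the same cycle repeats below for ceilometer_agent_ipmi (owner 42405) and kepler (owner 0). Reduced to plain Python under the same assumptions (root on the host, podman CLI available; the task's recurse=True is omitted here):

    import os
    import subprocess

    def own_healthcheck_dir(container: str, path: str) -> None:
        """Chown the healthcheck dir to the container's effective uid/gid."""
        uid = int(subprocess.check_output(["podman", "exec", container, "id", "-u"]).strip())
        gid = int(subprocess.check_output(["podman", "exec", container, "id", "-g"]).strip())
        os.makedirs(path, mode=0o700, exist_ok=True)
        os.chown(path, uid, gid)  # e.g. 42405:42405 for ceilometer_agent_ipmi, 0:0 here

    own_healthcheck_dir("openstack_network_exporter",
                        "/var/lib/openstack/healthchecks/openstack_network_exporter")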
Oct 02 19:34:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:14 compute-0 podman[376521]: 2025-10-02 19:34:14.724720132 +0000 UTC m=+0.138009852 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Oct 02 19:34:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:15 compute-0 sudo[376666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzlqmascngtflrhrsvipnwllqamaivo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433654.720476-921-280194258672179/AnsiballZ_podman_container_info.py'
Oct 02 19:34:15 compute-0 sudo[376666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:15 compute-0 python3.9[376668]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct 02 19:34:15 compute-0 sudo[376666]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:15 compute-0 ceph-mon[191910]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:16 compute-0 podman[376780]: 2025-10-02 19:34:16.719000549 +0000 UTC m=+0.140397235 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
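The node_exporter above runs tightly scoped: TLS via --web.config.file, most collectors disabled, and the systemd collector restricted to an allow-list regex. Checking that regex in Python (node_exporter anchors the pattern itself, which fullmatch approximates; the unit names here are hypothetical examples, except openvswitch.service, which appears in the depends_on entries above):

    import re

    # The --collector.systemd.unit-include pattern from the log line above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_iscsid.service", "openvswitch.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # True, True, True, False: only edpm/OVS/virt/rsyslog units are exported.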
Oct 02 19:34:16 compute-0 sudo[376852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbdldlivbwcixneukfyjrrpauvmvjot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433656.2968862-929-194078375152797/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:16 compute-0 sudo[376852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:17 compute-0 python3.9[376854]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:17 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:34:17 compute-0 podman[376855]: 2025-10-02 19:34:17.325627766 +0000 UTC m=+0.142072660 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:34:17 compute-0 podman[376855]: 2025-10-02 19:34:17.362345953 +0000 UTC m=+0.178790817 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 19:34:17 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:34:17 compute-0 sudo[376852]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:17 compute-0 ceph-mon[191910]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:18 compute-0 sudo[377037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukhxbgwykmhfgnkdzbmmvtplpnzhzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433657.7055771-937-156145820397678/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:18 compute-0 sudo[377037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:18 compute-0 python3.9[377039]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:18 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:34:18 compute-0 podman[377040]: 2025-10-02 19:34:18.818095606 +0000 UTC m=+0.146567390 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:34:18 compute-0 podman[377040]: 2025-10-02 19:34:18.854682049 +0000 UTC m=+0.183153773 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:34:18 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:34:18 compute-0 sudo[377037]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:19 compute-0 ceph-mon[191910]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:19 compute-0 sudo[377219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyebtzcvcgvwyvkigbrboqtsytqztbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433659.2385826-945-230624584646969/AnsiballZ_file.py'
Oct 02 19:34:19 compute-0 sudo[377219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:19 compute-0 python3.9[377221]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:19 compute-0 sudo[377219]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:20 compute-0 sudo[377371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viaaszlgorludkvyziqbbrzavbtslrog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433660.3142455-954-128914160514644/AnsiballZ_podman_container_info.py'
Oct 02 19:34:20 compute-0 sudo[377371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:21 compute-0 python3.9[377373]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct 02 19:34:21 compute-0 sudo[377371]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:21 compute-0 ceph-mon[191910]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:22 compute-0 sudo[377535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jruouzbvydyoaaoqnwdojbyafiiycbwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433661.5155137-962-281181000966498/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:22 compute-0 sudo[377535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:22 compute-0 python3.9[377537]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:22 compute-0 systemd[1]: Started libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope.
Oct 02 19:34:22 compute-0 podman[377538]: 2025-10-02 19:34:22.398344981 +0000 UTC m=+0.118931294 container exec df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:34:22 compute-0 podman[377538]: 2025-10-02 19:34:22.440069351 +0000 UTC m=+0.160655634 container exec_died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9)
Oct 02 19:34:22 compute-0 systemd[1]: libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:34:22 compute-0 sudo[377535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:23 compute-0 sudo[377719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaeqdisldngdbnmhuytfqksoiihxnqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433662.8019562-970-202744411885850/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:23 compute-0 sudo[377719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:23 compute-0 python3.9[377721]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:23 compute-0 systemd[1]: Started libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope.
Oct 02 19:34:23 compute-0 ceph-mon[191910]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:23 compute-0 podman[377722]: 2025-10-02 19:34:23.738259462 +0000 UTC m=+0.154136290 container exec df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, distribution-scope=public, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9)
Oct 02 19:34:23 compute-0 podman[377722]: 2025-10-02 19:34:23.772733619 +0000 UTC m=+0.188610397 container exec_died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Oct 02 19:34:23 compute-0 systemd[1]: libpod-conmon-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:34:23 compute-0 sudo[377719]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:24 compute-0 sudo[377900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvxavevcllqtxutmtlriwxkndkfqldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433664.1284792-978-126951174822244/AnsiballZ_file.py'
Oct 02 19:34:24 compute-0 sudo[377900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:24 compute-0 python3.9[377902]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
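(Annotation: the ansible.builtin.file task above enforces state=directory, owner=0, group=0, mode=0700, recurse=True on the kepler healthcheck directory. A plain-Python sketch of the same end state, assuming root privileges as in the sudo session shown — the module itself handles idempotence and SELinux attributes beyond this:)

```python
# Sketch of what the logged ansible.builtin.file task enforces:
# the healthcheck directory exists, root-owned, mode 0700, recursively.
# Assumes the process runs as root.
import os

path = "/var/lib/openstack/healthchecks/kepler"
os.makedirs(path, mode=0o700, exist_ok=True)
for dirpath, _dirnames, filenames in os.walk(path):
    for entry in [dirpath, *(os.path.join(dirpath, f) for f in filenames)]:
        os.chown(entry, 0, 0)   # owner=0, group=0
        os.chmod(entry, 0o700)  # mode=0700
```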
Oct 02 19:34:24 compute-0 sudo[377900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:25 compute-0 ceph-mon[191910]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:25 compute-0 podman[378010]: 2025-10-02 19:34:25.725086183 +0000 UTC m=+0.141343711 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 19:34:25 compute-0 sudo[378070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybovnmyoyejzqsrftvegfiiivjejtrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433665.2064543-987-20831412586313/AnsiballZ_podman_container_info.py'
Oct 02 19:34:25 compute-0 sudo[378070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:26 compute-0 python3.9[378072]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 02 19:34:26 compute-0 sudo[378070]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:27 compute-0 sudo[378233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfwfjtawwucxqpfgpjuxgngnrfxavzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433666.6606312-995-205332520647560/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:27 compute-0 sudo[378233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:27 compute-0 python3.9[378235]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:27 compute-0 systemd[1]: Started libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope.
Oct 02 19:34:27 compute-0 podman[378236]: 2025-10-02 19:34:27.652759558 +0000 UTC m=+0.156932705 container exec 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:34:27 compute-0 podman[378236]: 2025-10-02 19:34:27.688069328 +0000 UTC m=+0.192242465 container exec_died 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:34:27 compute-0 ceph-mon[191910]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:27 compute-0 sudo[378233]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:27 compute-0 systemd[1]: libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope: Deactivated successfully.
Oct 02 19:34:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:29 compute-0 sudo[378416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjrasxrmppoxktjnvjiwviskpizurtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433668.0327961-1003-117460620968187/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:29 compute-0 sudo[378416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:29 compute-0 python3.9[378418]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:29 compute-0 systemd[1]: Started libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope.
Oct 02 19:34:29 compute-0 podman[378419]: 2025-10-02 19:34:29.648959048 +0000 UTC m=+0.150346050 container exec 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:34:29 compute-0 podman[378419]: 2025-10-02 19:34:29.684733669 +0000 UTC m=+0.186120661 container exec_died 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:34:29 compute-0 systemd[1]: libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope: Deactivated successfully.
Oct 02 19:34:29 compute-0 podman[157186]: time="2025-10-02T19:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:34:29 compute-0 ceph-mon[191910]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:29 compute-0 sudo[378416]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:34:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8523 "" "Go-http-client/1.1"
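(Annotation: the two GET requests above are the libpod REST API served by the podman system service over its unix socket; the podman_exporter config_data later in this log mounts /run/podman/podman.sock, which is the assumed socket path in this minimal client sketch:)

```python
# Minimal libpod REST client over the podman unix socket (sketch;
# assumes the service listens on /run/podman/podman.sock as the
# exporter config_data suggests).
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])  # same payload as the logged GET
```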
Oct 02 19:34:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:30 compute-0 sudo[378599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aivomakywfhpnvuxzwjxjtnxiizewyaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433670.0113142-1011-179419278144243/AnsiballZ_file.py'
Oct 02 19:34:30 compute-0 sudo[378599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:30 compute-0 podman[378600]: 2025-10-02 19:34:30.734977616 +0000 UTC m=+0.153325429 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:34:30 compute-0 python3.9[378604]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:30 compute-0 sudo[378599]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:31 compute-0 openstack_network_exporter[372736]: ERROR   19:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:34:31 compute-0 openstack_network_exporter[372736]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:31 compute-0 openstack_network_exporter[372736]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:31 compute-0 openstack_network_exporter[372736]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct 02 19:34:31 compute-0 openstack_network_exporter[372736]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:34:31 compute-0 sudo[378774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlmljmacivtzknhtmxnpqgvxndebgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433671.1481895-1020-101826179712869/AnsiballZ_podman_container_info.py'
Oct 02 19:34:31 compute-0 sudo[378774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:31 compute-0 ceph-mon[191910]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:31 compute-0 python3.9[378776]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 02 19:34:32 compute-0 sudo[378774]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:34:32.279 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:34:32.279 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:34:32.280 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
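(Annotation: the three ovn_metadata_agent lines above are the standard oslo.concurrency trace — a method running under a named lock logs the acquire, hold, and release. A minimal reproduction of the pattern, assuming oslo.concurrency is installed; the real neutron code wraps ProcessMonitor._check_child_processes similarly:)

```python
# Minimal reproduction of the acquire/release DEBUG lines above;
# the lock name matches the one shown in the log.
from oslo_concurrency import lockutils


@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    pass  # body runs with the named lock held; entry/exit are logged


_check_child_processes()
```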
Oct 02 19:34:32 compute-0 sudo[378938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjyzfivvgprwenfivefpyvyykogdlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433672.2588727-1028-82976734837008/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:32 compute-0 sudo[378938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:33 compute-0 python3.9[378940]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:33 compute-0 systemd[1]: Started libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope.
Oct 02 19:34:33 compute-0 podman[378941]: 2025-10-02 19:34:33.206714674 +0000 UTC m=+0.169225272 container exec a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:34:33 compute-0 podman[378941]: 2025-10-02 19:34:33.242298231 +0000 UTC m=+0.204808759 container exec_died a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:34:33 compute-0 systemd[1]: libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope: Deactivated successfully.
Oct 02 19:34:33 compute-0 sudo[378938]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:34:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:34:33 compute-0 ceph-mon[191910]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:34 compute-0 sudo[379135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egwjfyqpcneeyfimffoarlqpbikzpvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433673.5988948-1036-148207395541725/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:34 compute-0 sudo[379135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:34 compute-0 podman[379094]: 2025-10-02 19:34:34.185137941 +0000 UTC m=+0.138595858 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:34:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:34 compute-0 python3.9[379140]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:34 compute-0 systemd[1]: Started libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope.
Oct 02 19:34:34 compute-0 podman[379141]: 2025-10-02 19:34:34.578403571 +0000 UTC m=+0.158038074 container exec a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:34:34 compute-0 podman[379141]: 2025-10-02 19:34:34.614022018 +0000 UTC m=+0.193656541 container exec_died a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:34:34 compute-0 sudo[379135]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:34 compute-0 systemd[1]: libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope: Deactivated successfully.
Oct 02 19:34:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.292 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.293 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.293 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.293 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.294 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.294 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.294 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[355794]: 2025-10-02 19:34:35.294 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
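(Annotation: the burst of "Running periodic task ComputeManager._*" lines above, including the reclaim_instance_interval skip, comes from oslo.service's periodic-task runner invoking each decorated manager method in turn. A sketch of the registration pattern, assuming oslo.service and oslo.config are installed — Manager and _poll_volume_usage here are stand-ins, not nova's classes:)

```python
# Sketch of the oslo.service periodic-task pattern behind the
# nova-compute lines above.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF


class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task
    def _poll_volume_usage(self, context):
        # each registered task produces a "Running periodic task ..."
        # DEBUG line when the runner dispatches it
        pass


mgr = Manager()
mgr.run_periodic_tasks(context=None)
```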
Oct 02 19:34:35 compute-0 sudo[379321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekixbzrodohrvkjpwjktpadottxdbllt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433674.9678605-1044-126140109792256/AnsiballZ_file.py'
Oct 02 19:34:35 compute-0 sudo[379321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:35 compute-0 python3.9[379323]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:35 compute-0 sudo[379321]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:35 compute-0 ceph-mon[191910]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.808263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675808800, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3475056, "memory_usage": 3542872, "flush_reason": "Manual Compaction"}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675838251, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3410038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16310, "largest_seqno": 18351, "table_properties": {"data_size": 3400771, "index_size": 5889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18154, "raw_average_key_size": 19, "raw_value_size": 3382358, "raw_average_value_size": 3680, "num_data_blocks": 267, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433441, "oldest_key_time": 1759433441, "file_creation_time": 1759433675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 30138 microseconds, and 16753 cpu microseconds.
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.838357) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3410038 bytes OK
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.838436) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.841727) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.841750) EVENT_LOG_v1 {"time_micros": 1759433675841743, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.841773) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3466535, prev total WAL file size 3466535, number of live WAL files 2.
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.843768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3330KB)], [38(7552KB)]
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675843890, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11143459, "oldest_snapshot_seqno": -1}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4379 keys, 9361250 bytes, temperature: kUnknown
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675916488, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9361250, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9328151, "index_size": 21021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 105791, "raw_average_key_size": 24, "raw_value_size": 9245125, "raw_average_value_size": 2111, "num_data_blocks": 894, "num_entries": 4379, "num_filter_entries": 4379, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.916808) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9361250 bytes
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.920046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4893, records dropped: 514 output_compression: NoCompression
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.920109) EVENT_LOG_v1 {"time_micros": 1759433675920080, "job": 18, "event": "compaction_finished", "compaction_time_micros": 72684, "compaction_time_cpu_micros": 41979, "output_level": 6, "num_output_files": 1, "total_output_size": 9361250, "num_input_records": 4893, "num_output_records": 4379, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675921570, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433675924347, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.843231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.924616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.924624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.924627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.924630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:35.924633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
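(Annotation: the rocksdb EVENT_LOG_v1 records in the flush/compaction block above carry their payload as line-embedded JSON, so flush and compaction statistics can be extracted directly from a journal dump. A small extraction sketch; the regex assumes the payload runs to end of line, as in these entries:)

```python
# Pull the JSON payload out of rocksdb EVENT_LOG_v1 journal lines.
import json
import re

MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")


def parse_event(line: str):
    m = MARKER.search(line)
    return json.loads(m.group(1)) if m else None


line = 'ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"job": 18, "event": "compaction_finished"}'
event = parse_event(line)
assert event["event"] == "compaction_finished"
```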
Oct 02 19:34:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:34:36 compute-0 sudo[379473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgvljlclhnjeuvlufqwfhiznabecqoyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433676.1115026-1053-147320864318069/AnsiballZ_podman_container_info.py'
Oct 02 19:34:36 compute-0 sudo[379473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.598 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.631 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.632 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.633 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.633 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:34:36 compute-0 nova_compute[355794]: 2025-10-02 19:34:36.634 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:36 compute-0 python3.9[379475]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 02 19:34:36 compute-0 sudo[379473]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:34:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133027906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.153 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
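(Annotation: the paired lines above — nova-compute shelling out to `ceph df` and ceph-mon dispatching the df command from client.openstack — are the resource tracker's storage probe. A sketch of the same call and of reading the cluster-wide stats, assuming the client.openstack keyring and /etc/ceph/ceph.conf exactly as logged:)

```python
# What the resource tracker is doing above: run `ceph df --format=json`
# and read pool/cluster stats from the JSON it returns.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    text=True,
)
stats = json.loads(out)
print(stats["stats"]["total_avail_bytes"])  # free capacity, in bytes
```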
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.609 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.610 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.610 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.610 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.699 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.699 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:34:37 compute-0 nova_compute[355794]: 2025-10-02 19:34:37.723 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:37 compute-0 sudo[379661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuyimxcgusvkbsxuqlwmukpaclhksnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433677.2885714-1061-51314257260971/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:37 compute-0 sudo[379661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:37 compute-0 ceph-mon[191910]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2133027906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:34:38 compute-0 python3.9[379663]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:38 compute-0 systemd[1]: Started libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope.
Oct 02 19:34:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:34:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839539441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:34:38 compute-0 podman[379683]: 2025-10-02 19:34:38.229507661 +0000 UTC m=+0.170497147 container exec 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:34:38 compute-0 nova_compute[355794]: 2025-10-02 19:34:38.253 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:34:38 compute-0 nova_compute[355794]: 2025-10-02 19:34:38.262 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:34:38 compute-0 podman[379683]: 2025-10-02 19:34:38.267022619 +0000 UTC m=+0.208012045 container exec_died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:34:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:38 compute-0 nova_compute[355794]: 2025-10-02 19:34:38.278 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:34:38 compute-0 nova_compute[355794]: 2025-10-02 19:34:38.280 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:34:38 compute-0 nova_compute[355794]: 2025-10-02 19:34:38.280 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:34:38 compute-0 sudo[379661]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:38 compute-0 systemd[1]: libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope: Deactivated successfully.
Oct 02 19:34:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/839539441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:34:38 compute-0 sudo[379715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:38 compute-0 sudo[379715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:38 compute-0 sudo[379715]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:38 compute-0 sudo[379740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:34:38 compute-0 sudo[379740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:38 compute-0 sudo[379740]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:39 compute-0 sudo[379765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:39 compute-0 sudo[379765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:39 compute-0 sudo[379765]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:39 compute-0 sudo[379803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:34:39 compute-0 sudo[379803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:39 compute-0 ceph-mon[191910]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.873203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679873259, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 250, "total_data_size": 71725, "memory_usage": 78096, "flush_reason": "Manual Compaction"}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679877252, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 70979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18352, "largest_seqno": 18642, "table_properties": {"data_size": 69053, "index_size": 154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5241, "raw_average_key_size": 19, "raw_value_size": 65252, "raw_average_value_size": 239, "num_data_blocks": 7, "num_entries": 272, "num_filter_entries": 272, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433676, "oldest_key_time": 1759433676, "file_creation_time": 1759433679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4091 microseconds, and 1474 cpu microseconds.
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.877298) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 70979 bytes OK
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.877317) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.879449) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.879472) EVENT_LOG_v1 {"time_micros": 1759433679879465, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.879493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 69582, prev total WAL file size 69582, number of live WAL files 2.
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.880176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(69KB)], [41(9141KB)]
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679880260, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9432229, "oldest_snapshot_seqno": -1}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4144 keys, 6144840 bytes, temperature: kUnknown
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679933246, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6144840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6118016, "index_size": 15387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101347, "raw_average_key_size": 24, "raw_value_size": 6043747, "raw_average_value_size": 1458, "num_data_blocks": 649, "num_entries": 4144, "num_filter_entries": 4144, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.933562) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6144840 bytes
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.936027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.8 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(219.5) write-amplify(86.6) OK, records in: 4651, records dropped: 507 output_compression: NoCompression
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.936078) EVENT_LOG_v1 {"time_micros": 1759433679936064, "job": 20, "event": "compaction_finished", "compaction_time_micros": 53054, "compaction_time_cpu_micros": 35968, "output_level": 6, "num_output_files": 1, "total_output_size": 6144840, "num_input_records": 4651, "num_output_records": 4144, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679936312, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433679939591, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.879940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.939868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.939877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.939880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.939883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:34:39.939887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:34:39 compute-0 sudo[380032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpzwwqvotfyxubpcynpvcnpcppoqffns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433679.4069517-1069-43384846450854/AnsiballZ_podman_container_exec.py'
Oct 02 19:34:39 compute-0 sudo[380032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:40 compute-0 podman[380030]: 2025-10-02 19:34:40.052976826 +0000 UTC m=+0.104970094 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:34:40 compute-0 python3.9[380042]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:34:40 compute-0 podman[380030]: 2025-10-02 19:34:40.199420951 +0000 UTC m=+0.251414229 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:34:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:40 compute-0 systemd[1]: Started libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope.
Oct 02 19:34:40 compute-0 podman[380059]: 2025-10-02 19:34:40.345779574 +0000 UTC m=+0.118065091 container exec 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 19:34:40 compute-0 podman[380059]: 2025-10-02 19:34:40.392343073 +0000 UTC m=+0.164628610 container exec_died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:34:40 compute-0 systemd[1]: libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope: Deactivated successfully.
Oct 02 19:34:40 compute-0 sudo[380032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:40 compute-0 podman[380193]: 2025-10-02 19:34:40.977992921 +0000 UTC m=+0.107761377 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, release-0.7.12=, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:34:40 compute-0 podman[380189]: 2025-10-02 19:34:40.978425993 +0000 UTC m=+0.127407980 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:34:40 compute-0 podman[380191]: 2025-10-02 19:34:40.978184996 +0000 UTC m=+0.106827232 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:34:40 compute-0 podman[380190]: 2025-10-02 19:34:40.999616246 +0000 UTC m=+0.134415786 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 02 19:34:41 compute-0 podman[380192]: 2025-10-02 19:34:41.036751194 +0000 UTC m=+0.166526510 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:34:41 compute-0 sudo[379803]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:34:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:34:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:41 compute-0 sudo[380332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:41 compute-0 sudo[380332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:41 compute-0 sudo[380332]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:41 compute-0 sudo[380380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:34:41 compute-0 sudo[380380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:41 compute-0 sudo[380380]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:41 compute-0 sudo[380434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:41 compute-0 sudo[380434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:41 compute-0 sudo[380434]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:41 compute-0 sudo[380482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:34:41 compute-0 sudo[380482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:41 compute-0 sudo[380557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwbanxzfvnemipwzowkxfpuuntzrhke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433681.3946903-1077-45167811972706/AnsiballZ_file.py'
Oct 02 19:34:41 compute-0 sudo[380557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:41 compute-0 ceph-mon[191910]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:42 compute-0 python3.9[380559]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:42 compute-0 sudo[380557]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:42 compute-0 sudo[380482]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 48f17e61-c061-4816-a935-c64deed4c4a3 does not exist
Oct 02 19:34:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 29c7fcd5-1346-47d7-9713-077bf103aaf3 does not exist
Oct 02 19:34:42 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2d2683e9-524d-4c88-8bb6-9368e04ffe75 does not exist
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:34:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:34:42 compute-0 sudo[380617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:42 compute-0 sudo[380617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:42 compute-0 sudo[380617]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:42 compute-0 sudo[380670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:34:42 compute-0 sudo[380670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:42 compute-0 sudo[380670]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:42 compute-0 sudo[380718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:42 compute-0 sudo[380718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:42 compute-0 sudo[380718]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:42 compute-0 sudo[380766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:34:42 compute-0 sudo[380766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:34:42 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:34:43 compute-0 sudo[380848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auczceqwkcgjwdcsuaegjnnlcdaaslbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433682.4681864-1086-211128981034498/AnsiballZ_file.py'
Oct 02 19:34:43 compute-0 sudo[380848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:43 compute-0 python3.9[380856]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:43 compute-0 sudo[380848]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.436348794 +0000 UTC m=+0.083554624 container create c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.406466959 +0000 UTC m=+0.053672859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:43 compute-0 systemd[1]: Started libpod-conmon-c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e.scope.
Oct 02 19:34:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.569921017 +0000 UTC m=+0.217126917 container init c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.589027265 +0000 UTC m=+0.236233115 container start c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.596187706 +0000 UTC m=+0.243393626 container attach c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:34:43 compute-0 nostalgic_montalcini[380916]: 167 167
Oct 02 19:34:43 compute-0 systemd[1]: libpod-c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e.scope: Deactivated successfully.
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.602081252 +0000 UTC m=+0.249287172 container died c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-25cf0fc767277dc3aec05a22991be33cd1d3449c9e7df58bb34af444a77c1398-merged.mount: Deactivated successfully.
Oct 02 19:34:43 compute-0 podman[380882]: 2025-10-02 19:34:43.693931046 +0000 UTC m=+0.341136876 container remove c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:34:43 compute-0 systemd[1]: libpod-conmon-c512426a27a8b48d7b1977689e0f29a0e046f4102fa05879e33a552f0efff86e.scope: Deactivated successfully.
Oct 02 19:34:43 compute-0 ceph-mon[191910]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:43 compute-0 podman[380998]: 2025-10-02 19:34:43.937557526 +0000 UTC m=+0.069732296 container create 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:34:44 compute-0 podman[380998]: 2025-10-02 19:34:43.913232759 +0000 UTC m=+0.045407559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:44 compute-0 systemd[1]: Started libpod-conmon-898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e.scope.
Oct 02 19:34:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
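These kernel messages appear whenever podman bind-mounts paths from an XFS filesystem created without the bigtime feature: such filesystems store inode timestamps as signed 32-bit seconds, so they are only valid up to the 0x7fffffff limit the kernel prints. A quick decode of that limit (nothing assumed beyond the hex value taken from the log):

```python
# Decode the 0x7fffffff limit from the XFS warnings above.
from datetime import datetime, timezone

XFS_LEGACY_TIME_MAX = 0x7FFFFFFF  # max signed 32-bit seconds since the epoch
print(datetime.fromtimestamp(XFS_LEGACY_TIME_MAX, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"
```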
Oct 02 19:34:44 compute-0 podman[380998]: 2025-10-02 19:34:44.117000889 +0000 UTC m=+0.249175759 container init 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:34:44 compute-0 podman[380998]: 2025-10-02 19:34:44.12905771 +0000 UTC m=+0.261232510 container start 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 19:34:44 compute-0 podman[380998]: 2025-10-02 19:34:44.13542815 +0000 UTC m=+0.267602960 container attach 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:34:44 compute-0 sudo[381090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhyzebhqzpvwaxpbybxjnietxbadhzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433683.721824-1094-121592888711925/AnsiballZ_stat.py'
Oct 02 19:34:44 compute-0 sudo[381090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:44 compute-0 python3.9[381092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:44 compute-0 sudo[381090]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
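The monitor line above is its periodic cache autotuning; the three printed allocations are carved out of the reported cache_size. A quick arithmetic check that they account for the budget (values copied from the log line):

```python
# Sum the allocations from the _set_new_cache_sizes line above.
cache_size = 1020054731
inc_alloc = full_alloc = 348127232
kv_alloc = 322961408

total = inc_alloc + full_alloc + kv_alloc
print(total, cache_size - total)  # 1019215872 allocated, 838859 bytes slack
```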
Oct 02 19:34:44 compute-0 podman[381142]: 2025-10-02 19:34:44.930049877 +0000 UTC m=+0.090122018 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
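The `container health_status ... health_status=healthy` events are emitted by podman's healthcheck timer running the `healthcheck.test` command recorded in `config_data`. A minimal sketch of querying the same state by hand, assuming only that podman is on PATH and using the container name from the log line:

```python
# Illustrative only: read the health state behind the health_status events.
import json
import subprocess

CONTAINER = "openstack_network_exporter"  # name taken from the log line

out = subprocess.run(
    ["podman", "inspect", CONTAINER],
    check=True, capture_output=True, text=True,
).stdout
health = json.loads(out)[0]["State"]["Health"]
print(health["Status"], health["FailingStreak"])
```

Running `podman healthcheck run openstack_network_exporter` would trigger the same check on demand.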
Oct 02 19:34:44 compute-0 sudo[381190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetzsqotfztgrwswtzkzxxiebfffhxhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433683.721824-1094-121592888711925/AnsiballZ_file.py'
Oct 02 19:34:44 compute-0 sudo[381190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:45 compute-0 python3.9[381197]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/telemetry.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/telemetry.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:45 compute-0 sudo[381190]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:45 compute-0 intelligent_hermann[381041]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:34:45 compute-0 intelligent_hermann[381041]: --> relative data size: 1.0
Oct 02 19:34:45 compute-0 intelligent_hermann[381041]: --> All data devices are unavailable
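This `intelligent_hermann` run is cephadm invoking ceph-volume to see whether any devices can host new OSDs; "0 physical, 3 LVM" plus "All data devices are unavailable" indicates the three LVs are already consumed by the existing OSDs, so nothing new is deployed and the container exits. A hedged sketch of inspecting the same availability judgement, assuming ceph-volume is reachable on the host (for example via `cephadm shell`):

```python
# Sketch: list which devices ceph-volume considers available, the same
# judgement behind "All data devices are unavailable" above.
import json
import subprocess

raw = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for dev in json.loads(raw):
    if dev.get("available"):
        status = "available"
    else:
        status = ", ".join(dev.get("rejected_reasons", [])) or "unavailable"
    print(f"{dev['path']}: {status}")
```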
Oct 02 19:34:45 compute-0 systemd[1]: libpod-898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e.scope: Deactivated successfully.
Oct 02 19:34:45 compute-0 systemd[1]: libpod-898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e.scope: Consumed 1.151s CPU time.
Oct 02 19:34:45 compute-0 conmon[381041]: conmon 898b5a787d56c1c18c82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e.scope/container/memory.events
Oct 02 19:34:45 compute-0 podman[380998]: 2025-10-02 19:34:45.347653104 +0000 UTC m=+1.479827864 container died 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-488e1df67a18df226539603141ddc79f9806bf1b5f7901f093307ed4b601e278-merged.mount: Deactivated successfully.
Oct 02 19:34:45 compute-0 podman[380998]: 2025-10-02 19:34:45.431817913 +0000 UTC m=+1.563992673 container remove 898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:34:45 compute-0 systemd[1]: libpod-conmon-898b5a787d56c1c18c821f8728afe7192ffd79bf5190e655898867ebceab8e8e.scope: Deactivated successfully.
Oct 02 19:34:45 compute-0 sudo[380766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:45 compute-0 sudo[381274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:45 compute-0 sudo[381274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:45 compute-0 sudo[381274]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:45 compute-0 sudo[381326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:34:45 compute-0 sudo[381326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:45 compute-0 sudo[381326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:45 compute-0 sudo[381376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:45 compute-0 sudo[381376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:45 compute-0 sudo[381376]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:45 compute-0 ceph-mon[191910]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:46 compute-0 sudo[381424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:34:46 compute-0 sudo[381424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:46 compute-0 sudo[381476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvhvxivnrvsxhxnqfmosoqgskpkyixri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433685.5192907-1107-230507122535034/AnsiballZ_file.py'
Oct 02 19:34:46 compute-0 sudo[381476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:46 compute-0 python3.9[381478]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:46 compute-0 sudo[381476]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.589999111 +0000 UTC m=+0.071859912 container create 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.550085209 +0000 UTC m=+0.031946090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:46 compute-0 systemd[1]: Started libpod-conmon-8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef.scope.
Oct 02 19:34:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.732117042 +0000 UTC m=+0.213977883 container init 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.751153918 +0000 UTC m=+0.233014739 container start 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:34:46 compute-0 condescending_chatelet[381579]: 167 167
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.76026175 +0000 UTC m=+0.242122581 container attach 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:34:46 compute-0 systemd[1]: libpod-8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef.scope: Deactivated successfully.
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.762332745 +0000 UTC m=+0.244193546 container died 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3c86eeba38354ce47f0a9f7f61128dd07cfcc564961fe03d9b2174d2afb59c6-merged.mount: Deactivated successfully.
Oct 02 19:34:46 compute-0 podman[381540]: 2025-10-02 19:34:46.834521756 +0000 UTC m=+0.316382587 container remove 8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:34:46 compute-0 systemd[1]: libpod-conmon-8146968d978b265e33eee5073aa7987ca57fba1c0b8545c4fbe1e973391b55ef.scope: Deactivated successfully.
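The short-lived `condescending_chatelet` container printed only "167 167", which matches the ceph user and group ids inside the Ceph image; cephadm launches probes like this to learn which uid/gid daemon data should be owned by. The exact command is not visible in the log, but a stat-based probe of this shape would reproduce the output (the path and invocation are assumptions; only the image digest is copied from the log):

```python
# Speculative sketch: reproduce a "167 167" uid/gid probe.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

print(subprocess.run(
    # stat'ing /var/lib/ceph inside the image is an assumption, not taken
    # from the log; adjust to whatever path the probe actually checks.
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.strip())  # expected: "167 167"
```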
Oct 02 19:34:46 compute-0 podman[381600]: 2025-10-02 19:34:46.88918127 +0000 UTC m=+0.084658183 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:34:47 compute-0 podman[381679]: 2025-10-02 19:34:47.067130053 +0000 UTC m=+0.072365666 container create 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:34:47 compute-0 podman[381679]: 2025-10-02 19:34:47.040086934 +0000 UTC m=+0.045322577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:47 compute-0 systemd[1]: Started libpod-conmon-5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50.scope.
Oct 02 19:34:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecee6d8d938e5f08954a1e75d24ef0b49643ae9fae13c40cf5b02dd1840f1aa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecee6d8d938e5f08954a1e75d24ef0b49643ae9fae13c40cf5b02dd1840f1aa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecee6d8d938e5f08954a1e75d24ef0b49643ae9fae13c40cf5b02dd1840f1aa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecee6d8d938e5f08954a1e75d24ef0b49643ae9fae13c40cf5b02dd1840f1aa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:47 compute-0 podman[381679]: 2025-10-02 19:34:47.213006003 +0000 UTC m=+0.218241686 container init 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:34:47 compute-0 podman[381679]: 2025-10-02 19:34:47.240127075 +0000 UTC m=+0.245362718 container start 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:34:47 compute-0 podman[381679]: 2025-10-02 19:34:47.247912842 +0000 UTC m=+0.253148475 container attach 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:34:47 compute-0 sudo[381751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybaizrgyjnulxnijwqwyfwasambxhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433686.698855-1115-278952150726912/AnsiballZ_stat.py'
Oct 02 19:34:47 compute-0 sudo[381751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:47 compute-0 python3.9[381753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:47 compute-0 sudo[381751]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:47 compute-0 ceph-mon[191910]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]: {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     "0": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "devices": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "/dev/loop3"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             ],
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_name": "ceph_lv0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_size": "21470642176",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "name": "ceph_lv0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "tags": {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_name": "ceph",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.crush_device_class": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.encrypted": "0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_id": "0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.vdo": "0"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             },
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "vg_name": "ceph_vg0"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         }
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     ],
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     "1": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "devices": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "/dev/loop4"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             ],
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_name": "ceph_lv1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_size": "21470642176",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "name": "ceph_lv1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "tags": {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_name": "ceph",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.crush_device_class": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.encrypted": "0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_id": "1",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.vdo": "0"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             },
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "vg_name": "ceph_vg1"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         }
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     ],
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     "2": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "devices": [
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "/dev/loop5"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             ],
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_name": "ceph_lv2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_size": "21470642176",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "name": "ceph_lv2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "tags": {
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.cluster_name": "ceph",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.crush_device_class": "",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.encrypted": "0",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osd_id": "2",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:                 "ceph.vdo": "0"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             },
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "type": "block",
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:             "vg_name": "ceph_vg2"
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:         }
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]:     ]
Oct 02 19:34:48 compute-0 upbeat_lalande[381720]: }
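The JSON above is the result of the `cephadm ... ceph-volume ... -- lvm list --format json` call issued at 19:34:46: a map from OSD id to the logical volumes backing it, with the ceph.* metadata duplicated between the flat `lv_tags` string and the parsed `tags` object. A small sketch of consuming it, assuming the payload has been saved to a file (the filename is hypothetical):

```python
# Map each OSD id to its LV and backing device(s) from the payload above.
import json

with open("lvm_list.json") as fh:  # hypothetical dump of the JSON above
    lvm = json.load(fh)

for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']})")
# e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3
#      (osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48)
```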
Oct 02 19:34:48 compute-0 sudo[381833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugwmumarowjcgrqmsmcfqwhojamqstrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433686.698855-1115-278952150726912/AnsiballZ_file.py'
Oct 02 19:34:48 compute-0 systemd[1]: libpod-5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50.scope: Deactivated successfully.
Oct 02 19:34:48 compute-0 podman[381679]: 2025-10-02 19:34:48.083523029 +0000 UTC m=+1.088758672 container died 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:34:48 compute-0 sudo[381833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecee6d8d938e5f08954a1e75d24ef0b49643ae9fae13c40cf5b02dd1840f1aa5-merged.mount: Deactivated successfully.
Oct 02 19:34:48 compute-0 podman[381679]: 2025-10-02 19:34:48.190971328 +0000 UTC m=+1.196206941 container remove 5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:34:48 compute-0 systemd[1]: libpod-conmon-5960dfe1b8a3adcbd7048fe4bf780f140082d978915f4a5ad401c882dd10dd50.scope: Deactivated successfully.
Oct 02 19:34:48 compute-0 sudo[381424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:48 compute-0 sudo[381849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:48 compute-0 sudo[381849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:48 compute-0 sudo[381849]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:48 compute-0 python3.9[381836]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:48 compute-0 sudo[381833]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:48 compute-0 sudo[381874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:34:48 compute-0 sudo[381874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:48 compute-0 sudo[381874]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:48 compute-0 sudo[381923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:48 compute-0 sudo[381923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:48 compute-0 sudo[381923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:48 compute-0 sudo[381969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:34:48 compute-0 sudo[381969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:49 compute-0 sudo[382130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrghewjtxnnyyceajpqpgchyyzpowlkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433688.693401-1127-104068717104121/AnsiballZ_stat.py'
Oct 02 19:34:49 compute-0 sudo[382130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.34634272 +0000 UTC m=+0.059844953 container create a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:34:49 compute-0 systemd[1]: Started libpod-conmon-a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b.scope.
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.323488472 +0000 UTC m=+0.036990745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:49 compute-0 python3.9[382137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.488118741 +0000 UTC m=+0.201621054 container init a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.506277104 +0000 UTC m=+0.219779367 container start a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.512703685 +0000 UTC m=+0.226206018 container attach a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:34:49 compute-0 keen_hellman[382154]: 167 167
Oct 02 19:34:49 compute-0 systemd[1]: libpod-a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b.scope: Deactivated successfully.
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.521620582 +0000 UTC m=+0.235122825 container died a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:34:49 compute-0 sudo[382130]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6925b095a25c31b5ac45350618486679535f3068e79732ec47522d11b8c6503-merged.mount: Deactivated successfully.
Oct 02 19:34:49 compute-0 podman[382138]: 2025-10-02 19:34:49.603207523 +0000 UTC m=+0.316709786 container remove a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:34:49 compute-0 systemd[1]: libpod-conmon-a063829a38a0cfcd7fbc31e59c401284cdda661961033f862c791443aef2ea4b.scope: Deactivated successfully.
Oct 02 19:34:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:49 compute-0 podman[382224]: 2025-10-02 19:34:49.884246368 +0000 UTC m=+0.084786286 container create 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:34:49 compute-0 podman[382224]: 2025-10-02 19:34:49.848075056 +0000 UTC m=+0.048615044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:34:49 compute-0 ceph-mon[191910]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:49 compute-0 systemd[1]: Started libpod-conmon-609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330.scope.
Oct 02 19:34:49 compute-0 sudo[382267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnzcexismwztzvrdhqtqkkjvuskoyup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433688.693401-1127-104068717104121/AnsiballZ_file.py'
Oct 02 19:34:49 compute-0 sudo[382267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255968e6e1c3d052aeb0e58d1f02ec3f30b4fa4c0e41895ff6d84cb3a72dbd35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255968e6e1c3d052aeb0e58d1f02ec3f30b4fa4c0e41895ff6d84cb3a72dbd35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255968e6e1c3d052aeb0e58d1f02ec3f30b4fa4c0e41895ff6d84cb3a72dbd35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255968e6e1c3d052aeb0e58d1f02ec3f30b4fa4c0e41895ff6d84cb3a72dbd35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:34:50 compute-0 podman[382224]: 2025-10-02 19:34:50.056269584 +0000 UTC m=+0.256809522 container init 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:34:50 compute-0 podman[382224]: 2025-10-02 19:34:50.075349252 +0000 UTC m=+0.275889140 container start 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:34:50 compute-0 podman[382224]: 2025-10-02 19:34:50.079877233 +0000 UTC m=+0.280417161 container attach 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:34:50 compute-0 python3.9[382271]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i5oo6ocx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:50 compute-0 sudo[382267]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:51 compute-0 ceph-mon[191910]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]: {
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_id": 1,
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "type": "bluestore"
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     },
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_id": 2,
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "type": "bluestore"
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     },
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_id": 0,
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:         "type": "bluestore"
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]:     }
Oct 02 19:34:51 compute-0 beautiful_lewin[382272]: }
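The JSON blob emitted above by the one-shot beautiful_lewin container appears to be ceph-volume raw/lvm list-style inventory: a map keyed by OSD UUID, each entry carrying ceph_fsid, device, osd_id, osd_uuid, and type. A minimal sketch that summarizes such a blob, assuming it has been captured to a file (the file name is hypothetical; the keys are taken from the log lines):

```python
import json

# Summarize a ceph-volume style OSD inventory blob like the one logged above.
# "osd_inventory.json" is a hypothetical capture of that JSON output.
with open("osd_inventory.json") as f:
    inventory = json.load(f)

for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{osd['osd_id']}: {osd['device']} "
          f"(type={osd['type']}, fsid={osd['ceph_fsid']}, uuid={osd_uuid})")
```

Run against the three entries above this prints osd.0 through osd.2 on /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, all in cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9.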
Oct 02 19:34:51 compute-0 podman[382224]: 2025-10-02 19:34:51.191851381 +0000 UTC m=+1.392391309 container died 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:34:51 compute-0 systemd[1]: libpod-609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330.scope: Deactivated successfully.
Oct 02 19:34:51 compute-0 systemd[1]: libpod-609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330.scope: Consumed 1.113s CPU time.
Oct 02 19:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-255968e6e1c3d052aeb0e58d1f02ec3f30b4fa4c0e41895ff6d84cb3a72dbd35-merged.mount: Deactivated successfully.
Oct 02 19:34:51 compute-0 podman[382224]: 2025-10-02 19:34:51.311239957 +0000 UTC m=+1.511779885 container remove 609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lewin, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:34:51 compute-0 systemd[1]: libpod-conmon-609ca3f471321e938721426dbeaf68fc0d75387b371a43053d56f80d59530330.scope: Deactivated successfully.
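The init, start, attach, died, and remove events above trace one short-lived cephadm helper container from creation to cleanup in about a second. A hedged sketch of watching the same lifecycle live with `podman events --format json`; the field names (Type, Status, ID, Name, Time) are assumptions about podman's JSON event stream and may vary by version:

```python
import json
import subprocess

# Stream podman events and print container lifecycle transitions such as the
# init -> start -> attach -> died -> remove sequence logged above.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    if ev.get("Type") != "container":
        continue
    # "Status" carries the lifecycle step (init, start, died, remove, ...).
    print(ev.get("Time", ""), ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name", ""))
```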
Oct 02 19:34:51 compute-0 sudo[381969]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:34:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:34:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fbb3bba0-63ca-4f6a-8bc3-ed373759272e does not exist
Oct 02 19:34:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 82fb014a-b698-4552-a768-16f236da7123 does not exist
Oct 02 19:34:51 compute-0 sudo[382417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:34:51 compute-0 sudo[382417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:51 compute-0 sudo[382417]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:51 compute-0 sudo[382466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:34:51 compute-0 sudo[382466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:34:51 compute-0 sudo[382466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:51 compute-0 sudo[382517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmajqtjldevljkrvirqimqajmbchvcfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433691.1334827-1139-234838767704159/AnsiballZ_stat.py'
Oct 02 19:34:51 compute-0 sudo[382517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:51 compute-0 python3.9[382519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:52 compute-0 sudo[382517]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:52 compute-0 sudo[382595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxoolbymcqgiqmjpylxwwhmbfeznmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433691.1334827-1139-234838767704159/AnsiballZ_file.py'
Oct 02 19:34:52 compute-0 sudo[382595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:34:52 compute-0 python3.9[382597]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:52 compute-0 sudo[382595]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:53 compute-0 ceph-mon[191910]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:53 compute-0 sudo[382747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkfkfrlnuwujbcwwqunzqpqogyffkmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433693.0555787-1152-193668694114625/AnsiballZ_command.py'
Oct 02 19:34:53 compute-0 sudo[382747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:53 compute-0 python3.9[382749]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:34:53 compute-0 sudo[382747]: pam_unix(sudo:session): session closed for user root
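The command task above ran `nft -j list ruleset`, whose JSON form is a top-level {"nftables": [...]} array of metainfo, table, chain, and rule objects. A minimal sketch, assuming sufficient privilege to read the ruleset, that re-runs the same command and counts rules per chain:

```python
import json
import subprocess
from collections import Counter

# Run the same command the play invoked and summarize the ruleset.
out = subprocess.run(
    ["nft", "-j", "list", "ruleset"],
    check=True, capture_output=True, text=True,
).stdout
objects = json.loads(out)["nftables"]

# Each rule object names its family, table, and chain.
rules_per_chain = Counter(
    (o["rule"]["family"], o["rule"]["table"], o["rule"]["chain"])
    for o in objects if "rule" in o
)
for (family, table, chain), n in sorted(rules_per_chain.items()):
    print(f"{family} {table} {chain}: {n} rules")
```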
Oct 02 19:34:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:54 compute-0 sudo[382900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoaruxlqegnhyofembnehqzfwawczddf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433694.1149683-1160-164368572326336/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:34:54 compute-0 sudo[382900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:34:54 compute-0 python3[382902]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:34:55 compute-0 sudo[382900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:55 compute-0 ceph-mon[191910]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:55 compute-0 sudo[383067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqezcfyhqmabelkalscgjarxxaklfql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433695.3107038-1168-146826850799064/AnsiballZ_stat.py'
Oct 02 19:34:55 compute-0 podman[383026]: 2025-10-02 19:34:55.97377052 +0000 UTC m=+0.123948237 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 19:34:55 compute-0 sudo[383067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:56 compute-0 python3.9[383072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:56 compute-0 sudo[383067]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:56 compute-0 sudo[383148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swixccxeifxjubzvktgrpgjbjqgypoja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433695.3107038-1168-146826850799064/AnsiballZ_file.py'
Oct 02 19:34:56 compute-0 sudo[383148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:56 compute-0 python3.9[383150]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:56 compute-0 sudo[383148]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:57 compute-0 ceph-mon[191910]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:57 compute-0 sudo[383300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqttnyjnwkztvpvybjatzuzhlsmxxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433697.145535-1180-10211480917561/AnsiballZ_stat.py'
Oct 02 19:34:57 compute-0 sudo[383300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:57 compute-0 python3.9[383302]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:57 compute-0 sudo[383300]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:58 compute-0 sudo[383378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadobfxpjwokivjybirnwzrdnusxrezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433697.145535-1180-10211480917561/AnsiballZ_file.py'
Oct 02 19:34:58 compute-0 sudo[383378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:58 compute-0 python3.9[383380]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:34:58 compute-0 sudo[383378]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:59 compute-0 ceph-mon[191910]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:34:59 compute-0 sudo[383530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpedjdvaflvogkbbvbrnwawxbenwmnjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433698.8789105-1192-202822363203680/AnsiballZ_stat.py'
Oct 02 19:34:59 compute-0 sudo[383530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:59 compute-0 python3.9[383532]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:34:59 compute-0 sudo[383530]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:59 compute-0 podman[157186]: time="2025-10-02T19:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:34:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:34:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8532 "" "Go-http-client/1.1"
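The two GET lines above are the podman system service answering libpod REST calls (here from a Go client, per the User-Agent). A sketch of issuing the same containers/json query over the API socket; the socket path /run/podman/podman.sock is the one mounted into podman_exporter below, the UnixHTTPConnection helper is ad hoc rather than part of http.client, and the Id/Names/State response fields are assumptions about the libpod list schema:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket (ad hoc helper, not stdlib API)."""
    def __init__(self, socket_path):
        super().__init__("localhost", timeout=5)
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c.get("Id", "")[:12], c.get("Names"), c.get("State"))
```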
Oct 02 19:34:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:00 compute-0 sudo[383608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guylcvwnudrqbofcwaqwmympgawnndpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433698.8789105-1192-202822363203680/AnsiballZ_file.py'
Oct 02 19:35:00 compute-0 sudo[383608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:00 compute-0 python3.9[383610]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:00 compute-0 sudo[383608]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:01 compute-0 podman[383734]: 2025-10-02 19:35:01.271727707 +0000 UTC m=+0.093238911 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
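Per its config_data above, podman_exporter publishes Prometheus metrics on host port 9882; with a TLS web config mounted it may only answer HTTPS, and the scheme is not visible in the log, so the probe below tries both. Certificate verification is disabled purely for illustration:

```python
import ssl
import urllib.request

# Hypothetical probe of the exporter's metrics endpoint. The scheme depends on
# podman_exporter.yaml, which is not shown in the log, so try HTTPS then HTTP.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # illustration only; verify certificates in practice

for url in ("https://localhost:9882/metrics", "http://localhost:9882/metrics"):
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
            body = resp.read().decode()
        print(f"{url}: {len(body.splitlines())} metric lines")
        break
    except OSError as exc:
        print(f"{url}: {exc}")
```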
Oct 02 19:35:01 compute-0 sudo[383783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoepvmownvmonnkgvjrohkimgpkqtytf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433700.7397032-1204-259861164423808/AnsiballZ_stat.py'
Oct 02 19:35:01 compute-0 sudo[383783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:01 compute-0 openstack_network_exporter[372736]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:01 compute-0 openstack_network_exporter[372736]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:01 compute-0 openstack_network_exporter[372736]: ERROR   19:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:35:01 compute-0 openstack_network_exporter[372736]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:35:01 compute-0 openstack_network_exporter[372736]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:35:01 compute-0 ceph-mon[191910]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:01 compute-0 python3.9[383785]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:01 compute-0 sudo[383783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:02 compute-0 sudo[383861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsprqmeyehaoeacqeumzrblfwroerfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433700.7397032-1204-259861164423808/AnsiballZ_file.py'
Oct 02 19:35:02 compute-0 sudo[383861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:02 compute-0 python3.9[383863]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:02 compute-0 sudo[383861]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:03 compute-0 ceph-mon[191910]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:35:03
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:03 compute-0 sudo[384013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwkvgvjcakwuwlyytbnttlrdothyetw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433703.165235-1216-71863279695319/AnsiballZ_stat.py'
Oct 02 19:35:03 compute-0 sudo[384013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:35:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:35:03 compute-0 python3.9[384015]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:04 compute-0 sudo[384013]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.291 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.capacity': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:35:04.323 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
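[editor's note] Taken together, the ceilometer_agent_compute DEBUG lines above trace one complete polling cycle: every pollster is registered against the shared ThreadPoolExecutor, the local_instances discovery returns an empty list, each meter is skipped for lack of resources, and the cycle closes with one "Finished processing" line per pollster. The following is a schematic of that flow only, with invented names, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover):
        # One discovery pass per cycle feeds every pollster (the log's
        # shared "discovery cache [{'local_instances': []}]").
        resources = discover("local_instances")
        with ThreadPoolExecutor() as executor:
            futures = {}
            for pollster in pollsters:
                if resources:
                    futures[pollster] = executor.submit(pollster.get_samples, resources)
                else:
                    print(f"Skip pollster {pollster.name}, no resources found this cycle")
            for pollster in pollsters:
                if pollster in futures:
                    futures[pollster].result()  # surface any pollster exception
                print(f"Finished processing pollster [{pollster.name}].")

    class DemoPollster:
        name = "cpu"
        def get_samples(self, resources):
            return []

    # With an empty discovery result this prints the same skip/finish
    # pairing seen in the journal above.
    run_polling_cycle([DemoPollster()], lambda source: [])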
Oct 02 19:35:04 compute-0 sudo[384106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxcwtdrrxlcxjsqpdizofeoqnctwtsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433703.165235-1216-71863279695319/AnsiballZ_file.py'
Oct 02 19:35:04 compute-0 sudo[384106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:04 compute-0 podman[384066]: 2025-10-02 19:35:04.503941133 +0000 UTC m=+0.122885199 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:35:04 compute-0 python3.9[384110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:04 compute-0 sudo[384106]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:05 compute-0 ceph-mon[191910]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:06 compute-0 sudo[384263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttlsbmqhhbrnehfziwmzdpokhaygexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433705.783599-1229-212582451983053/AnsiballZ_command.py'
Oct 02 19:35:06 compute-0 sudo[384263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:06 compute-0 python3.9[384265]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:35:06 compute-0 sudo[384263]: pam_unix(sudo:session): session closed for user root
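[editor's note] The zuul task above concatenates the edpm nftables fragments and dry-run checks them with "nft -c -f -" before anything is committed; the actual apply follows at 19:35:08 with "nft -f /etc/nftables/edpm-chains.nft". A minimal Python sketch of the same validation step, assuming only the file list and flags shown in the logged command:

    import subprocess
    from pathlib import Path

    # Fragment order copied from the logged pipeline:
    # chains, flushes, rules, update-jumps, jumps.
    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = b"".join(Path(f).read_bytes() for f in FILES)
    # -c: parse and validate only, nothing is committed; -f -: read from stdin.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, check=True)

A non-zero exit fails the play here, before the ruleset ever reaches the kernel.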
Oct 02 19:35:07 compute-0 ceph-mon[191910]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:07 compute-0 sudo[384418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekbrsigrkoldsudidclfowyhhfchuiwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433706.8178997-1237-88886593536395/AnsiballZ_blockinfile.py'
Oct 02 19:35:07 compute-0 sudo[384418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:07 compute-0 python3.9[384420]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:07 compute-0 sudo[384418]: pam_unix(sudo:session): session closed for user root
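[editor's note] Given the logged blockinfile parameters (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END, validate="nft -c -f %s"), the task leaves /etc/sysconfig/nftables.conf carrying a block of this shape:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

so the edpm chains, rules, and jumps are reloaded along with the nftables service at boot.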
Oct 02 19:35:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:08 compute-0 sudo[384570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkysqarlyurtglvudmgnphlovmtmckpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433708.1476648-1246-216679083010780/AnsiballZ_command.py'
Oct 02 19:35:08 compute-0 sudo[384570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:08 compute-0 python3.9[384572]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:35:08 compute-0 sudo[384570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:09 compute-0 ceph-mon[191910]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:09 compute-0 sudo[384723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfjltnzapaihqqxygvbmzujbinbmsey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433709.1881056-1254-261839626760137/AnsiballZ_stat.py'
Oct 02 19:35:09 compute-0 sudo[384723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:09 compute-0 python3.9[384725]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:35:09 compute-0 sudo[384723]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:10 compute-0 sudo[384875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnymfmrtiqhuaprrcidbyaecnyulmniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433710.2338586-1263-78743052215201/AnsiballZ_file.py'
Oct 02 19:35:10 compute-0 sudo[384875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:10 compute-0 python3.9[384877]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:10 compute-0 sudo[384875]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:11 compute-0 sshd-session[356313]: Connection closed by 192.168.122.30 port 47316
Oct 02 19:35:11 compute-0 sshd-session[356310]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:35:11 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct 02 19:35:11 compute-0 systemd[1]: session-59.scope: Consumed 2min 45.017s CPU time.
Oct 02 19:35:11 compute-0 systemd-logind[793]: Session 59 logged out. Waiting for processes to exit.
Oct 02 19:35:11 compute-0 ceph-mon[191910]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:11 compute-0 systemd-logind[793]: Removed session 59.
Oct 02 19:35:11 compute-0 podman[384902]: 2025-10-02 19:35:11.658596028 +0000 UTC m=+0.124706499 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:35:11 compute-0 podman[384903]: 2025-10-02 19:35:11.674315116 +0000 UTC m=+0.124543954 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:35:11 compute-0 podman[384912]: 2025-10-02 19:35:11.682587546 +0000 UTC m=+0.097976377 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git)
Oct 02 19:35:11 compute-0 podman[384904]: 2025-10-02 19:35:11.697983185 +0000 UTC m=+0.142672996 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 19:35:11 compute-0 podman[384910]: 2025-10-02 19:35:11.715532862 +0000 UTC m=+0.145356877 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:35:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
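[editor's note] Each pg_autoscaler "pg target" above is usage_ratio x bias x 300; the factor 300 is inferred from the numbers (it would match, e.g., the default mon_target_pg_per_osd=100 across 3 OSDs backing the 60 GiB, though the log does not state that split). A quick check with values copied verbatim from the lines above:

    import math

    # (usage ratio, bias, logged pg target)
    samples = [
        (7.185749983720779e-06, 1.0, 0.0021557249951162337),   # .mgr
        (5.087256625643029e-07, 4.0, 0.0006104707950771635),   # cephfs.cephfs.meta
        (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),  # .rgw.root
    ]
    for ratio, bias, logged in samples:
        assert math.isclose(ratio * bias * 300, logged, rel_tol=1e-12)

The fractional targets are then quantized to power-of-two pg counts with per-pool floors, which is how the lines above arrive at "quantized to 32 (current 32)" everywhere except cephfs.cephfs.meta's proposed shrink to 16.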
Oct 02 19:35:13 compute-0 ceph-mon[191910]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:15 compute-0 ceph-mon[191910]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:15 compute-0 podman[384998]: 2025-10-02 19:35:15.72189164 +0000 UTC m=+0.140021076 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:35:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:17 compute-0 ceph-mon[191910]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:17 compute-0 podman[385018]: 2025-10-02 19:35:17.655509623 +0000 UTC m=+0.089439009 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:35:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:18 compute-0 sshd-session[385041]: Accepted publickey for zuul from 192.168.122.30 port 36444 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:35:18 compute-0 systemd-logind[793]: New session 60 of user zuul.
Oct 02 19:35:18 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct 02 19:35:18 compute-0 sshd-session[385041]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:35:19 compute-0 ceph-mon[191910]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:19 compute-0 sudo[385195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tosfrbhwaolgdyyypdqcrjhlejsltxhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433718.5313385-24-8190515146841/AnsiballZ_systemd_service.py'
Oct 02 19:35:19 compute-0 sudo[385195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:19 compute-0 python3.9[385197]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:35:19 compute-0 systemd[1]: Reloading.
Oct 02 19:35:20 compute-0 systemd-rc-local-generator[385223]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:35:20 compute-0 systemd-sysv-generator[385226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:35:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:20 compute-0 sudo[385195]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:21 compute-0 ceph-mon[191910]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:21 compute-0 python3.9[385381]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:35:21 compute-0 network[385398]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:35:21 compute-0 network[385399]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:35:21 compute-0 network[385400]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:35:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:23 compute-0 ceph-mon[191910]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:25 compute-0 ceph-mon[191910]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:26 compute-0 podman[385497]: 2025-10-02 19:35:26.163692663 +0000 UTC m=+0.110485230 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd)
Oct 02 19:35:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:27 compute-0 ceph-mon[191910]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:28 compute-0 sudo[385696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwaibjudbtxbuerrxsfzonsvmmdpbtrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433727.5066617-47-100610389277391/AnsiballZ_systemd_service.py'
Oct 02 19:35:28 compute-0 sudo[385696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:28 compute-0 python3.9[385698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:35:28 compute-0 sudo[385696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:29 compute-0 ceph-mon[191910]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:29 compute-0 podman[157186]: time="2025-10-02T19:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:35:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:35:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8530 "" "Go-http-client/1.1"
Oct 02 19:35:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:29 compute-0 sudo[385849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiotikvnumbltgiouayvhshnppeiuots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433729.2320924-57-272057797048911/AnsiballZ_file.py'
Oct 02 19:35:29 compute-0 sudo[385849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:30 compute-0 python3.9[385851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:30 compute-0 sudo[385849]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:31 compute-0 openstack_network_exporter[372736]: ERROR   19:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:35:31 compute-0 openstack_network_exporter[372736]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:31 compute-0 openstack_network_exporter[372736]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:31 compute-0 openstack_network_exporter[372736]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:35:31 compute-0 openstack_network_exporter[372736]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
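[Editor's note: the exporter errors above are appctl-style calls failing before dispatch because no daemon control socket exists (ovn-northd does not run on a compute node). A sketch of the socket probe such a client must do first; the run directories below are common defaults (an assumption), not read from this system.]

```python
# Minimal sketch: look for the <daemon>.<pid>.ctl control sockets an
# appctl-style client needs before it can issue commands.
import glob
import os

RUN_DIRS = ["/var/run/openvswitch", "/var/run/ovn"]  # typical defaults

def find_ctl_sockets(daemon_prefix: str) -> list[str]:
    hits = []
    for d in RUN_DIRS:
        hits.extend(glob.glob(os.path.join(d, f"{daemon_prefix}*.ctl")))
    return hits

for daemon in ("ovsdb-server", "ovn-northd"):
    socks = find_ctl_sockets(daemon)
    print(f"{daemon}: {socks or 'no control socket files found'}")
```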
Oct 02 19:35:31 compute-0 ceph-mon[191910]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:31 compute-0 sudo[386023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhquqljxyichzlclrxjmrtdkcvsdtqvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433731.1630383-65-170168883647347/AnsiballZ_file.py'
Oct 02 19:35:31 compute-0 sudo[386023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:31 compute-0 podman[385976]: 2025-10-02 19:35:31.706442531 +0000 UTC m=+0.140415005 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:35:31 compute-0 python3.9[386028]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:31 compute-0 sudo[386023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:35:32.280 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:35:32.281 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:35:32.281 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:35:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:33 compute-0 sudo[386178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztzsyskunbeqdodhukijhaallsdqwypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433732.3149493-74-96080751902143/AnsiballZ_command.py'
Oct 02 19:35:33 compute-0 sudo[386178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:33 compute-0 python3.9[386180]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:35:33 compute-0 sudo[386178]: pam_unix(sudo:session): session closed for user root
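[Editor's note: the shell fragment run by the command task above disables certmonger only when it is active, then masks it unless a local unit file already exists. The same control flow in Python, as a sketch (unit name and path taken from the log, error handling kept minimal):]

```python
# Sketch of the certmonger disable/mask logic from the logged shell task.
import os
import subprocess

UNIT = "certmonger.service"
LOCAL_UNIT = "/etc/systemd/system/certmonger.service"

def is_active(unit: str) -> bool:
    # `systemctl is-active` exits 0 only when the unit is active.
    return subprocess.run(["systemctl", "is-active", unit],
                          capture_output=True).returncode == 0

if is_active(UNIT):
    subprocess.run(["systemctl", "disable", "--now", UNIT], check=True)
    # Mask only when no local unit file overrides the packaged one,
    # mirroring `test -f ... || systemctl mask ...`.
    if not os.path.isfile(LOCAL_UNIT):
        subprocess.run(["systemctl", "mask", UNIT], check=True)
```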
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:35:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:35:33 compute-0 ceph-mon[191910]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:34 compute-0 python3.9[386332]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:35:34 compute-0 podman[386333]: 2025-10-02 19:35:34.666092549 +0000 UTC m=+0.097390582 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct 02 19:35:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:35 compute-0 sudo[386499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfbubdogtpeecuzwflaucybvdcwkxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433734.8747702-92-82233591695582/AnsiballZ_systemd_service.py'
Oct 02 19:35:35 compute-0 sudo[386499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:35 compute-0 ceph-mon[191910]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:35 compute-0 python3.9[386501]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:35:35 compute-0 systemd[1]: Reloading.
Oct 02 19:35:36 compute-0 systemd-rc-local-generator[386527]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:35:36 compute-0 systemd-sysv-generator[386532]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.256 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.257 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.257 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.258 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:35:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:36 compute-0 sudo[386499]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.598 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.598 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.598 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.616 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:35:36 compute-0 nova_compute[355794]: 2025-10-02 19:35:36.616 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
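[Editor's note: each "Running periodic task ..." DEBUG line above is one registered task being dispatched per cycle. Registration uses the oslo.service decorator pattern; a minimal sketch follows, where DemoManager and _poll_something are placeholders, not nova's actual classes (those live in nova.compute.manager.ComputeManager).]

```python
# Minimal oslo.service periodic-task sketch.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task  # default spacing: run on every pass
    def _poll_something(self, context):
        print("periodic task ran")

mgr = DemoManager()
# A service loop calls this repeatedly; each due task produces a
# "Running periodic task ..." DEBUG line like the ones above.
mgr.run_periodic_tasks(None)
```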
Oct 02 19:35:37 compute-0 sudo[386686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefrjupopqinutefukrztlqqwjnxmzov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433736.6827354-100-16297552044331/AnsiballZ_command.py'
Oct 02 19:35:37 compute-0 sudo[386686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:37 compute-0 python3.9[386688]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:35:37 compute-0 sudo[386686]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.615 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.616 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.616 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.617 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:35:37 compute-0 nova_compute[355794]: 2025-10-02 19:35:37.617 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:37 compute-0 ceph-mon[191910]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:35:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416507050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.174 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
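[Editor's note: the resource audit above shells out to `ceph df --format=json` to size the RBD pool. A sketch of running the same command and reading the cluster-wide totals; it needs a reachable cluster, and the field names (stats.total_bytes, stats.total_avail_bytes) follow common ceph releases, so they are an assumption here.]

```python
# Sketch: run `ceph df --format=json` as the logged audit does and
# report cluster-wide capacity.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f"total: {stats['total_bytes'] / gib:.1f} GiB, "
      f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB")
```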
Oct 02 19:35:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:38 compute-0 sudo[386861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmbwewjwuijalywfdsfrbyjasjdvgdqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433737.778183-109-36681749949896/AnsiballZ_file.py'
Oct 02 19:35:38 compute-0 sudo[386861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:38 compute-0 python3.9[386863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:35:38 compute-0 sudo[386861]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/416507050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.769 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.772 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.773 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.774 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.853 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.854 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:35:38 compute-0 nova_compute[355794]: 2025-10-02 19:35:38.872 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:35:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946445769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:35:39 compute-0 nova_compute[355794]: 2025-10-02 19:35:39.407 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:35:39 compute-0 nova_compute[355794]: 2025-10-02 19:35:39.417 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:35:39 compute-0 nova_compute[355794]: 2025-10-02 19:35:39.432 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:35:39 compute-0 nova_compute[355794]: 2025-10-02 19:35:39.435 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:35:39 compute-0 nova_compute[355794]: 2025-10-02 19:35:39.435 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
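[Editor's note: the inventory reported to placement above fixes the schedulable capacity of this node: placement treats (total - reserved) × allocation_ratio as the limit for each resource class. Checking the logged numbers:]

```python
# Worked example with the inventory values from the log above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable = {capacity}")
# -> MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 53.1
```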
Oct 02 19:35:39 compute-0 ceph-mon[191910]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:39 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3946445769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:35:39 compute-0 python3.9[387035]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:35:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:41 compute-0 python3.9[387187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:41 compute-0 ceph-mon[191910]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:41 compute-0 podman[387237]: 2025-10-02 19:35:41.955311103 +0000 UTC m=+0.119011247 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:35:41 compute-0 podman[387239]: 2025-10-02 19:35:41.971920894 +0000 UTC m=+0.124585505 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 19:35:41 compute-0 podman[387238]: 2025-10-02 19:35:41.973474426 +0000 UTC m=+0.133659337 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:35:41 compute-0 podman[387241]: 2025-10-02 19:35:41.989260736 +0000 UTC m=+0.134672454 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=)
Oct 02 19:35:41 compute-0 podman[387240]: 2025-10-02 19:35:41.998124301 +0000 UTC m=+0.129932047 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:35:42 compute-0 python3.9[387320]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:35:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:43 compute-0 ceph-mon[191910]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:43 compute-0 sudo[387512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riwtwyhugiqpjqdwdwkysthszoijkldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433742.6155415-140-2277132988177/AnsiballZ_getent.py'
Oct 02 19:35:43 compute-0 sudo[387512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:35:44 compute-0 python3.9[387514]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:35:44 compute-0 sudo[387512]: pam_unix(sudo:session): session closed for user root
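[Editor's note: the getent task above resolves the ceilometer service account from the passwd database. In Python the same lookup is a pwd call, which raises KeyError for a missing key, matching the task's fail_key=True behavior:]

```python
# Equivalent of `getent passwd ceilometer` from the task above.
import pwd

try:
    entry = pwd.getpwnam("ceilometer")
    print(entry.pw_name, entry.pw_uid, entry.pw_gid, entry.pw_dir)
except KeyError:
    # fail_key=True makes the Ansible task fail in this case too.
    print("no such user")
```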
Oct 02 19:35:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.893532) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744893695, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 783, "num_deletes": 256, "total_data_size": 1029247, "memory_usage": 1044664, "flush_reason": "Manual Compaction"}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744906972, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1009575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18643, "largest_seqno": 19425, "table_properties": {"data_size": 1005603, "index_size": 1752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8376, "raw_average_key_size": 18, "raw_value_size": 997588, "raw_average_value_size": 2168, "num_data_blocks": 80, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433680, "oldest_key_time": 1759433680, "file_creation_time": 1759433744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13513 microseconds, and 8164 cpu microseconds.
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.907068) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1009575 bytes OK
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.907100) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.910579) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.910605) EVENT_LOG_v1 {"time_micros": 1759433744910596, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.910630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1025286, prev total WAL file size 1025286, number of live WAL files 2.
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.911822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(985KB)], [44(6000KB)]
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744911862, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7154415, "oldest_snapshot_seqno": -1}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4080 keys, 7017950 bytes, temperature: kUnknown
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744958754, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7017950, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6990033, "index_size": 16607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 101091, "raw_average_key_size": 24, "raw_value_size": 6915420, "raw_average_value_size": 1694, "num_data_blocks": 698, "num_entries": 4080, "num_filter_entries": 4080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.959155) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7017950 bytes
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.961838) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 149.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 5.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(14.0) write-amplify(7.0) OK, records in: 4604, records dropped: 524 output_compression: NoCompression
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.961869) EVENT_LOG_v1 {"time_micros": 1759433744961854, "job": 22, "event": "compaction_finished", "compaction_time_micros": 47048, "compaction_time_cpu_micros": 34489, "output_level": 6, "num_output_files": 1, "total_output_size": 7017950, "num_input_records": 4604, "num_output_records": 4080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744962533, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433744965083, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.911613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.965351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.965359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.965362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.965365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:35:44 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:35:44.965368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
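[Editor's note: the compaction summary for JOB 22 above reports write-amplify(7.0) and read-write-amplify(14.0). Both follow from the logged byte counts, normalizing by the bytes read from the non-output level (the 1009575-byte L0 table #46), which is rocksdb's apparent convention in this line:]

```python
# Recomputing JOB 22's amplification factors from the logged sizes.
l0_in = 1009575           # bytes read from level-0 (table #46)
l6_in = 7154415 - l0_in   # bytes read from level-6 (table #44): 6144840
out   = 7017950           # bytes written to level-6 (table #47)

write_amplify = out / l0_in                         # ~6.95 -> "7.0"
read_write_amplify = (l0_in + l6_in + out) / l0_in  # ~14.04 -> "14.0"
print(f"write-amplify={write_amplify:.1f}, "
      f"read-write-amplify={read_write_amplify:.1f}")
```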
Oct 02 19:35:45 compute-0 ceph-mon[191910]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:45 compute-0 python3.9[387665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:46 compute-0 podman[387666]: 2025-10-02 19:35:46.121964324 +0000 UTC m=+0.141696879 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, version=9.6, io.buildah.version=1.33.7)
Oct 02 19:35:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:46 compute-0 python3.9[387762]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:47 compute-0 python3.9[387912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:47 compute-0 ceph-mon[191910]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:48 compute-0 podman[387962]: 2025-10-02 19:35:48.117119866 +0000 UTC m=+0.122392937 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:35:48 compute-0 python3.9[388004]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:49 compute-0 python3.9[388161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:49 compute-0 python3.9[388238]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:49 compute-0 ceph-mon[191910]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:50 compute-0 python3.9[388388]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:35:51 compute-0 sudo[388529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:51 compute-0 sudo[388529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:51 compute-0 sudo[388529]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:51 compute-0 ceph-mon[191910]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:51 compute-0 sudo[388566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:35:51 compute-0 sudo[388566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:51 compute-0 sudo[388566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:51 compute-0 python3.9[388553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:35:52 compute-0 sudo[388592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:52 compute-0 sudo[388592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:52 compute-0 sudo[388592]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:52 compute-0 sudo[388639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:35:52 compute-0 sudo[388639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:52 compute-0 sudo[388639]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:35:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 00dfb4d8-02aa-498b-9165-30b21412e2fc does not exist
Oct 02 19:35:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5fed8973-bd38-456e-9886-487248bc820a does not exist
Oct 02 19:35:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fda2f0c6-e9ea-48ca-b237-98fd4bf7eac2 does not exist
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:35:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:35:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
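
[annotation] The handle_command/audit pairs above are the monitor dispatching JSON commands sent by the mgr's cephadm module over the mon command interface. The same {"prefix": ...} payloads can be issued from any authorized client; a sketch using the python3-rados binding (assumes a readable /etc/ceph/ceph.conf and client.admin keyring, which the mgr itself does not need):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # The same JSON payload the mgr sends in the audit lines above.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(ret, outbuf.decode())
    cluster.shutdown()
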
Oct 02 19:35:53 compute-0 python3.9[388821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:53 compute-0 sudo[388822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:53 compute-0 sudo[388822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:53 compute-0 sudo[388822]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:53 compute-0 sudo[388849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:35:53 compute-0 sudo[388849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:53 compute-0 sudo[388849]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:53 compute-0 sudo[388897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:53 compute-0 sudo[388897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:53 compute-0 sudo[388897]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:53 compute-0 sudo[388946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:35:53 compute-0 sudo[388946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
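
[annotation] The sudo COMMAND above is cephadm's on-host wrapper running ceph-volume inside a one-shot ceph container (the gallant_mirzakhani and nostalgic_lamarr containers that follow). Everything before the standalone "--" is wrapper plumbing (--image, --timeout, --config-json -); everything after it is the actual ceph-volume invocation. An abridged sketch of splitting the two halves:

    import shlex

    # Abridged from the COMMAND= field above; the standalone "--" separates
    # cephadm's wrapper arguments from the argv handed to ceph-volume in-container.
    cmd = ("cephadm --timeout 895 ceph-volume "
           "--fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - "
           "-- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 "
           "/dev/ceph_vg2/ceph_lv2 --yes --no-systemd")
    argv = shlex.split(cmd)
    sep = argv.index("--")
    print("wrapper:    ", argv[:sep])
    print("ceph-volume:", argv[sep + 1:])

Here --no-auto tells batch to take the three pre-built LVs exactly as listed, one OSD each, and --no-systemd skips unit wiring because cephadm deploys and manages the OSD units itself.
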
Oct 02 19:35:53 compute-0 python3.9[388995]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json _original_basename=ceilometer-agent-ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:53 compute-0 ceph-mon[191910]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:35:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:35:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:35:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.087507559 +0000 UTC m=+0.066380157 container create b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:35:54 compute-0 systemd[1]: Started libpod-conmon-b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2.scope.
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.058445986 +0000 UTC m=+0.037318614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:35:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.22062786 +0000 UTC m=+0.199500508 container init b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.23455384 +0000 UTC m=+0.213426438 container start b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.240059697 +0000 UTC m=+0.218932305 container attach b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:35:54 compute-0 gallant_mirzakhani[389130]: 167 167
Oct 02 19:35:54 compute-0 systemd[1]: libpod-b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2.scope: Deactivated successfully.
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.248106841 +0000 UTC m=+0.226979469 container died b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b7f7346824f789258a0c4d0393246858d7d1d4f27aca2b42555ddbdd0cc51a-merged.mount: Deactivated successfully.
Oct 02 19:35:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:54 compute-0 podman[389083]: 2025-10-02 19:35:54.323801314 +0000 UTC m=+0.302673942 container remove b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:35:54 compute-0 systemd[1]: libpod-conmon-b357cfbd94d11a738924d12ed31b2abcfa8e13ddc598288353a6631ccbefafe2.scope: Deactivated successfully.
Oct 02 19:35:54 compute-0 podman[389151]: 2025-10-02 19:35:54.588312731 +0000 UTC m=+0.091428334 container create 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:35:54 compute-0 podman[389151]: 2025-10-02 19:35:54.539205364 +0000 UTC m=+0.042320977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:35:54 compute-0 systemd[1]: Started libpod-conmon-082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd.scope.
Oct 02 19:35:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
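
[annotation] The xfs remount messages above are the kernel noting that these mounts (evidently lacking xfs's bigtime feature) can only represent timestamps up to 0x7fffffff seconds after the epoch. That constant decodes to the familiar y2038 limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted in the kernel messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
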
Oct 02 19:35:54 compute-0 podman[389151]: 2025-10-02 19:35:54.770046985 +0000 UTC m=+0.273162608 container init 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:35:54 compute-0 podman[389151]: 2025-10-02 19:35:54.800543756 +0000 UTC m=+0.303659339 container start 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:54 compute-0 podman[389151]: 2025-10-02 19:35:54.806042062 +0000 UTC m=+0.309157645 container attach 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 19:35:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:55 compute-0 python3.9[389253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:55 compute-0 ceph-mon[191910]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:56 compute-0 nostalgic_lamarr[389167]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:35:56 compute-0 nostalgic_lamarr[389167]: --> relative data size: 1.0
Oct 02 19:35:56 compute-0 nostalgic_lamarr[389167]: --> All data devices are unavailable
Oct 02 19:35:56 compute-0 systemd[1]: libpod-082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd.scope: Deactivated successfully.
Oct 02 19:35:56 compute-0 systemd[1]: libpod-082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd.scope: Consumed 1.213s CPU time.
Oct 02 19:35:56 compute-0 podman[389151]: 2025-10-02 19:35:56.08526638 +0000 UTC m=+1.588381993 container died 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-869994ef15c9e29f35576a89666086e0f7764c0910919b646d866aa4a1b61905-merged.mount: Deactivated successfully.
Oct 02 19:35:56 compute-0 podman[389151]: 2025-10-02 19:35:56.208302573 +0000 UTC m=+1.711418186 container remove 082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:56 compute-0 systemd[1]: libpod-conmon-082125da5dc7a74844604ed1eb9c61e46bfac6ac92835ec72d02efb64fd483cd.scope: Deactivated successfully.
Oct 02 19:35:56 compute-0 sudo[388946]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:56 compute-0 sudo[389358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:56 compute-0 sudo[389358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:56 compute-0 sudo[389358]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:56 compute-0 podman[389357]: 2025-10-02 19:35:56.397342481 +0000 UTC m=+0.119566211 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 19:35:56 compute-0 python3.9[389356]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
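
[annotation] Note mode=420 in this invocation versus mode=0640 in the earlier ones: Ansible logs the mode exactly as it received it, and an unquoted octal in a play (most likely mode: 0644 here) arrives as the decimal integer 420, while a quoted '0640' survives as a string. A quick check:

    # An unquoted `mode: 0644` in a play reaches the module as decimal 420,
    # while a quoted '0640' survives as the string seen in the earlier lines.
    print(oct(420))        # 0o644
    print(int("0640", 8))  # 416, i.e. 0o640
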
Oct 02 19:35:56 compute-0 sudo[389399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:35:56 compute-0 sudo[389399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:56 compute-0 sudo[389399]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:56 compute-0 sudo[389438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:56 compute-0 sudo[389438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:56 compute-0 sudo[389438]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:56 compute-0 sudo[389488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:35:56 compute-0 sudo[389488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:57 compute-0 podman[389592]: 2025-10-02 19:35:57.360969983 +0000 UTC m=+0.086164853 container create 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:35:57 compute-0 podman[389592]: 2025-10-02 19:35:57.328657843 +0000 UTC m=+0.053852763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:35:57 compute-0 systemd[1]: Started libpod-conmon-9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd.scope.
Oct 02 19:35:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:35:57 compute-0 podman[389592]: 2025-10-02 19:35:57.51010512 +0000 UTC m=+0.235299990 container init 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:35:57 compute-0 podman[389592]: 2025-10-02 19:35:57.52926971 +0000 UTC m=+0.254464570 container start 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:57 compute-0 podman[389592]: 2025-10-02 19:35:57.536832301 +0000 UTC m=+0.262027161 container attach 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:35:57 compute-0 nervous_bhabha[389607]: 167 167
Oct 02 19:35:57 compute-0 systemd[1]: libpod-9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd.scope: Deactivated successfully.
Oct 02 19:35:57 compute-0 conmon[389607]: conmon 9d9c761f7339b634add2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd.scope/container/memory.events
Oct 02 19:35:57 compute-0 podman[389612]: 2025-10-02 19:35:57.602857817 +0000 UTC m=+0.042890382 container died 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd47e39797ed0ab3e082f6250a6cfe921cf0786b2b50848a7792359f1b23310-merged.mount: Deactivated successfully.
Oct 02 19:35:57 compute-0 podman[389612]: 2025-10-02 19:35:57.671138004 +0000 UTC m=+0.111170579 container remove 9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:35:57 compute-0 systemd[1]: libpod-conmon-9d9c761f7339b634add2eb7f709ac53d53207d3a84732e004c624f1283f151dd.scope: Deactivated successfully.
Oct 02 19:35:57 compute-0 podman[389679]: 2025-10-02 19:35:57.940838018 +0000 UTC m=+0.072719845 container create e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:35:57 compute-0 ceph-mon[191910]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:57 compute-0 podman[389679]: 2025-10-02 19:35:57.903828723 +0000 UTC m=+0.035710520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:35:58 compute-0 systemd[1]: Started libpod-conmon-e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1.scope.
Oct 02 19:35:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206431d865398f313093c4c80fe7e64a481663dc529451c1298ee06d99db6791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206431d865398f313093c4c80fe7e64a481663dc529451c1298ee06d99db6791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206431d865398f313093c4c80fe7e64a481663dc529451c1298ee06d99db6791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206431d865398f313093c4c80fe7e64a481663dc529451c1298ee06d99db6791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:35:58 compute-0 podman[389679]: 2025-10-02 19:35:58.109770292 +0000 UTC m=+0.241652119 container init e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:35:58 compute-0 podman[389679]: 2025-10-02 19:35:58.134085199 +0000 UTC m=+0.265967016 container start e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:35:58 compute-0 podman[389679]: 2025-10-02 19:35:58.141987609 +0000 UTC m=+0.273869436 container attach e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:35:58 compute-0 python3.9[389720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:35:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:58 compute-0 python3.9[389803]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json _original_basename=ceilometer_agent_ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]: {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     "0": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "devices": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "/dev/loop3"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             ],
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_name": "ceph_lv0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_size": "21470642176",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "name": "ceph_lv0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "tags": {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_name": "ceph",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.crush_device_class": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.encrypted": "0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_id": "0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.vdo": "0"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             },
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "vg_name": "ceph_vg0"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         }
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     ],
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     "1": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "devices": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "/dev/loop4"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             ],
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_name": "ceph_lv1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_size": "21470642176",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "name": "ceph_lv1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "tags": {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_name": "ceph",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.crush_device_class": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.encrypted": "0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_id": "1",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.vdo": "0"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             },
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "vg_name": "ceph_vg1"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         }
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     ],
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     "2": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "devices": [
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "/dev/loop5"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             ],
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_name": "ceph_lv2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_size": "21470642176",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "name": "ceph_lv2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "tags": {
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.cluster_name": "ceph",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.crush_device_class": "",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.encrypted": "0",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osd_id": "2",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:                 "ceph.vdo": "0"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             },
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "type": "block",
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:             "vg_name": "ceph_vg2"
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:         }
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]:     ]
Oct 02 19:35:59 compute-0 admiring_grothendieck[389723]: }
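
[annotation] The `lvm list --format json` output above maps each OSD id to its backing LV, device, and tags; it also explains the earlier batch report that "All data devices are unavailable": every LV already hosts an OSD, so the batch run had nothing left to create. A small sketch condensing the same JSON (abbreviated here to the fields used) into one line per OSD:

    import json

    # Abbreviated from the `lvm list --format json` output above.
    raw = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}}],
     "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
            "tags": {"ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}}],
     "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"],
            "tags": {"ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}}]}
    """
    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")
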
Oct 02 19:35:59 compute-0 ceph-mon[191910]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:35:59 compute-0 systemd[1]: libpod-e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1.scope: Deactivated successfully.
Oct 02 19:35:59 compute-0 podman[389679]: 2025-10-02 19:35:59.072708436 +0000 UTC m=+1.204590263 container died e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-206431d865398f313093c4c80fe7e64a481663dc529451c1298ee06d99db6791-merged.mount: Deactivated successfully.
Oct 02 19:35:59 compute-0 podman[389679]: 2025-10-02 19:35:59.159644529 +0000 UTC m=+1.291526316 container remove e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:35:59 compute-0 systemd[1]: libpod-conmon-e119968c8bcf34567a2765c27cbdf563a6bad79ab81229ad0aae820908f948b1.scope: Deactivated successfully.
Oct 02 19:35:59 compute-0 sudo[389488]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:59 compute-0 sudo[389885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:59 compute-0 sudo[389885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:59 compute-0 sudo[389885]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:59 compute-0 sudo[389932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:35:59 compute-0 sudo[389932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:59 compute-0 sudo[389932]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:59 compute-0 sudo[389974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:35:59 compute-0 sudo[389974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:59 compute-0 sudo[389974]: pam_unix(sudo:session): session closed for user root
Oct 02 19:35:59 compute-0 sudo[390025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:35:59 compute-0 sudo[390025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:35:59 compute-0 podman[157186]: time="2025-10-02T19:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:35:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:35:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8526 "" "Go-http-client/1.1"
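[editor's note] The two GET requests above are served by the podman REST API over its unix socket. A minimal sketch of the same container-list query in Python, assuming the socket path /run/podman/podman.sock that the podman_exporter config later in this log mounts:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket, enough for the libpod REST API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the logged request: list all containers, external excluded.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))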
Oct 02 19:35:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:35:59 compute-0 python3.9[390064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
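[editor's note] The ansible-ansible.legacy.stat call above gathers a sha1 checksum plus mode/ownership metadata for the rendered config file. A rough Python equivalent of what it computes, with the path taken from the logged invocation:

    import hashlib
    import os
    import stat

    path = ("/var/lib/openstack/config/telemetry-power-monitoring/"
            "ceilometer_prom_exporter.yaml")
    st = os.lstat(path)  # follow=False in the logged call means lstat, not stat
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    print(oct(stat.S_IMODE(st.st_mode)), st.st_uid, st.st_gid, h.hexdigest())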
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.294175688 +0000 UTC m=+0.083581044 container create 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:36:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.262454114 +0000 UTC m=+0.051859540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:36:00 compute-0 systemd[1]: Started libpod-conmon-86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070.scope.
Oct 02 19:36:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.451187544 +0000 UTC m=+0.240592930 container init 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.467179709 +0000 UTC m=+0.256585085 container start 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.472813219 +0000 UTC m=+0.262218595 container attach 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:36:00 compute-0 elated_turing[390200]: 167 167
Oct 02 19:36:00 compute-0 systemd[1]: libpod-86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070.scope: Deactivated successfully.
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.477928315 +0000 UTC m=+0.267333681 container died 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-86941e96fddc23a5935b1cfc63e67497fb522b83637fc415d16bf9c9ae81617a-merged.mount: Deactivated successfully.
Oct 02 19:36:00 compute-0 podman[390154]: 2025-10-02 19:36:00.53715912 +0000 UTC m=+0.326564476 container remove 86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:36:00 compute-0 systemd[1]: libpod-conmon-86caabb592234378c04dc23ba86c1e665b85ce6093e1965f83c19f55122eb070.scope: Deactivated successfully.
Oct 02 19:36:00 compute-0 python3.9[390199]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:00 compute-0 podman[390231]: 2025-10-02 19:36:00.773939059 +0000 UTC m=+0.074326028 container create e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:36:00 compute-0 podman[390231]: 2025-10-02 19:36:00.744141406 +0000 UTC m=+0.044528455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:36:00 compute-0 systemd[1]: Started libpod-conmon-e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5.scope.
Oct 02 19:36:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19c0c94eb0c0467299d59041b93c3e6048a944ba34966fea8106f0f215bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19c0c94eb0c0467299d59041b93c3e6048a944ba34966fea8106f0f215bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19c0c94eb0c0467299d59041b93c3e6048a944ba34966fea8106f0f215bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a19c0c94eb0c0467299d59041b93c3e6048a944ba34966fea8106f0f215bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
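[editor's note] The four xfs warnings above refer to the classic 32-bit time_t ceiling: 0x7fffffff seconds after the Unix epoch lands in January 2038, which is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit timestamp, hence the 2038 cutoff.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00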
Oct 02 19:36:00 compute-0 podman[390231]: 2025-10-02 19:36:00.97172222 +0000 UTC m=+0.272109259 container init e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:36:00 compute-0 podman[390231]: 2025-10-02 19:36:00.990439358 +0000 UTC m=+0.290826357 container start e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:36:00 compute-0 podman[390231]: 2025-10-02 19:36:00.997313841 +0000 UTC m=+0.297700840 container attach e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:36:01 compute-0 ceph-mon[191910]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:01 compute-0 openstack_network_exporter[372736]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:01 compute-0 openstack_network_exporter[372736]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:01 compute-0 openstack_network_exporter[372736]: ERROR   19:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:36:01 compute-0 openstack_network_exporter[372736]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:36:01 compute-0 openstack_network_exporter[372736]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:36:01 compute-0 python3.9[390392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:02 compute-0 podman[390459]: 2025-10-02 19:36:02.117442256 +0000 UTC m=+0.152335183 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:36:02 compute-0 compassionate_keller[390285]: {
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_id": 1,
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "type": "bluestore"
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     },
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_id": 2,
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "type": "bluestore"
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     },
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_id": 0,
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:         "type": "bluestore"
Oct 02 19:36:02 compute-0 compassionate_keller[390285]:     }
Oct 02 19:36:02 compute-0 compassionate_keller[390285]: }
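[editor's note] The JSON block above is the output of the `ceph-volume ... raw list --format json` call started via sudo at 19:35:59. A small sketch that reduces it to an osd_id -> device table; it assumes a cephadm binary on PATH, whereas the log actually ran a hashed copy under /var/lib/ceph/<fsid>/ with --image and --timeout flags:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_uuid, meta in sorted(json.loads(out).items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['device']}  ({meta['type']})")
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore)
    # osd.1  /dev/mapper/ceph_vg1-ceph_lv1  (bluestore)
    # osd.2  /dev/mapper/ceph_vg2-ceph_lv2  (bluestore)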
Oct 02 19:36:02 compute-0 systemd[1]: libpod-e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5.scope: Deactivated successfully.
Oct 02 19:36:02 compute-0 systemd[1]: libpod-e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5.scope: Consumed 1.168s CPU time.
Oct 02 19:36:02 compute-0 podman[390231]: 2025-10-02 19:36:02.171203047 +0000 UTC m=+1.471590056 container died e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 19:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a19c0c94eb0c0467299d59041b93c3e6048a944ba34966fea8106f0f215bed-merged.mount: Deactivated successfully.
Oct 02 19:36:02 compute-0 podman[390231]: 2025-10-02 19:36:02.268798313 +0000 UTC m=+1.569185282 container remove e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:36:02 compute-0 systemd[1]: libpod-conmon-e0d1fd1b17cc055a8e5930a34d7bded1ce50d899e81b413a0b474d88692c92f5.scope: Deactivated successfully.
Oct 02 19:36:02 compute-0 python3.9[390505]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:02 compute-0 sudo[390025]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:36:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:36:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:36:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:36:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 26b027b5-1a36-4e44-8557-57e2ba3b56fa does not exist
Oct 02 19:36:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f14c2128-b013-4100-8457-2b221a01acd7 does not exist
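[editor's note] The two mon_command entries above show the cephadm mgr module caching this host's device inventory under config-key. The cached blob can be read back with the standard `ceph config-key get` CLI; the key name is copied from the handle_command line:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:200])  # JSON device inventory, truncated for display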
Oct 02 19:36:02 compute-0 sudo[390530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:36:02 compute-0 sudo[390530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:36:02 compute-0 sudo[390530]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:02 compute-0 sudo[390578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:36:02 compute-0 sudo[390578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:36:02 compute-0 sudo[390578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:03 compute-0 python3.9[390728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:03 compute-0 ceph-mon[191910]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:36:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:36:03
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'default.rgw.log']
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
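[editor's note] "prepared 0/10 changes" means the upmap optimizer found nothing to move: every PG is already active+clean and the pools are nearly empty, so the plan is a no-op. If one wanted to confirm the optimizer state from the CLI, a minimal check would be:

    import subprocess

    # `ceph balancer status` reports the mode (upmap here), the active flag,
    # and details of the last optimization attempt.
    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)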
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:03 compute-0 python3.9[390804]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json _original_basename=kepler.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:36:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
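[editor's note] The rbd_support handlers above reload (apparently empty) mirror-snapshot and trash-purge schedules for the vms, volumes, backups and images pools. A hedged sketch of the corresponding CLI listings, if one wanted to confirm that no schedules are configured:

    import subprocess

    # Both are standard rbd subcommands; -R recurses over all pools and images.
    for args in (["rbd", "mirror", "snapshot", "schedule", "ls", "-R"],
                 ["rbd", "trash", "purge", "schedule", "ls", "-R"]):
        out = subprocess.run(args, capture_output=True, text=True).stdout
        print(" ".join(args), "->", out.strip() or "(no schedules)")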
Oct 02 19:36:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:04 compute-0 podman[390928]: 2025-10-02 19:36:04.878736337 +0000 UTC m=+0.138566897 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:36:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:05 compute-0 python3.9[390969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:05 compute-0 ceph-mon[191910]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:05 compute-0 python3.9[391048]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:06 compute-0 sudo[391198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywdllwnlcwrunlaxcpmcjqszwclweby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433766.0885968-298-101205902095804/AnsiballZ_file.py'
Oct 02 19:36:06 compute-0 sudo[391198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:06 compute-0 python3.9[391200]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:06 compute-0 sudo[391198]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:07 compute-0 ceph-mon[191910]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:07 compute-0 sudo[391350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjavivzvexhknvzxutonqcwdiighsith ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433767.1570659-306-217594779262553/AnsiballZ_file.py'
Oct 02 19:36:07 compute-0 sudo[391350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:07 compute-0 python3.9[391352]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:08 compute-0 sudo[391350]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:09 compute-0 sudo[391502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrlajdjpxeajnswklaofvzvrenomqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433768.738824-314-250838419422318/AnsiballZ_file.py'
Oct 02 19:36:09 compute-0 sudo[391502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:09 compute-0 ceph-mon[191910]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:09 compute-0 python3.9[391504]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:36:09 compute-0 sudo[391502]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:11 compute-0 sudo[391654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkwxekpfpygkadtzrhfkvxuifbkhigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433769.8584406-322-23888150429525/AnsiballZ_stat.py'
Oct 02 19:36:11 compute-0 sudo[391654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:11 compute-0 ceph-mon[191910]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:11 compute-0 python3.9[391656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:11 compute-0 sudo[391654]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:12 compute-0 sudo[391732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onaicpbigwgglbmohruthmpmswtbtfux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433769.8584406-322-23888150429525/AnsiballZ_file.py'
Oct 02 19:36:12 compute-0 sudo[391732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:12 compute-0 podman[391742]: 2025-10-02 19:36:12.18447571 +0000 UTC m=+0.098869661 container health_status df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, distribution-scope=public, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:36:12 compute-0 podman[391734]: 2025-10-02 19:36:12.189061912 +0000 UTC m=+0.131470748 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Oct 02 19:36:12 compute-0 podman[391736]: 2025-10-02 19:36:12.205657874 +0000 UTC m=+0.129435604 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 19:36:12 compute-0 podman[391737]: 2025-10-02 19:36:12.216936184 +0000 UTC m=+0.135938747 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:36:12 compute-0 podman[391735]: 2025-10-02 19:36:12.219016009 +0000 UTC m=+0.145503621 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
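[editor's note] This burst of health_status=healthy lines comes from podman executing each container's configured healthcheck: the 'test' entry in config_data, an /openstack/healthcheck script bind-mounted into the container. The same checks can be triggered by hand with `podman healthcheck run`, which exits 0 when healthy; the names below are the ones logged above:

    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute", "iscsid",
                 "ovn_controller", "ovn_metadata_agent"):
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")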
Oct 02 19:36:12 compute-0 python3.9[391749]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:12 compute-0 sudo[391732]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:36:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
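[editor's note] The pg target values above follow directly from usage_fraction x bias x (target PGs per OSD x OSD count). Assuming the default mon_target_pg_per_osd=100 and the three OSDs listed earlier in this log, the logged numbers reproduce exactly:

    # Hedged reconstruction: 100 target PGs per OSD (the default) times 3 OSDs
    # gives 300 target PGs in total for this cluster.
    TARGET_PGS = 100 * 3

    def raw_pg_target(usage_fraction, bias):
        return usage_fraction * bias * TARGET_PGS

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # .mgr               -> 0.0021557249951162337
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635
    # The raw target is then quantized to a power of two and clamped by the
    # pool's pg_num_min/max, yielding the logged "quantized to 1 / 16 / 32".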
Oct 02 19:36:12 compute-0 sudo[391906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzpxgicogmuuugyzlwhdgcocbexxpxvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433769.8584406-322-23888150429525/AnsiballZ_stat.py'
Oct 02 19:36:12 compute-0 sudo[391906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:13 compute-0 python3.9[391908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:13 compute-0 sudo[391906]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:13 compute-0 ceph-mon[191910]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:13 compute-0 sudo[391984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckzjohompyjpfhrtflzqstlhbqdnipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433769.8584406-322-23888150429525/AnsiballZ_file.py'
Oct 02 19:36:13 compute-0 sudo[391984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:13 compute-0 python3.9[391986]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:36:13 compute-0 sudo[391984]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:14 compute-0 sudo[392136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgldmaudbxeacmazjwxiqhjfhyjspmcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433774.0599484-322-144516327569367/AnsiballZ_stat.py'
Oct 02 19:36:14 compute-0 sudo[392136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:14 compute-0 python3.9[392138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:36:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:14 compute-0 sudo[392136]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:15 compute-0 sudo[392214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfiidpifjeabefxcwuwvayvzdwdepnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433774.0599484-322-144516327569367/AnsiballZ_file.py'
Oct 02 19:36:15 compute-0 sudo[392214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:15 compute-0 ceph-mon[191910]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:15 compute-0 python3.9[392216]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/kepler/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/kepler/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:36:15 compute-0 sudo[392214]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:16 compute-0 podman[392336]: 2025-10-02 19:36:16.694102097 +0000 UTC m=+0.125296974 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal)
Oct 02 19:36:16 compute-0 sudo[392384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riunklmjncdcfulyudikiptslrqfoall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433775.993224-355-5942378219850/AnsiballZ_container_config_data.py'
Oct 02 19:36:16 compute-0 sudo[392384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:17 compute-0 python3.9[392386]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Oct 02 19:36:17 compute-0 sudo[392384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:17 compute-0 ceph-mon[191910]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:18 compute-0 sudo[392536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aspgatwvedskacsosmmykmzpcannwnhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433777.4104383-364-125198560734648/AnsiballZ_container_config_hash.py'
Oct 02 19:36:18 compute-0 sudo[392536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:18 compute-0 podman[392539]: 2025-10-02 19:36:18.292890495 +0000 UTC m=+0.117933558 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:36:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:18 compute-0 python3.9[392538]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:36:18 compute-0 sudo[392536]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:19 compute-0 ceph-mon[191910]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:19 compute-0 sudo[392714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihtcnxatnkunydfiqprfzwqedxxlpdz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433778.7732594-374-74639490100455/AnsiballZ_edpm_container_manage.py'
Oct 02 19:36:19 compute-0 sudo[392714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:19 compute-0 python3[392716]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:36:20 compute-0 python3[392716]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69",
                                                     "Digest": "sha256:31c0d98fec7ff16416903874af0addeff03a7e72ede256990f2a71589e8be5ce",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:31c0d98fec7ff16416903874af0addeff03a7e72ede256990f2a71589e8be5ce"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-10-02T06:24:36.894186563Z",
                                                     "Config": {
                                                          "User": "ceilometer",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251001",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 506042235,
                                                     "VirtualSize": 506042235,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/34365f170072023ec5c8c572d7511714609a26f43c067a32144a7059987f02c5/diff:/var/lib/containers/storage/overlay/42e6de9e7202d00f42d3bd209135e03f782967c2586fadb6628837faf9793f24/diff:/var/lib/containers/storage/overlay/661e15e0dfc445ecdff08d434d5cb11b0b9a54f42dd69506bb77f4c8cd8adb25/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/2cf9b56c3b0130b731829b54bdaf9d18e56e469fb556a8f57c1e6996fceabdd0/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/2cf9b56c3b0130b731829b54bdaf9d18e56e469fb556a8f57c1e6996fceabdd0/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",
                                                               "sha256:c7c80f27a004d53fb75b6d30a961f2416ea855138d9e550000fa093a1e5e384d",
                                                               "sha256:b750c0fcea5f2ef8ddf8bac392b882b999626fea5ad4fc74394b8a33125ae898",
                                                               "sha256:bff6b53cc8f5f5da3c1e46587d75b635f64cdcfabc11cc88956a45d827a92462",
                                                               "sha256:102959d6671d6451dd9b4b86320438fa167f5fbd2002b179c4620bad7a13f452"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251001",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "ceilometer",
                                                     "History": [
                                                          {
                                                               "created": "2025-10-01T03:48:01.636308726Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-01T03:48:01.636415187Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251001\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-01T03:48:09.404099909Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757191184Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757211565Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757229405Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757245856Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757279147Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:09.757304688Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:10.233672718Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:47.227633956Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:50.639117027Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:51.032972349Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:51.419814064Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.143664292Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.537669617Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:52.939739979Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:53.354487155Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:53.748982134Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.090941713Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.48363415Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:54.858704521Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.151167986Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.41361541Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:55.720650713Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:56.087416219Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:56.402825868Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:10:59.881750329Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:00.217806143Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:00.573407121Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:02.069855698Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929362102Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929402883Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929411243Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:03.929417844Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:11:04.966176997Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:13:11.861828926Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:a0eac564d779a7eaac46c9816bff261a",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:13:51.789400717Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:13:55.149279056Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:16:23.232552004Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:a0eac564d779a7eaac46c9816bff261a",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:16:23.929559009Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:17:50.161277656Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:17:59.557357581Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:23:51.875963515Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:a0eac564d779a7eaac46c9816bff261a",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:24:34.389795609Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-ipmi && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:24:34.38983283Z",
                                                               "created_by": "/bin/sh -c #(nop) USER ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-10-02T06:24:39.222033801Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"a0eac564d779a7eaac46c9816bff261a\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct 02 19:36:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:20 compute-0 sudo[392714]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:21 compute-0 ceph-mon[191910]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:21 compute-0 sudo[392921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avswjdnotcchuxajwzmvymygzzahmndy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433781.0559983-382-101609023706327/AnsiballZ_stat.py'
Oct 02 19:36:21 compute-0 sudo[392921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:21 compute-0 python3.9[392923]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:36:21 compute-0 sudo[392921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:23 compute-0 sudo[393075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwdxidokabmbqrkedcsjezfujrtyapth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433782.236271-391-85611081359170/AnsiballZ_file.py'
Oct 02 19:36:23 compute-0 sudo[393075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:23 compute-0 ceph-mon[191910]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:23 compute-0 python3.9[393077]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:23 compute-0 sudo[393075]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:24 compute-0 sudo[393226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfavqbtowdchixvanziccywcqlvgsce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433783.700799-391-75669800627219/AnsiballZ_copy.py'
Oct 02 19:36:24 compute-0 sudo[393226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:24 compute-0 python3.9[393228]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433783.700799-391-75669800627219/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:25 compute-0 sudo[393226]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:25 compute-0 ceph-mon[191910]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:25 compute-0 sudo[393302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zruhjmtuduedbwavfdvywoasisccskkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433783.700799-391-75669800627219/AnsiballZ_systemd.py'
Oct 02 19:36:25 compute-0 sudo[393302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:26 compute-0 python3.9[393304]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:36:26 compute-0 sudo[393302]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:26 compute-0 podman[393332]: 2025-10-02 19:36:26.755337067 +0000 UTC m=+0.171831682 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 19:36:27 compute-0 sudo[393474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttmruzbvqtpechyzqiagmiirtugpgczo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433786.6390808-413-10334865206695/AnsiballZ_container_config_data.py'
Oct 02 19:36:27 compute-0 sudo[393474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:27 compute-0 python3.9[393476]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Oct 02 19:36:27 compute-0 sudo[393474]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:27 compute-0 ceph-mon[191910]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:28 compute-0 sudo[393626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qndopzzyeqxofmmubhmxfcaeobkaixnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433787.7623928-422-272173992935423/AnsiballZ_container_config_hash.py'
Oct 02 19:36:28 compute-0 sudo[393626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:28 compute-0 python3.9[393628]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:36:28 compute-0 sudo[393626]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:29 compute-0 ceph-mon[191910]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:29 compute-0 sudo[393778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dthiklcabzuohhgadzykpwglblvwmkrd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433789.1002345-432-148315332410046/AnsiballZ_edpm_container_manage.py'
Oct 02 19:36:29 compute-0 sudo[393778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:29 compute-0 podman[157186]: time="2025-10-02T19:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:36:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:36:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8521 "" "Go-http-client/1.1"
Oct 02 19:36:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:29 compute-0 python3[393780]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:36:30 compute-0 python3[393780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7",
                                                     "Digest": "sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086",
                                                     "RepoTags": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd",
                                                          "quay.io/sustainable_computing_io/kepler@sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-10-15T06:30:56.315982344Z",
                                                     "Config": {
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci",
                                                               "NVIDIA_VISIBLE_DEVICES=all",
                                                               "NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "NVIDIA_MIG_CONFIG_DEVICES=all"
                                                          ],
                                                          "Entrypoint": [
                                                               "/usr/bin/kepler"
                                                          ],
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2024-09-18T21:23:30",
                                                               "com.redhat.component": "ubi9-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.29.0",
                                                               "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "base rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9",
                                                               "release": "1214.1726694543",
                                                               "release-0.7.12": "",
                                                               "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                               "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                               "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.4"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 331545571,
                                                     "VirtualSize": 331545571,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/de1557109facda5eb038045e25371b06ad2baf5cf32c60a7fe84a603bee1e079/diff:/var/lib/containers/storage/overlay/725f7e4e3b8edde36f0bdcd313bbaf872dbe55b162264f8008ee3c09a0b89b66/diff:/var/lib/containers/storage/overlay/573769ea2305456dffa2f0674424aa020c1494387d36bcccb339788fd220d39b/diff:/var/lib/containers/storage/overlay/56a7d751d1997fb4e9fb31bd07356a0c9a7699a9bb524feeb3c7fe2b433b8223/diff:/var/lib/containers/storage/overlay/0560e6233aa93f1e1ac7bed53255811f32dc680869ef7f31dd630efc1203b853/diff:/var/lib/containers/storage/overlay/8d984035cdde48f32944ddaa464ac42d376faabc98415168800b2b8c9aec0930/diff:/var/lib/containers/storage/overlay/e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75",
                                                               "sha256:f947b23b2d0723eac9b608b79e6d48e59d90f74958e05f2762295489e0088e86",
                                                               "sha256:3bf6ab40cc16a103a087232c2c6a1a093dcb6141e70397de57907f5d00741429",
                                                               "sha256:2f5269f1ade14b3b0806305a0b2d3efffe65a187b302789a50ac00bcb815b960",
                                                               "sha256:413f5abb84bd1c03bdfd9c1e0dec8f4be92159c9c6116c4e44247efcdcc6b518",
                                                               "sha256:60c06a2423851502fc43aec0680b91181b0d62b52812c019d3fc66f1546c4529",
                                                               "sha256:323ce4bcad35618db6032dd5bfbd6c8ebb0cde882f730b19296d0ceaf5e39427",
                                                               "sha256:270b3386a8e4a2127a32b007abfea7cb394ae1dee577ee7fefdbb79cd2bea856"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2024-09-18T21:23:30",
                                                          "com.redhat.component": "ubi9-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.29.0",
                                                          "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "base rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9",
                                                          "release": "1214.1726694543",
                                                          "release-0.7.12": "",
                                                          "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                          "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                          "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.4"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.oci.image.manifest.v1+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2024-09-18T21:36:31.099323493Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:0067eb9f2ee25ab2d666a7639a85fe707b582902a09242761abf30c53664069b in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.031010231Z",
                                                               "created_by": "/bin/sh -c mv -f /etc/yum.repos.d/ubi.repo /tmp || :",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.418413433Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5b1f650e1376d79fa3a65df4a154ea5166def95154b52c1c1097dfd8fc7d58eb in /tmp/tls-ca-bundle.pem ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.91238548Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD multi:7a67822d03b1a3ddb205cc3fcf7acd9d3180aef5988a5d25887bc0753a7a493b in /etc/yum.repos.d/ ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912448474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912573716Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-container\"       name=\"ubi9\"       version=\"9.4\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912652474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912740628Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912866673Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912921304Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912962586Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913001888Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"base rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913021599Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913081151Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913091001Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:33.824802353Z",
                                                               "created_by": "/bin/sh -c rm -rf /var/log/*",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:34.766737128Z",
                                                               "created_by": "/bin/sh -c mkdir -p /var/log/rhsm",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.121320055Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ed34e436a5c2cc729eecd8b15b94c75028aea1cb18b739cafbb293b5e4ad5dae in /root/buildinfo/content_manifests/ubi9-container-9.4-1214.1726694543.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.525712655Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:d56bb1961538221b52d7e292418978f186bf67b9906771f38530fc3996a9d0d4 in /root/buildinfo/Dockerfile-ubi9-9.4-1214.1726694543 ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.526152969Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"release\"=\"1214.1726694543\" \"distribution-scope\"=\"public\" \"vendor\"=\"Red Hat, Inc.\" \"build-date\"=\"2024-09-18T21:23:30\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"e309397d02fc53f7fa99db1371b8700eb49f268f\" \"io.k8s.description\"=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\" \"url\"=\"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:36.481014095Z",
                                                               "created_by": "/bin/sh -c rm -f '/etc/yum.repos.d/odcs-3496925-3b364.repo' '/etc/yum.repos.d/rhel-9.4-compose-34ae9.repo'",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:37.364179091Z",
                                                               "created_by": "/bin/sh -c rm -f /tmp/tls-ca-bundle.pem",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:41.423178117Z",
                                                               "created_by": "/bin/sh -c mv -fZ /tmp/ubi.repo /etc/yum.repos.d/ubi.repo || :"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "SHELL [/bin/bash -c]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_DCGM=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_HABANA=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG TARGETARCH=amd64",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_VISIBLE_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_CONFIG_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c yum -y update-minimal --security --sec-severity=Important --sec-severity=Critical && yum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:38.991358946Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c set -e -x ;\t\tINSTALL_PKGS=\" \t\t\tlibbpf  \t\t\" ;\t\tyum install -y $INSTALL_PKGS ;\t\t\t\tif [[ \"$TARGETARCH\" == \"amd64\" ]]; then \t\t\tyum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm; \t\t\tyum install -y cpuid; \t\t\tif [[ \"$INSTALL_DCGM\" == \"true\" ]]; then \t\t\t\tdnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo; \t\t\t\tyum install -y datacenter-gpu-manager libnvidia-ml; \t\t\tfi; \t\t\tif [[ \"$INSTALL_HABANA\" == \"true\" ]]; then \t\t\t\trpm -Uvh https://vault.habana.ai/artifactory/rhel/9/9.2/habanalabs-firmware-tools-1.15.1-15.el9.x86_64.rpm --nodeps; \t\t\t\techo /usr/lib/habanalabs > /etc/ld.so.conf.d/habanalabs.conf; \t\t\t\tldconfig; \t\t\tfi; \t\tfi;\t\tyum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.146511902Z",
                                                               "created_by": "COPY /workspace/_output/bin/kepler /usr/bin/kepler # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.168608119Z",
                                                               "created_by": "COPY /libbpf-source/linux-5.14.0-424.el9/tools/bpf/bpftool/bpftool /usr/bin/bpftool # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.24706386Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c mkdir -p /var/lib/kepler/data # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.299132132Z",
                                                               "created_by": "COPY /workspace/data/cpus.yaml /var/lib/kepler/data/cpus.yaml # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "COPY /workspace/data/model_weight /var/lib/kepler/data/model_weight # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "ENTRYPOINT [\"/usr/bin/kepler\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ]
                                                }
                                           ]
                                           : quay.io/sustainable_computing_io/kepler:release-0.7.12
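
The JSON block ending above appears to be the output of podman image inspect for the kepler image (the Labels, History and NamesHistory fields match podman's image-inspect format), captured by the edpm_ansible container-manage role before it recreates the container. A minimal sketch to produce a comparable dump on the node, assuming the image is already in local storage:

    # Pretty-printed image metadata, including Labels, History and NamesHistory
    podman image inspect quay.io/sustainable_computing_io/kepler:release-0.7.12

In the History array, the entries up to the "mv -fZ /tmp/ubi.repo" step come from the ubi9 9.4 base image build of 2024-09-18, and the buildkit.dockerfile.v0 entries dated 2024-10-15 from the Kepler image build layered on top of it.
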
Oct 02 19:36:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:30 compute-0 kepler[177012]: I1002 19:36:30.388311       1 exporter.go:218] Received shutdown signal
Oct 02 19:36:30 compute-0 kepler[177012]: I1002 19:36:30.388762       1 exporter.go:226] Exiting...
Oct 02 19:36:30 compute-0 systemd[1]: libpod-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Deactivated successfully.
Oct 02 19:36:30 compute-0 systemd[1]: libpod-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.scope: Consumed 38.006s CPU time.
Oct 02 19:36:30 compute-0 podman[393825]: 2025-10-02 19:36:30.580941408 +0000 UTC m=+0.268783131 container died df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 19:36:30 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.timer: Deactivated successfully.
Oct 02 19:36:30 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c.
Oct 02 19:36:30 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: Failed to open /run/systemd/transient/df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: No such file or directory
Oct 02 19:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-userdata-shm.mount: Deactivated successfully.
Oct 02 19:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec69d7149010c9ce04b706cc88e5698403e870d64bbbb9ba31e78bedc9b025c-merged.mount: Deactivated successfully.
Oct 02 19:36:30 compute-0 podman[393825]: 2025-10-02 19:36:30.643843542 +0000 UTC m=+0.331685245 container cleanup df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git)
Oct 02 19:36:30 compute-0 python3[393780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop kepler
Oct 02 19:36:30 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.timer: Failed to open /run/systemd/transient/df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.timer: No such file or directory
Oct 02 19:36:30 compute-0 systemd[1]: df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: Failed to open /run/systemd/transient/df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c-10ee41276df8e535.service: No such file or directory
Oct 02 19:36:30 compute-0 podman[393851]: 2025-10-02 19:36:30.746915493 +0000 UTC m=+0.078092368 container remove df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, architecture=x86_64)
Oct 02 19:36:30 compute-0 podman[393852]: Error: no container with ID df00015e780fdabaae5c1e3b6cba0a14b41c8f07261f9fb0887e4f92a6c4c02c found in database: no such container
Oct 02 19:36:30 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Oct 02 19:36:30 compute-0 python3[393780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force kepler
Oct 02 19:36:30 compute-0 podman[393876]: Error: no container with name or ID "kepler" found: no such container
Oct 02 19:36:30 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Oct 02 19:36:30 compute-0 systemd[1]: edpm_kepler.service: Failed with result 'exit-code'.
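
The two status=125 exits above look like a teardown race rather than a real failure: the container remove event for df00015e... had already completed by the time "podman stop kepler" and "podman rm --force kepler" (logged by both systemd and the ansible debug lines) ran, so both commands found nothing to act on. A hedged sketch of an idempotent variant of the same cleanup, using the --ignore flag podman provides for this case:

    # Succeeds even when the named container no longer exists
    podman stop --ignore kepler
    podman rm --force --ignore kepler
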
Oct 02 19:36:30 compute-0 podman[393877]: 2025-10-02 19:36:30.87458032 +0000 UTC m=+0.091625799 container create 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Oct 02 19:36:30 compute-0 podman[393877]: 2025-10-02 19:36:30.836411554 +0000 UTC m=+0.053457093 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct 02 19:36:30 compute-0 python3[393780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
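
The podman create invocation above, reflowed for readability (same flags and values as logged; only the long config_data label value is abbreviated here, see the log line above for the full dict):

    podman create --name kepler \
        --conmon-pidfile /run/kepler.pid \
        --env ENABLE_GPU=true \
        --env EXPOSE_CONTAINER_METRICS=true \
        --env ENABLE_PROCESS_METRICS=true \
        --env EXPOSE_VM_METRICS=true \
        --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false \
        --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 \
        --healthcheck-command '/openstack/healthcheck kepler' \
        --label config_id=edpm \
        --label container_name=kepler \
        --label managed_by=edpm_ansible \
        --label 'config_data={ ... as logged above ... }' \
        --log-driver journald --log-level info \
        --network host \
        --privileged=True \
        --publish 8888:8888 \
        --volume /lib/modules:/lib/modules:ro \
        --volume /run/libvirt:/run/libvirt:shared,ro \
        --volume /sys:/sys \
        --volume /proc:/proc \
        --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z \
        quay.io/sustainable_computing_io/kepler:release-0.7.12 \
        -v=2

Note that with --network host the --publish 8888:8888 mapping has no effect (podman ignores port mappings under host networking); the exporter binds host port 8888 directly, as the listen message further down confirms.
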
Oct 02 19:36:30 compute-0 systemd[1]: edpm_kepler.service: Scheduled restart job, restart counter is at 1.
Oct 02 19:36:30 compute-0 systemd[1]: Stopped kepler container.
Oct 02 19:36:30 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:36:30 compute-0 systemd[1]: Started libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope.
Oct 02 19:36:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:36:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.
Oct 02 19:36:31 compute-0 podman[393901]: 2025-10-02 19:36:31.066173696 +0000 UTC m=+0.162281688 container init 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, version=9.4, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Oct 02 19:36:31 compute-0 podman[393901]: 2025-10-02 19:36:31.097069378 +0000 UTC m=+0.193177330 container start 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
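
The died, cleanup, remove, create, init and start lines above are podman lifecycle events written to the journal alongside the container's own output. The same sequence can be replayed after the fact from podman's event log; a small sketch, with the time window chosen to match this restart:

    podman events --filter container=kepler \
        --since "2025-10-02 19:36:00" --until "2025-10-02 19:37:00"
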
Oct 02 19:36:31 compute-0 kepler[393922]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:36:31 compute-0 podman[393914]: kepler
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.110725       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.111493       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:36:31 compute-0 python3[393780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start kepler
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.111518       1 config.go:295] kernel version: 5.14
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.112109       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.112125       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.112520       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.112527       1 power.go:79] using none to obtain power
Oct 02 19:36:31 compute-0 kepler[393922]: E1002 19:36:31.112541       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:36:31 compute-0 kepler[393922]: E1002 19:36:31.112562       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:36:31 compute-0 kepler[393922]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.114477       1 exporter.go:84] Number of CPUs: 8
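
The probe sequence above is Kepler's expected fallback on a KVM guest: no Redfish credential file, no ACPI power meter, and no usable hardware power interface, so it settles on "none" as the power source and estimates power from models instead; likewise the GPU accelerator init fails despite ENABLE_GPU=true because the VM exposes no GPU devices. A hedged check for the interfaces such meters usually appear under on bare metal (common sysfs locations, not taken from this log):

    # RAPL zones via the powercap framework; typically absent in VMs
    ls /sys/class/powercap/intel-rapl* 2>/dev/null || echo "no RAPL"
    # ACPI power meter, usually surfaced through hwmon as power1_average
    ls /sys/class/hwmon/hwmon*/power1_average 2>/dev/null || echo "no ACPI power meter"
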
Oct 02 19:36:31 compute-0 systemd[1]: Started kepler container.
Oct 02 19:36:31 compute-0 podman[393937]: 2025-10-02 19:36:31.215985951 +0000 UTC m=+0.096245391 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:36:31 compute-0 systemd[1]: 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-5f7e6d9ef28a6375.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:36:31 compute-0 systemd[1]: 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-5f7e6d9ef28a6375.service: Failed with result 'exit-code'.
Oct 02 19:36:31 compute-0 sudo[393778]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: ERROR   19:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:36:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:36:31 compute-0 ceph-mon[191910]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.704033       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.705073       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:36:31 compute-0 kepler[393922]: E1002 19:36:31.705227       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
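
The watcher errors are expected outside Kubernetes: Kepler first tries the in-cluster client configuration, which depends on the service-account environment a kubelet injects into pods, and disables its APIserver watcher when that is missing. Under podman on this EDPM node the variables are simply unset; a trivial check from inside the container:

    # Injected by the kubelet inside a pod; absent in a plain podman container
    env | grep '^KUBERNETES_SERVICE_' || echo "not running in a Kubernetes pod"
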
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.713202       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.713279       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.727321       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.727424       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.744030       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.744104       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.744138       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.759999       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760070       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760083       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760094       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760108       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760132       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760287       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760342       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760467       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760507       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.760804       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:36:31 compute-0 kepler[393922]: I1002 19:36:31.762200       1 exporter.go:208] Started Kepler in 651.658654ms
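
With host networking, the listener on 0.0.0.0:8888 is reachable directly on the node, so the restart can be verified in place. A minimal sketch, assuming the standard Prometheus /metrics path served by Kepler's exporter:

    curl -s http://127.0.0.1:8888/metrics | head -n 5
    # Re-run the configured health check on demand
    podman healthcheck run kepler && echo healthy
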
Oct 02 19:36:31 compute-0 sudo[394148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxxkblbnpjqzdxtnpotdbyvmbckxtmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433791.5037005-440-197530564261508/AnsiballZ_stat.py'
Oct 02 19:36:31 compute-0 sudo[394148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:36:32.282 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:36:32.282 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:36:32.282 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:32 compute-0 python3.9[394150]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:36:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:32 compute-0 sudo[394148]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.592 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.594 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.594 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:36:32 compute-0 nova_compute[355794]: 2025-10-02 19:36:32.605 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:32 compute-0 podman[394177]: 2025-10-02 19:36:32.700561781 +0000 UTC m=+0.109389560 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:36:33 compute-0 sudo[394326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehfdjbaumijvjgvxatqzcedvmmqbaeag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433792.7091324-449-87314965799845/AnsiballZ_file.py'
Oct 02 19:36:33 compute-0 sudo[394326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:33 compute-0 python3.9[394328]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:33 compute-0 sudo[394326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:33 compute-0 ceph-mon[191910]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:36:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:36:34 compute-0 sudo[394477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeuszpufqxhboqkqscclsxxtmbjeqavt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433793.5284066-449-235651243767138/AnsiballZ_copy.py'
Oct 02 19:36:34 compute-0 sudo[394477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:34 compute-0 python3.9[394479]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759433793.5284066-449-235651243767138/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:34 compute-0 sudo[394477]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:34 compute-0 nova_compute[355794]: 2025-10-02 19:36:34.658 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:34 compute-0 nova_compute[355794]: 2025-10-02 19:36:34.659 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:36:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:35 compute-0 sudo[394568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqgjnfrmcnombqlcspgxsbprupsisgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433793.5284066-449-235651243767138/AnsiballZ_systemd.py'
Oct 02 19:36:35 compute-0 sudo[394568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:35 compute-0 podman[394527]: 2025-10-02 19:36:35.096708731 +0000 UTC m=+0.136454961 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:36:35 compute-0 python3.9[394573]: ansible-systemd Invoked with state=started name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:36:35 compute-0 sudo[394568]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:35 compute-0 nova_compute[355794]: 2025-10-02 19:36:35.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:35 compute-0 ceph-mon[191910]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:36 compute-0 sudo[394727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijfsawzwwqxbkoqyhihnibvckudywpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433795.8626971-469-115257558034275/AnsiballZ_systemd.py'
Oct 02 19:36:36 compute-0 sudo[394727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:36 compute-0 nova_compute[355794]: 2025-10-02 19:36:36.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:36 compute-0 nova_compute[355794]: 2025-10-02 19:36:36.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:36 compute-0 python3.9[394729]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:36:36 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Oct 02 19:36:36 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:36:36.911 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:36:37.013 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:36:37.014 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:36:37.014 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[176762]: 2025-10-02 19:36:37.025 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Oct 02 19:36:37 compute-0 systemd[1]: libpod-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:36:37 compute-0 systemd[1]: libpod-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Consumed 3.440s CPU time.
Oct 02 19:36:37 compute-0 podman[394733]: 2025-10-02 19:36:37.252283898 +0000 UTC m=+0.402534108 container died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:36:37 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.timer: Deactivated successfully.
Oct 02 19:36:37 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.
Oct 02 19:36:37 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: Failed to open /run/systemd/transient/0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-43fb96673b7be5c0.service: No such file or directory
Oct 02 19:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-userdata-shm.mount: Deactivated successfully.
Oct 02 19:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63-merged.mount: Deactivated successfully.
Oct 02 19:36:37 compute-0 podman[394733]: 2025-10-02 19:36:37.383761156 +0000 UTC m=+0.534011356 container cleanup 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:36:37 compute-0 podman[394733]: ceilometer_agent_ipmi
Oct 02 19:36:37 compute-0 podman[394763]: ceilometer_agent_ipmi
Oct 02 19:36:37 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct 02 19:36:37 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct 02 19:36:37 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:36:37 compute-0 ceph-mon[191910]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.599 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.600 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.600 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.600 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.641 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.641 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.641 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.641 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:36:37 compute-0 nova_compute[355794]: 2025-10-02 19:36:37.642 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a66069c3cdfad53cc80c45e2ab83119ea33ca0a45ae7f8b3723a71c50dc63/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:36:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.
Oct 02 19:36:37 compute-0 podman[394775]: 2025-10-02 19:36:37.762463039 +0000 UTC m=+0.216378006 container init 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + sudo -E kolla_set_configs
Oct 02 19:36:37 compute-0 podman[394775]: 2025-10-02 19:36:37.810058686 +0000 UTC m=+0.263973573 container start 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:36:37 compute-0 podman[394775]: ceilometer_agent_ipmi
Oct 02 19:36:37 compute-0 sudo[394798]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:36:37 compute-0 sudo[394798]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:36:37 compute-0 sudo[394798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:36:37 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct 02 19:36:37 compute-0 sudo[394727]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Validating config file
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Copying service configuration files
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: INFO:__main__:Writing out command to execute
Oct 02 19:36:37 compute-0 sudo[394798]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: ++ cat /run_command
Oct 02 19:36:37 compute-0 podman[394800]: 2025-10-02 19:36:37.895223081 +0000 UTC m=+0.073052194 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + ARGS=
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + sudo kolla_copy_cacerts
Oct 02 19:36:37 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-41f91025dc7de8b6.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:36:37 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-41f91025dc7de8b6.service: Failed with result 'exit-code'.
Oct 02 19:36:37 compute-0 sudo[394838]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:36:37 compute-0 sudo[394838]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:36:37 compute-0 sudo[394838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:36:37 compute-0 sudo[394838]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + [[ ! -n '' ]]
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + . kolla_extend_start
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + umask 0022
Oct 02 19:36:37 compute-0 ceilometer_agent_ipmi[394790]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Oct 02 19:36:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:36:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153195461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.226 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2153195461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:36:38 compute-0 sudo[394992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglaaibghcqcdhphgbdqsslwmihfohgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433798.1205652-477-268380896318023/AnsiballZ_systemd.py'
Oct 02 19:36:38 compute-0 sudo[394992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.727 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.728 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4681MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.729 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.729 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.891 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.893 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:36:38 compute-0 nova_compute[355794]: 2025-10-02 19:36:38.994 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:36:39 compute-0 python3.9[394994]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:36:39 compute-0 systemd[1]: Stopping kepler container...
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.072 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.073 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.092 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.123 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.143 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:39 compute-0 kepler[393922]: I1002 19:36:39.159669       1 exporter.go:218] Received shutdown signal
Oct 02 19:36:39 compute-0 kepler[393922]: I1002 19:36:39.160762       1 exporter.go:226] Exiting...
Oct 02 19:36:39 compute-0 systemd[1]: libpod-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope: Deactivated successfully.
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.353 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.354 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 podman[394998]: 2025-10-02 19:36:39.35605135 +0000 UTC m=+0.283820401 container died 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git)
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.355 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.356 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.357 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.358 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.359 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.360 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.362 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.363 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.364 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.365 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.366 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.367 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.368 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.369 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.370 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.371 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.371 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.371 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:36:39 compute-0 systemd[1]: 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-5f7e6d9ef28a6375.timer: Deactivated successfully.
Oct 02 19:36:39 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.392 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.393 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.393 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:36:39 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:39.416 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpr7p3ejt8/privsep.sock']
Oct 02 19:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44bf831e15c3a41af4556fb5f9d16c9b06bdb872ac6c2709f6cbcbd518f2f56-merged.mount: Deactivated successfully.
Oct 02 19:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-userdata-shm.mount: Deactivated successfully.
Oct 02 19:36:39 compute-0 sudo[395047]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpr7p3ejt8/privsep.sock
Oct 02 19:36:39 compute-0 sudo[395047]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:36:39 compute-0 sudo[395047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:36:39 compute-0 podman[394998]: 2025-10-02 19:36:39.45343695 +0000 UTC m=+0.381206031 container cleanup 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, distribution-scope=public, release=1214.1726694543, vcs-type=git, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 02 19:36:39 compute-0 podman[394998]: kepler
Oct 02 19:36:39 compute-0 systemd[1]: libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope: Deactivated successfully.
Oct 02 19:36:39 compute-0 podman[395050]: kepler
Oct 02 19:36:39 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Oct 02 19:36:39 compute-0 systemd[1]: Stopped kepler container.
Oct 02 19:36:39 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:36:39 compute-0 ceph-mon[191910]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:36:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870061898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:36:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.712 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:39 compute-0 nova_compute[355794]: 2025-10-02 19:36:39.720 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:36:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.
Oct 02 19:36:39 compute-0 podman[395062]: 2025-10-02 19:36:39.766047605 +0000 UTC m=+0.189875151 container init 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, config_id=edpm, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:36:39 compute-0 kepler[395077]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.800126       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.800275       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.800339       1 config.go:295] kernel version: 5.14
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.801112       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.801151       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.801689       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.801709       1 power.go:79] using none to obtain power
Oct 02 19:36:39 compute-0 kepler[395077]: E1002 19:36:39.801730       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:36:39 compute-0 kepler[395077]: E1002 19:36:39.801763       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:36:39 compute-0 kepler[395077]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:36:39 compute-0 kepler[395077]: I1002 19:36:39.804260       1 exporter.go:84] Number of CPUs: 8
Oct 02 19:36:39 compute-0 podman[395062]: 2025-10-02 19:36:39.806766428 +0000 UTC m=+0.230593994 container start 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:36:39 compute-0 podman[395062]: kepler
Oct 02 19:36:39 compute-0 systemd[1]: Started kepler container.
Oct 02 19:36:39 compute-0 sudo[394992]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:39 compute-0 podman[395090]: 2025-10-02 19:36:39.905616287 +0000 UTC m=+0.081349965 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release-0.7.12=)
Oct 02 19:36:39 compute-0 systemd[1]: 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-998361b3e8f7a89.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:36:39 compute-0 systemd[1]: 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733-998361b3e8f7a89.service: Failed with result 'exit-code'.
Oct 02 19:36:40 compute-0 sudo[395047]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.154 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.155 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpr7p3ejt8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.004 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.010 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.014 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.015 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.272 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.273 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.274 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.274 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.274 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.274 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.274 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.275 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.283 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.283 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.284 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.285 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.286 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.287 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.288 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.289 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.290 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.291 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.292 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.294 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.295 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.296 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.297 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.301 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.302 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.302 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 02 19:36:40 compute-0 ceilometer_agent_ipmi[394790]: 2025-10-02 19:36:40.304 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 02 19:36:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:40 compute-0 sudo[395266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfctcrfplmptapexrwyabgvbfctxwgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433800.0667653-485-27384742309619/AnsiballZ_find.py'
Oct 02 19:36:40 compute-0 sudo[395266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.400334       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.400478       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:36:40 compute-0 kepler[395077]: E1002 19:36:40.400618       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.409604       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.409674       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.415339       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.415775       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:36:40 compute-0 nova_compute[355794]: 2025-10-02 19:36:40.426 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:36:40 compute-0 nova_compute[355794]: 2025-10-02 19:36:40.429 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:36:40 compute-0 nova_compute[355794]: 2025-10-02 19:36:40.429 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.431611       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.432030       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.432518       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.447075       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.447491       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.447770       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.448007       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.448276       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.448593       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.449198       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.449682       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.449993       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.450640       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.450988       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:36:40 compute-0 kepler[395077]: I1002 19:36:40.451893       1 exporter.go:208] Started Kepler in 652.107056ms
Oct 02 19:36:40 compute-0 python3.9[395271]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:36:40 compute-0 sudo[395266]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3870061898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:36:41 compute-0 ceph-mon[191910]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:41 compute-0 sudo[395428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdebmeoilzvfbgwjzkxmfvgvqdrquwss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433801.1298165-495-180603001227249/AnsiballZ_podman_container_info.py'
Oct 02 19:36:41 compute-0 sudo[395428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:42 compute-0 python3.9[395430]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:36:42 compute-0 sudo[395428]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:42 compute-0 podman[395490]: 2025-10-02 19:36:42.697125052 +0000 UTC m=+0.101032758 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 19:36:42 compute-0 podman[395488]: 2025-10-02 19:36:42.702130965 +0000 UTC m=+0.114765953 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:36:42 compute-0 podman[395491]: 2025-10-02 19:36:42.772144288 +0000 UTC m=+0.171117633 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:36:43 compute-0 ceph-mon[191910]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:44 compute-0 sudo[395649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xraflukqnnqwatgjokotcuqdlextrhzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433802.5704808-503-137039045289159/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:44 compute-0 sudo[395649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:44 compute-0 python3.9[395651]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:44 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:36:44 compute-0 podman[395652]: 2025-10-02 19:36:44.533076729 +0000 UTC m=+0.158089677 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:36:44 compute-0 podman[395652]: 2025-10-02 19:36:44.567014941 +0000 UTC m=+0.192027859 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:36:44 compute-0 sudo[395649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:44 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
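Each podman exec in this run is bracketed by a transient systemd scope for its conmon monitor, as in the Started/Deactivated pair above: the scope appears when the exec session starts and is torn down when it exits. A small sketch for listing whichever of these scopes are live at a given moment, assuming only systemctl on the host (the glob simply matches the libpod-conmon-* naming seen in these lines):

    import subprocess

    # Transient conmon scopes are named libpod-conmon-<container-id>.scope;
    # --no-legend drops the table header so only unit lines remain.
    out = subprocess.run(
        ["systemctl", "list-units", "--type=scope", "--all",
         "--no-legend", "libpod-conmon-*"],
        capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        print(line.split()[0])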
Oct 02 19:36:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:45 compute-0 ceph-mon[191910]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:46 compute-0 sudo[395830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozcnfifgyydwnfqtfknbqsrzqakxclpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433804.9164605-511-44146799343145/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:46 compute-0 sudo[395830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:46 compute-0 python3.9[395832]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:46 compute-0 systemd[1]: Started libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope.
Oct 02 19:36:46 compute-0 podman[395833]: 2025-10-02 19:36:46.549689412 +0000 UTC m=+0.173033224 container exec daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:36:46 compute-0 podman[395833]: 2025-10-02 19:36:46.58723171 +0000 UTC m=+0.210575512 container exec_died daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:36:46 compute-0 sudo[395830]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:46 compute-0 systemd[1]: libpod-conmon-daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97.scope: Deactivated successfully.
Oct 02 19:36:47 compute-0 sudo[396028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgzxhoyuvltnrqdbwpfvpqbowtksmke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433806.9577239-519-258053065468856/AnsiballZ_file.py'
Oct 02 19:36:47 compute-0 podman[395988]: 2025-10-02 19:36:47.539957352 +0000 UTC m=+0.111955769 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=edpm)
Oct 02 19:36:47 compute-0 sudo[396028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:47 compute-0 python3.9[396036]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:47 compute-0 sudo[396028]: pam_unix(sudo:session): session closed for user root
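The sequence just above is the per-container fix-up the Ansible run repeats for every EDPM service in this section: exec id -u and id -g inside the container, then apply ansible.builtin.file with recurse=True and mode 0700 so the host-side healthcheck mount is owned by the container's user (0/0 here for ovn_controller, 42405 further down for ceilometer_agent_compute). A minimal Python sketch of the same flow, assuming podman and root on the host; fix_healthcheck_ownership is a hypothetical name, not something from the playbook:

    import subprocess

    def fix_healthcheck_ownership(container: str) -> None:
        # Ask the container which uid/gid its main user maps to, exactly as the
        # podman_container_exec tasks above do with `id -u` / `id -g`.
        uid = subprocess.run(["podman", "exec", container, "id", "-u"],
                             capture_output=True, text=True, check=True).stdout.strip()
        gid = subprocess.run(["podman", "exec", container, "id", "-g"],
                             capture_output=True, text=True, check=True).stdout.strip()
        path = f"/var/lib/openstack/healthchecks/{container}"
        # Mirror ansible.builtin.file: container-owned, mode 0700, recursive.
        subprocess.run(["chown", "-R", f"{uid}:{gid}", path], check=True)
        subprocess.run(["chmod", "-R", "0700", path], check=True)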
Oct 02 19:36:47 compute-0 ceph-mon[191910]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:48 compute-0 podman[396160]: 2025-10-02 19:36:48.699127536 +0000 UTC m=+0.125483519 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
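The node_exporter config above narrows the systemd collector to a handful of unit patterns via --collector.systemd.unit-include. node_exporter anchors such patterns, so Python's fullmatch approximates the filter; a quick check against an illustrative unit list (the units named here are examples, not read from this host):

    import re

    # Pattern copied from the --collector.systemd.unit-include flag above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_ovn_controller.service", "openvswitch.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, "->", bool(unit_include.fullmatch(unit)))   # sshd -> False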
Oct 02 19:36:48 compute-0 sudo[396201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsvugdgqtzcihvyblwnxlpyffvkddisf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433808.1570427-528-61435456650579/AnsiballZ_podman_container_info.py'
Oct 02 19:36:48 compute-0 sudo[396201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:48 compute-0 python3.9[396211]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:36:49 compute-0 sudo[396201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:49 compute-0 rsyslogd[187702]: imjournal: 3553 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 19:36:49 compute-0 ceph-mon[191910]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:50 compute-0 sudo[396374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqiifbbeutqadpshrzhadvmmwhrvpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433809.4447374-536-176997849509752/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:50 compute-0 sudo[396374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:50 compute-0 python3.9[396376]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:50 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:36:50 compute-0 podman[396377]: 2025-10-02 19:36:50.442225842 +0000 UTC m=+0.164305300 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 02 19:36:50 compute-0 podman[396377]: 2025-10-02 19:36:50.476444913 +0000 UTC m=+0.198524341 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:36:50 compute-0 sudo[396374]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:50 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:36:51 compute-0 sudo[396555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tozsxyswueeeqlrzurtqkrrnyrxuapoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433810.8397002-544-256465076918662/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:51 compute-0 sudo[396555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:51 compute-0 python3.9[396557]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:51 compute-0 systemd[1]: Started libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope.
Oct 02 19:36:51 compute-0 podman[396558]: 2025-10-02 19:36:51.829655699 +0000 UTC m=+0.152885188 container exec b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:36:51 compute-0 ceph-mon[191910]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:51 compute-0 podman[396558]: 2025-10-02 19:36:51.867592658 +0000 UTC m=+0.190822147 container exec_died b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:36:51 compute-0 sudo[396555]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:51 compute-0 systemd[1]: libpod-conmon-b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca.scope: Deactivated successfully.
Oct 02 19:36:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:52 compute-0 sudo[396738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxaulspcyhegzpbjqzgnghhezjsutac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433812.241425-552-254143993903842/AnsiballZ_file.py'
Oct 02 19:36:52 compute-0 sudo[396738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:52 compute-0 python3.9[396740]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:52 compute-0 sudo[396738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:53 compute-0 sudo[396890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effuvlxtrbgdbpjgcphtggzcggtsyrwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433813.3181543-561-137146497972127/AnsiballZ_podman_container_info.py'
Oct 02 19:36:53 compute-0 sudo[396890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:53 compute-0 ceph-mon[191910]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:54 compute-0 python3.9[396892]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:36:54 compute-0 sudo[396890]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:36:55 compute-0 ceph-mon[191910]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:55 compute-0 sudo[397055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyntvomvuosvrrkxtkyspjceaqojtsdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433814.5957425-569-95431991436497/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:55 compute-0 sudo[397055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:55 compute-0 python3.9[397057]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:55 compute-0 systemd[1]: Started libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope.
Oct 02 19:36:55 compute-0 podman[397058]: 2025-10-02 19:36:55.592879191 +0000 UTC m=+0.170971329 container exec fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:36:55 compute-0 podman[397058]: 2025-10-02 19:36:55.627294956 +0000 UTC m=+0.205387034 container exec_died fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:36:55 compute-0 systemd[1]: libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope: Deactivated successfully.
Oct 02 19:36:55 compute-0 sudo[397055]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:56 compute-0 sudo[397237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqygmmjbhthbtabxgsnbddntsrhrkoam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433815.9802513-577-149575697695674/AnsiballZ_podman_container_exec.py'
Oct 02 19:36:56 compute-0 sudo[397237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:56 compute-0 python3.9[397239]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:36:56 compute-0 systemd[1]: Started libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope.
Oct 02 19:36:56 compute-0 podman[397240]: 2025-10-02 19:36:56.960721876 +0000 UTC m=+0.130555244 container exec fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:36:56 compute-0 podman[397240]: 2025-10-02 19:36:56.99697492 +0000 UTC m=+0.166808278 container exec_died fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:36:57 compute-0 sudo[397237]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:57 compute-0 systemd[1]: libpod-conmon-fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2.scope: Deactivated successfully.
Oct 02 19:36:57 compute-0 podman[397255]: 2025-10-02 19:36:57.094224517 +0000 UTC m=+0.127683497 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
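Each health_status=healthy event like the one above is podman's periodic healthcheck run for the container, driven by the healthcheck test/mount pair in config_data. The same status can be read back on demand; a sketch assuming podman 4.x JSON output (the key has appeared as both Health and Healthcheck across podman releases, hence the fallback):

    import json
    import subprocess

    out = subprocess.run(["podman", "inspect", "multipathd"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))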
Oct 02 19:36:57 compute-0 ceph-mon[191910]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:57 compute-0 sudo[397436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjtqhlufgmrizjybnqllryiwiuwdaff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433817.332598-585-26407111531278/AnsiballZ_file.py'
Oct 02 19:36:57 compute-0 sudo[397436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:58 compute-0 python3.9[397438]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:36:58 compute-0 sudo[397436]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:59 compute-0 sudo[397588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvbwjjruuzevnijarhuljlaorhttccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433818.4688497-594-73006807212503/AnsiballZ_podman_container_info.py'
Oct 02 19:36:59 compute-0 sudo[397588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:36:59 compute-0 python3.9[397590]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:36:59 compute-0 ceph-mon[191910]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:36:59 compute-0 sudo[397588]: pam_unix(sudo:session): session closed for user root
Oct 02 19:36:59 compute-0 podman[157186]: time="2025-10-02T19:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:36:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45035 "" "Go-http-client/1.1"
Oct 02 19:36:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8517 "" "Go-http-client/1.1"
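The two GET requests above are prometheus-podman-exporter scraping the libpod REST API over /run/podman/podman.sock, the CONTAINER_HOST its config_data sets. A standard-library sketch of the same call; UnixHTTPConnection is a hypothetical helper, and /v4.9.3 is simply the API version this host happens to serve:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")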
Oct 02 19:36:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:00 compute-0 sudo[397752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdgcuyhnajfhgnkjukslkmrjqwpccuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433819.7567568-602-56571341890297/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:00 compute-0 sudo[397752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:00 compute-0 python3.9[397754]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:00 compute-0 systemd[1]: Started libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope.
Oct 02 19:37:00 compute-0 podman[397755]: 2025-10-02 19:37:00.715892224 +0000 UTC m=+0.161577299 container exec 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:37:00 compute-0 podman[397755]: 2025-10-02 19:37:00.751922282 +0000 UTC m=+0.197607317 container exec_died 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:37:00 compute-0 sudo[397752]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:00 compute-0 systemd[1]: libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:37:01 compute-0 openstack_network_exporter[372736]: ERROR   19:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:37:01 compute-0 openstack_network_exporter[372736]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:01 compute-0 openstack_network_exporter[372736]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:37:01 compute-0 openstack_network_exporter[372736]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:01 compute-0 openstack_network_exporter[372736]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
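These exporter errors are expected on a compute-only node: openstack_network_exporter probes ovsdb-server and ovn-northd through their appctl control sockets, and neither daemon exposes one here (there is no local ovn-northd at all). A hedged check for the sockets it wants; the glob patterns follow the usual OVS/OVN <daemon>.<pid>.ctl naming and are assumed, not taken from the exporter's source:

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket")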
Oct 02 19:37:01 compute-0 ceph-mon[191910]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:01 compute-0 sudo[397936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsukosmzhuhcqokmatnksltzzaongrcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433821.0500693-610-85228773383838/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:01 compute-0 sudo[397936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:01 compute-0 python3.9[397938]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:02 compute-0 systemd[1]: Started libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope.
Oct 02 19:37:02 compute-0 podman[397939]: 2025-10-02 19:37:02.040009835 +0000 UTC m=+0.164836076 container exec 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:37:02 compute-0 podman[397939]: 2025-10-02 19:37:02.074294137 +0000 UTC m=+0.199120268 container exec_died 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:37:02 compute-0 sudo[397936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:02 compute-0 systemd[1]: libpod-conmon-308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431.scope: Deactivated successfully.
Oct 02 19:37:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:02 compute-0 sudo[398059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:02 compute-0 sudo[398059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:02 compute-0 sudo[398059]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:02 compute-0 sudo[398126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:37:02 compute-0 sudo[398126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:02 compute-0 sudo[398126]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:02 compute-0 podman[398110]: 2025-10-02 19:37:02.864918228 +0000 UTC m=+0.114240020 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:37:02 compute-0 sudo[398189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzzeknddjmbfnlnfhtttifbyxpznfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433822.383816-618-49600708660621/AnsiballZ_file.py'
Oct 02 19:37:02 compute-0 sudo[398189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:02 compute-0 sudo[398195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:02 compute-0 sudo[398195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:02 compute-0 sudo[398195]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:03 compute-0 python3.9[398196]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:03 compute-0 sudo[398189]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:03 compute-0 sudo[398222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:37:03 compute-0 sudo[398222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:03 compute-0 ceph-mon[191910]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:37:03
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr']
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:03 compute-0 sudo[398222]: pam_unix(sudo:session): session closed for user root
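The ceph-admin sudo entries above are the cephadm mgr module probing this host over SSH: a /bin/true reachability check, a `which python3`, then the copied cephadm binary run with gather-facts. Run by hand, gather-facts prints a JSON object of host facts; a sketch assuming a cephadm binary on PATH rather than the hash-named copy under /var/lib/ceph:

    import json
    import subprocess

    facts = json.loads(subprocess.run(["cephadm", "gather-facts"],
                                      capture_output=True, text=True,
                                      check=True).stdout)
    # Exact keys vary by release; hostname is stable.
    print(facts.get("hostname"), "-", len(facts), "fact fields")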
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev bec99492-41ea-4300-ba6c-0d4e6982593c does not exist
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 00f0267c-1f8b-4d77-9492-3a9c9a5cc539 does not exist
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c3186cc8-dd44-443b-9adf-cd85d34e9416 does not exist
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:37:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:37:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
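The audit entries above show the cephadm mgr module dispatching routine mon commands while reconciling this host. The same commands are available from the CLI; a sketch assuming a usable client.admin keyring on the node:

    import subprocess

    def mon_command(*args: str) -> str:
        return subprocess.run(["ceph", *args],
                              capture_output=True, text=True, check=True).stdout

    # Mirrors the dispatched commands logged above.
    minimal_conf = mon_command("config", "generate-minimal-conf")
    admin_auth = mon_command("auth", "get", "client.admin")
    destroyed = mon_command("osd", "tree", "destroyed", "--format", "json")
    print(minimal_conf)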
Oct 02 19:37:03 compute-0 sudo[398436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmevfmshsnxttbhaefpmoudomwagipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433823.404384-627-139971223463046/AnsiballZ_podman_container_info.py'
Oct 02 19:37:03 compute-0 sudo[398436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:37:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:37:03 compute-0 sudo[398420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:03 compute-0 sudo[398420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:03 compute-0 sudo[398420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:04 compute-0 sudo[398456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:37:04 compute-0 sudo[398456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:04 compute-0 sudo[398456]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:04 compute-0 python3.9[398451]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:37:04 compute-0 sudo[398481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:04 compute-0 sudo[398481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:04 compute-0 sudo[398481]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:04 compute-0 sudo[398436]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.292 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.293 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:37:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:04 compute-0 sudo[398516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:37:04 compute-0 sudo[398516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:37:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:37:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:04.931088258 +0000 UTC m=+0.064782845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.01950896 +0000 UTC m=+0.153203477 container create 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:37:05 compute-0 systemd[1]: Started libpod-conmon-55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b.scope.
Oct 02 19:37:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:05 compute-0 sudo[398747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulddwbnsxxpchymnmugmmlwwrwkwjaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433824.5914805-635-123488127495735/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:05 compute-0 sudo[398747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.209953645 +0000 UTC m=+0.343648172 container init 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.226216958 +0000 UTC m=+0.359911445 container start 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:37:05 compute-0 zen_mendel[398744]: 167 167
Oct 02 19:37:05 compute-0 systemd[1]: libpod-55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b.scope: Deactivated successfully.
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.239921663 +0000 UTC m=+0.373616150 container attach 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.240813796 +0000 UTC m=+0.374508293 container died 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8677c71535d5239307953ad7eddf123cdd49eacdcbd7e0c05428baec9891924a-merged.mount: Deactivated successfully.
Oct 02 19:37:05 compute-0 podman[398745]: 2025-10-02 19:37:05.321274627 +0000 UTC m=+0.155335023 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2)
Oct 02 19:37:05 compute-0 podman[398676]: 2025-10-02 19:37:05.330637166 +0000 UTC m=+0.464331643 container remove 55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 19:37:05 compute-0 systemd[1]: libpod-conmon-55e4218506eebb0736112c9e11fe73e1dc92946621c002190893634025a9d31b.scope: Deactivated successfully.
Oct 02 19:37:05 compute-0 python3.9[398757]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:05 compute-0 podman[398790]: 2025-10-02 19:37:05.554978743 +0000 UTC m=+0.096305463 container create b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:37:05 compute-0 podman[398790]: 2025-10-02 19:37:05.48944441 +0000 UTC m=+0.030771110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:05 compute-0 systemd[1]: Started libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope.
Oct 02 19:37:05 compute-0 podman[398787]: 2025-10-02 19:37:05.631466348 +0000 UTC m=+0.180231515 container exec c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public)
Oct 02 19:37:05 compute-0 systemd[1]: Started libpod-conmon-b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19.scope.
Oct 02 19:37:05 compute-0 podman[398787]: 2025-10-02 19:37:05.67405067 +0000 UTC m=+0.222815797 container exec_died c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, release=1755695350)
Oct 02 19:37:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:05 compute-0 podman[398790]: 2025-10-02 19:37:05.749903068 +0000 UTC m=+0.291229788 container init b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 02 19:37:05 compute-0 podman[398790]: 2025-10-02 19:37:05.769005016 +0000 UTC m=+0.310331706 container start b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:37:05 compute-0 sudo[398747]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:05 compute-0 systemd[1]: libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:37:05 compute-0 podman[398790]: 2025-10-02 19:37:05.783763049 +0000 UTC m=+0.325089739 container attach b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 19:37:05 compute-0 ceph-mon[191910]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:06 compute-0 sudo[398991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjorpiylukysxkiqrimkwhjgnuldejta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433826.0192277-643-157193922264186/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:06 compute-0 sudo[398991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:06 compute-0 python3.9[398995]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:06 compute-0 systemd[1]: Started libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope.
Oct 02 19:37:06 compute-0 podman[399006]: 2025-10-02 19:37:06.914203539 +0000 UTC m=+0.124609366 container exec c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct 02 19:37:06 compute-0 podman[399006]: 2025-10-02 19:37:06.948777199 +0000 UTC m=+0.159183036 container exec_died c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41)
Oct 02 19:37:06 compute-0 systemd[1]: libpod-conmon-c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc.scope: Deactivated successfully.
Oct 02 19:37:07 compute-0 sudo[398991]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:07 compute-0 objective_leakey[398825]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:37:07 compute-0 objective_leakey[398825]: --> relative data size: 1.0
Oct 02 19:37:07 compute-0 objective_leakey[398825]: --> All data devices are unavailable
Oct 02 19:37:07 compute-0 systemd[1]: libpod-b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19.scope: Deactivated successfully.
Oct 02 19:37:07 compute-0 podman[398790]: 2025-10-02 19:37:07.076208488 +0000 UTC m=+1.617535208 container died b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:37:07 compute-0 systemd[1]: libpod-b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19.scope: Consumed 1.224s CPU time.
Oct 02 19:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-58acbc2d21e64dbc39e1a52aee50029a6e1be6c74ab04eb1536ffa28ec62f5de-merged.mount: Deactivated successfully.
Oct 02 19:37:07 compute-0 podman[398790]: 2025-10-02 19:37:07.146085957 +0000 UTC m=+1.687412637 container remove b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:37:07 compute-0 systemd[1]: libpod-conmon-b8b6a1f73f9d5a0ed11cfb158cc2b50e9151da7875a646b744e7edaa0fdd4e19.scope: Deactivated successfully.
Oct 02 19:37:07 compute-0 sudo[398516]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:07 compute-0 sudo[399107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:07 compute-0 sudo[399107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:07 compute-0 sudo[399107]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:07 compute-0 sudo[399155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:37:07 compute-0 sudo[399155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:07 compute-0 sudo[399155]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:07 compute-0 sudo[399209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:07 compute-0 sudo[399209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:07 compute-0 sudo[399209]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:07 compute-0 sudo[399241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:37:07 compute-0 sudo[399241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:07 compute-0 sudo[399309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhtpfgbvqmohmxlpwqiyomjzuvavpcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433827.2148008-651-37913152649470/AnsiballZ_file.py'
Oct 02 19:37:07 compute-0 sudo[399309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:07 compute-0 ceph-mon[191910]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:07 compute-0 python3.9[399313]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:07 compute-0 sudo[399309]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.141645599 +0000 UTC m=+0.096284812 container create 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.099546919 +0000 UTC m=+0.054186232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:08 compute-0 systemd[1]: Started libpod-conmon-7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641.scope.
Oct 02 19:37:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.28902248 +0000 UTC m=+0.243661703 container init 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.302328314 +0000 UTC m=+0.256967547 container start 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:37:08 compute-0 quizzical_cori[399389]: 167 167
Oct 02 19:37:08 compute-0 systemd[1]: libpod-7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641.scope: Deactivated successfully.
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.317315332 +0000 UTC m=+0.271954545 container attach 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.317677272 +0000 UTC m=+0.272316485 container died 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 02 19:37:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:08 compute-0 podman[399386]: 2025-10-02 19:37:08.368898273 +0000 UTC m=+0.134637431 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Oct 02 19:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-769503ea0913e82d57cf01790fc6b0fd821730af4a0ee5194fda7f0fcf7f392f-merged.mount: Deactivated successfully.
Oct 02 19:37:08 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-41f91025dc7de8b6.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:37:08 compute-0 systemd[1]: 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17-41f91025dc7de8b6.service: Failed with result 'exit-code'.
Oct 02 19:37:08 compute-0 podman[399347]: 2025-10-02 19:37:08.437647032 +0000 UTC m=+0.392286245 container remove 7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:37:08 compute-0 systemd[1]: libpod-conmon-7bd251e9906bb4429cd4ed8f0e81f9796158a4bb325d12a180e74c3eb696d641.scope: Deactivated successfully.
Oct 02 19:37:08 compute-0 podman[399505]: 2025-10-02 19:37:08.691973468 +0000 UTC m=+0.078525350 container create 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:37:08 compute-0 podman[399505]: 2025-10-02 19:37:08.65071073 +0000 UTC m=+0.037262652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:08 compute-0 systemd[1]: Started libpod-conmon-24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff.scope.
Oct 02 19:37:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117e1458932ef474414cc6c5815a28b481cdafc0ed8d1d08b314bd84a537a4e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117e1458932ef474414cc6c5815a28b481cdafc0ed8d1d08b314bd84a537a4e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117e1458932ef474414cc6c5815a28b481cdafc0ed8d1d08b314bd84a537a4e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117e1458932ef474414cc6c5815a28b481cdafc0ed8d1d08b314bd84a537a4e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:08 compute-0 podman[399505]: 2025-10-02 19:37:08.863817519 +0000 UTC m=+0.250369441 container init 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:37:08 compute-0 sudo[399574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcardcwzdpwoftxlevwlrrvxxtjsskqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433828.347314-660-128337125581041/AnsiballZ_podman_container_info.py'
Oct 02 19:37:08 compute-0 sudo[399574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:08 compute-0 podman[399505]: 2025-10-02 19:37:08.883681767 +0000 UTC m=+0.270233659 container start 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:37:08 compute-0 podman[399505]: 2025-10-02 19:37:08.89283356 +0000 UTC m=+0.279385452 container attach 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:37:09 compute-0 python3.9[399577]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct 02 19:37:09 compute-0 sudo[399574]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:09 compute-0 strange_hamilton[399565]: {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     "0": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "devices": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "/dev/loop3"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             ],
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_name": "ceph_lv0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_size": "21470642176",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "name": "ceph_lv0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "tags": {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_name": "ceph",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.crush_device_class": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.encrypted": "0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_id": "0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.vdo": "0"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             },
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "vg_name": "ceph_vg0"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         }
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     ],
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     "1": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "devices": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "/dev/loop4"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             ],
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_name": "ceph_lv1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_size": "21470642176",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "name": "ceph_lv1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "tags": {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_name": "ceph",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.crush_device_class": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.encrypted": "0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_id": "1",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.vdo": "0"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             },
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "vg_name": "ceph_vg1"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         }
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     ],
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     "2": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "devices": [
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "/dev/loop5"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             ],
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_name": "ceph_lv2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_size": "21470642176",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "name": "ceph_lv2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "tags": {
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.cluster_name": "ceph",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.crush_device_class": "",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.encrypted": "0",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osd_id": "2",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:                 "ceph.vdo": "0"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             },
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "type": "block",
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:             "vg_name": "ceph_vg2"
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:         }
Oct 02 19:37:09 compute-0 strange_hamilton[399565]:     ]
Oct 02 19:37:09 compute-0 strange_hamilton[399565]: }
Oct 02 19:37:09 compute-0 systemd[1]: libpod-24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff.scope: Deactivated successfully.
Oct 02 19:37:09 compute-0 podman[399505]: 2025-10-02 19:37:09.74731049 +0000 UTC m=+1.133862402 container died 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-117e1458932ef474414cc6c5815a28b481cdafc0ed8d1d08b314bd84a537a4e5-merged.mount: Deactivated successfully.
Oct 02 19:37:09 compute-0 podman[399505]: 2025-10-02 19:37:09.857603103 +0000 UTC m=+1.244154985 container remove 24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 19:37:09 compute-0 ceph-mon[191910]: pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:09 compute-0 systemd[1]: libpod-conmon-24b2db2f2ff02c530456933b9ff1cbdc5b98862c396449503f2ebd20cdc678ff.scope: Deactivated successfully.
Oct 02 19:37:09 compute-0 sudo[399241]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:10 compute-0 sudo[399719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:10 compute-0 sudo[399719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:10 compute-0 sudo[399719]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:10 compute-0 sudo[399816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmsjfphtdzwlglkfgvmcpaivgvcgjfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433829.5291348-668-41226459626252/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:10 compute-0 sudo[399816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:10 compute-0 sudo[399764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:37:10 compute-0 sudo[399764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:10 compute-0 sudo[399764]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:10 compute-0 podman[399753]: 2025-10-02 19:37:10.168422992 +0000 UTC m=+0.137810867 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, config_id=edpm, maintainer=Red Hat, Inc.)
Oct 02 19:37:10 compute-0 sudo[399827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:10 compute-0 sudo[399827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:10 compute-0 sudo[399827]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:10 compute-0 python3.9[399824]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:10 compute-0 sudo[399852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:37:10 compute-0 sudo[399852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:10 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:37:10 compute-0 podman[399861]: 2025-10-02 19:37:10.546857749 +0000 UTC m=+0.188683800 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:37:10 compute-0 podman[399861]: 2025-10-02 19:37:10.600962198 +0000 UTC m=+0.242788199 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:37:10 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:37:10 compute-0 sudo[399816]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.175728607 +0000 UTC m=+0.109591496 container create 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.134635004 +0000 UTC m=+0.068497903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:11 compute-0 systemd[1]: Started libpod-conmon-188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087.scope.
Oct 02 19:37:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.332890507 +0000 UTC m=+0.266753386 container init 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.349430787 +0000 UTC m=+0.283293636 container start 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:37:11 compute-0 focused_brown[400040]: 167 167
Oct 02 19:37:11 compute-0 systemd[1]: libpod-188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087.scope: Deactivated successfully.
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.364242851 +0000 UTC m=+0.298105730 container attach 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.364872428 +0000 UTC m=+0.298735317 container died 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 19:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-92aefd9d8abedecf0323b5c9087b19a38065dc3527f63cd4ab44f1df3d302271-merged.mount: Deactivated successfully.
Oct 02 19:37:11 compute-0 podman[399972]: 2025-10-02 19:37:11.5450385 +0000 UTC m=+0.478901379 container remove 188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:37:11 compute-0 systemd[1]: libpod-conmon-188be20e5b6e0e4760ae05a9be6e2ef40a63821a4aa986f268bb3c6decc12087.scope: Deactivated successfully.
Oct 02 19:37:11 compute-0 sudo[400131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhsgbulmhfxydmxarngvwqxtdpqlpwql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433831.1383102-676-206137814736547/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:11 compute-0 sudo[400131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:11 compute-0 podman[400139]: 2025-10-02 19:37:11.826818616 +0000 UTC m=+0.108942359 container create 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 02 19:37:11 compute-0 podman[400139]: 2025-10-02 19:37:11.774897235 +0000 UTC m=+0.057020978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:37:11 compute-0 ceph-mon[191910]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:11 compute-0 python3.9[400133]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:11 compute-0 systemd[1]: Started libpod-conmon-7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2.scope.
Oct 02 19:37:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9401d76b90ad98cf59b4239dd802408ef57f2f002def1c010d406c71ca9ab450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9401d76b90ad98cf59b4239dd802408ef57f2f002def1c010d406c71ca9ab450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9401d76b90ad98cf59b4239dd802408ef57f2f002def1c010d406c71ca9ab450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9401d76b90ad98cf59b4239dd802408ef57f2f002def1c010d406c71ca9ab450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:37:12 compute-0 systemd[1]: Started libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope.
Oct 02 19:37:12 compute-0 podman[400139]: 2025-10-02 19:37:12.146972981 +0000 UTC m=+0.429096754 container init 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:37:12 compute-0 podman[400155]: 2025-10-02 19:37:12.167205679 +0000 UTC m=+0.229161347 container exec 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:37:12 compute-0 podman[400139]: 2025-10-02 19:37:12.179712112 +0000 UTC m=+0.461835855 container start 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:37:12 compute-0 podman[400139]: 2025-10-02 19:37:12.255854697 +0000 UTC m=+0.537978440 container attach 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:37:12 compute-0 podman[400155]: 2025-10-02 19:37:12.260604304 +0000 UTC m=+0.322559962 container exec_died 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:12 compute-0 sudo[400131]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:12 compute-0 systemd[1]: libpod-conmon-0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17.scope: Deactivated successfully.
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:37:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:37:13 compute-0 competent_kilby[400161]: {
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_id": 1,
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "type": "bluestore"
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     },
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_id": 2,
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "type": "bluestore"
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     },
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_id": 0,
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:37:13 compute-0 competent_kilby[400161]:         "type": "bluestore"
Oct 02 19:37:13 compute-0 competent_kilby[400161]:     }
Oct 02 19:37:13 compute-0 competent_kilby[400161]: }
Oct 02 19:37:13 compute-0 systemd[1]: libpod-7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2.scope: Deactivated successfully.
Oct 02 19:37:13 compute-0 systemd[1]: libpod-7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2.scope: Consumed 1.306s CPU time.
Oct 02 19:37:13 compute-0 podman[400139]: 2025-10-02 19:37:13.489101982 +0000 UTC m=+1.771225775 container died 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9401d76b90ad98cf59b4239dd802408ef57f2f002def1c010d406c71ca9ab450-merged.mount: Deactivated successfully.
Oct 02 19:37:13 compute-0 podman[400139]: 2025-10-02 19:37:13.710640805 +0000 UTC m=+1.992764508 container remove 7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:37:13 compute-0 podman[400294]: 2025-10-02 19:37:13.722504431 +0000 UTC m=+0.185612319 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 19:37:13 compute-0 systemd[1]: libpod-conmon-7c6f6e60941783b1d19a0e9c29fde2f8046657ac92e2f47e17e3446151797ca2.scope: Deactivated successfully.
Oct 02 19:37:13 compute-0 sudo[399852]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:37:13 compute-0 podman[400300]: 2025-10-02 19:37:13.7717087 +0000 UTC m=+0.240054927 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 19:37:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:37:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4f2dde96-2291-4e78-9842-4008b3db567b does not exist
Oct 02 19:37:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7d596390-6590-4276-86bc-b9e0b4c69cd2 does not exist
Oct 02 19:37:13 compute-0 podman[400301]: 2025-10-02 19:37:13.818259508 +0000 UTC m=+0.286232315 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:37:13 compute-0 sudo[400439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumbkfwaboerutyvodqljvcremjyejig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433833.2855937-684-182130134371324/AnsiballZ_file.py'
Oct 02 19:37:13 compute-0 sudo[400439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:13 compute-0 sudo[400432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:37:13 compute-0 sudo[400432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:13 compute-0 sudo[400432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:13 compute-0 sudo[400463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:37:13 compute-0 sudo[400463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:37:13 compute-0 sudo[400463]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:13 compute-0 ceph-mon[191910]: pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:13 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:13 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:37:14 compute-0 python3.9[400455]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:14 compute-0 sudo[400439]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:15 compute-0 sudo[400637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpvwbbxvbgaegmmmmgmoexjclveuhpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433834.4733627-693-76378591848059/AnsiballZ_podman_container_info.py'
Oct 02 19:37:15 compute-0 sudo[400637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:15 compute-0 python3.9[400639]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct 02 19:37:15 compute-0 sudo[400637]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:16 compute-0 ceph-mon[191910]: pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:16 compute-0 sudo[400802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pobjshlshydwolrcwzkztgzdgxfridda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433835.7469876-701-275429813245794/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:16 compute-0 sudo[400802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:16 compute-0 python3.9[400804]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:16 compute-0 systemd[1]: Started libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope.
Oct 02 19:37:16 compute-0 podman[400805]: 2025-10-02 19:37:16.806282779 +0000 UTC m=+0.156016991 container exec 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:37:16 compute-0 podman[400805]: 2025-10-02 19:37:16.841731592 +0000 UTC m=+0.191465744 container exec_died 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container)
Oct 02 19:37:16 compute-0 systemd[1]: libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope: Deactivated successfully.
Oct 02 19:37:16 compute-0 sudo[400802]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:17 compute-0 sudo[400999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihsjekasavtfrwnmaaavifohzhxpitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433837.2505932-709-275718010089881/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:17 compute-0 sudo[400999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:17 compute-0 podman[400957]: 2025-10-02 19:37:17.842230146 +0000 UTC m=+0.136637216 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:37:18 compute-0 ceph-mon[191910]: pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:18 compute-0 python3.9[401004]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:18 compute-0 systemd[1]: Started libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope.
Oct 02 19:37:18 compute-0 podman[401008]: 2025-10-02 19:37:18.183932645 +0000 UTC m=+0.126101805 container exec 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Oct 02 19:37:18 compute-0 podman[401008]: 2025-10-02 19:37:18.218938946 +0000 UTC m=+0.161108126 container exec_died 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, version=9.4, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Oct 02 19:37:18 compute-0 sudo[400999]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:18 compute-0 systemd[1]: libpod-conmon-584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733.scope: Deactivated successfully.
Oct 02 19:37:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:19 compute-0 ceph-mon[191910]: pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:19 compute-0 podman[401163]: 2025-10-02 19:37:19.169282275 +0000 UTC m=+0.102084745 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:37:19 compute-0 sudo[401206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oetcgxblsakjadrnoumvbnzqvcezodbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433838.6084635-717-203928180956318/AnsiballZ_file.py'
Oct 02 19:37:19 compute-0 sudo[401206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:19 compute-0 python3.9[401215]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:19 compute-0 sudo[401206]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:20 compute-0 sudo[401366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsinpgyucfioqyxqrrvlmwxqhzrjctmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433839.7397213-726-77061838889668/AnsiballZ_podman_container_info.py'
Oct 02 19:37:20 compute-0 sudo[401366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:20 compute-0 python3.9[401368]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 02 19:37:20 compute-0 sudo[401366]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:21 compute-0 ceph-mon[191910]: pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:21 compute-0 sudo[401531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dihcrwunczlfvzjybplenzlechduucjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433841.0219975-734-115845928706718/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:21 compute-0 sudo[401531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:21 compute-0 python3.9[401533]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:21 compute-0 systemd[1]: Started libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope.
Oct 02 19:37:21 compute-0 podman[401534]: 2025-10-02 19:37:21.949262404 +0000 UTC m=+0.139315057 container exec 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:37:21 compute-0 podman[401534]: 2025-10-02 19:37:21.98597102 +0000 UTC m=+0.176023673 container exec_died 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:37:22 compute-0 systemd[1]: libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope: Deactivated successfully.
Oct 02 19:37:22 compute-0 sudo[401531]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:22 compute-0 sudo[401714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfnvxdgmfrsifrbtmusymimvxjinempw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433842.3198023-742-223289913356357/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:22 compute-0 sudo[401714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:23 compute-0 python3.9[401716]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:23 compute-0 systemd[1]: Started libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope.
Oct 02 19:37:23 compute-0 podman[401717]: 2025-10-02 19:37:23.3408775 +0000 UTC m=+0.248635305 container exec 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:37:23 compute-0 podman[401717]: 2025-10-02 19:37:23.393761777 +0000 UTC m=+0.301519512 container exec_died 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 02 19:37:23 compute-0 systemd[1]: libpod-conmon-6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19.scope: Deactivated successfully.
Oct 02 19:37:23 compute-0 ceph-mon[191910]: pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:23 compute-0 sudo[401714]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:24 compute-0 sudo[401897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldixlvvjxxgphmlbzgxmthqxzxvmaofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433844.3677597-750-217701350564282/AnsiballZ_file.py'
Oct 02 19:37:24 compute-0 sudo[401897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:25 compute-0 python3.9[401899]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
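The ansible.builtin.file task above enforces root-owned 0700 permissions recursively on the healthcheck mount. A rough Python sketch of the same effect (path taken from the log; must run as root):

    import os

    path = "/var/lib/openstack/healthchecks/ovn_metadata_agent"
    os.makedirs(path, exist_ok=True)           # state=directory
    for root, dirs, files in os.walk(path):    # recurse=True
        os.chown(root, 0, 0)                   # owner=0, group=0
        os.chmod(root, 0o700)                  # mode=0700
        for name in files:
            fp = os.path.join(root, name)
            os.chown(fp, 0, 0)
            os.chmod(fp, 0o700)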
Oct 02 19:37:25 compute-0 sudo[401897]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:25 compute-0 ceph-mon[191910]: pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:27 compute-0 sudo[402049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oecptuxugsdwwympnvwifphfcaajjnxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433845.514894-759-133535966752132/AnsiballZ_podman_container_info.py'
Oct 02 19:37:27 compute-0 sudo[402049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:27 compute-0 podman[402051]: 2025-10-02 19:37:27.322219275 +0000 UTC m=+0.133488732 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
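The health_status=healthy event above is podman running the container's configured healthcheck ('test': '/openstack/healthcheck' against the mounted healthcheck directory). The same check can be triggered on demand; a sketch using the multipathd container name from the log:

    import subprocess

    # "podman healthcheck run" executes the configured test once and
    # exits 0 when the container is healthy
    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")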
Oct 02 19:37:27 compute-0 python3.9[402052]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 02 19:37:27 compute-0 ceph-mon[191910]: pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:27 compute-0 sudo[402049]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:28 compute-0 sudo[402232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxixdyahdakmkzfqohuqoppugckhpczc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433847.9354467-767-160922857419408/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:28 compute-0 sudo[402232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:28 compute-0 python3.9[402234]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:28 compute-0 systemd[1]: Started libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope.
Oct 02 19:37:28 compute-0 podman[402235]: 2025-10-02 19:37:28.986976648 +0000 UTC m=+0.212498843 container exec a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:37:29 compute-0 podman[402235]: 2025-10-02 19:37:29.057107764 +0000 UTC m=+0.282629879 container exec_died a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:37:29 compute-0 systemd[1]: libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope: Deactivated successfully.
Oct 02 19:37:29 compute-0 sudo[402232]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:29 compute-0 podman[157186]: time="2025-10-02T19:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:37:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct 02 19:37:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8524 "" "Go-http-client/1.1"
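The two GET requests above are the podman system service answering libpod REST calls over its unix socket (the podman_exporter container mounts /run/podman/podman.sock for exactly this purpose). A minimal sketch of the same containers/json query, assuming that default socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over the libpod API unix socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # path assumed from the log
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")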
Oct 02 19:37:29 compute-0 ceph-mon[191910]: pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:30 compute-0 sudo[402414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtxiweknujzjcyxtsgfhrvdkeovmimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433849.5238545-775-154442464878175/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:30 compute-0 sudo[402414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:30 compute-0 python3.9[402416]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:30 compute-0 systemd[1]: Started libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope.
Oct 02 19:37:30 compute-0 podman[402417]: 2025-10-02 19:37:30.550157909 +0000 UTC m=+0.131457928 container exec a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:37:30 compute-0 podman[402417]: 2025-10-02 19:37:30.586608488 +0000 UTC m=+0.167908547 container exec_died a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:37:30 compute-0 systemd[1]: libpod-conmon-a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92.scope: Deactivated successfully.
Oct 02 19:37:30 compute-0 sudo[402414]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:31 compute-0 openstack_network_exporter[372736]: ERROR   19:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:37:31 compute-0 openstack_network_exporter[372736]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:31 compute-0 openstack_network_exporter[372736]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:31 compute-0 openstack_network_exporter[372736]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:37:31 compute-0 openstack_network_exporter[372736]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
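The exporter errors above mean its appctl calls cannot find control socket files for ovsdb-server and ovn-northd on this node, and the PMD queries fail because no userspace datapath exists here. The sockets it probes live in the usual OVS/OVN rundirs; a sketch of the lookup, with the paths assumed rather than taken from the exporter's config:

    import glob

    # Typical appctl control-socket locations (assumed defaults)
    patterns = [
        "/var/run/openvswitch/*.ctl",
        "/var/run/ovn/*.ctl",
    ]
    for pat in patterns:
        print(pat, "->", glob.glob(pat) or "no control socket files found")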
Oct 02 19:37:31 compute-0 sudo[402596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvrcpuuumjbdvbxllszregqjhicfrxbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433850.9555619-783-253911122398020/AnsiballZ_file.py'
Oct 02 19:37:31 compute-0 sudo[402596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:31 compute-0 python3.9[402598]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:31 compute-0 sudo[402596]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:31 compute-0 ceph-mon[191910]: pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:37:32.283 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:37:32.284 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:37:32.284 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
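The three lockutils lines above are one pass of the agent's child-process monitor taking and releasing an in-process named lock. The decorator form of the same oslo.concurrency pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # body runs with the named in-process lock held, matching the
        # acquire/release pair in the log
        pass

    check_child_processes()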
Oct 02 19:37:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:32 compute-0 sudo[402748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eammhaiwsvdvqzexjcsnpjhwmwbzvxmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433852.1772997-792-44182597407227/AnsiballZ_podman_container_info.py'
Oct 02 19:37:32 compute-0 sudo[402748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:33 compute-0 python3.9[402750]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 02 19:37:33 compute-0 sudo[402748]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:37:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:37:33 compute-0 podman[402813]: 2025-10-02 19:37:33.671516657 +0000 UTC m=+0.100402512 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:37:33 compute-0 ceph-mon[191910]: pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:34 compute-0 sudo[402936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioghbgicpojwzlnfudxgkqidzogoglpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433853.5103407-800-124498816668857/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:34 compute-0 sudo[402936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:34 compute-0 python3.9[402938]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:34 compute-0 systemd[1]: Started libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope.
Oct 02 19:37:34 compute-0 podman[402939]: 2025-10-02 19:37:34.588471438 +0000 UTC m=+0.221236776 container exec 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 19:37:34 compute-0 podman[402939]: 2025-10-02 19:37:34.634338988 +0000 UTC m=+0.267104256 container exec_died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:37:34 compute-0 systemd[1]: libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope: Deactivated successfully.
Oct 02 19:37:34 compute-0 sudo[402936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:35 compute-0 sudo[403133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkddzopzpqqphcoxafmdgljqwijgupws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433855.0289125-808-9958696789540/AnsiballZ_podman_container_exec.py'
Oct 02 19:37:35 compute-0 sudo[403133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:35 compute-0 podman[403092]: 2025-10-02 19:37:35.63401469 +0000 UTC m=+0.137543189 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:37:35 compute-0 python3.9[403140]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:37:35 compute-0 ceph-mon[191910]: pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:36 compute-0 systemd[1]: Started libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope.
Oct 02 19:37:36 compute-0 podman[403141]: 2025-10-02 19:37:36.159610121 +0000 UTC m=+0.285068804 container exec 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:37:36 compute-0 podman[403160]: 2025-10-02 19:37:36.276538382 +0000 UTC m=+0.086724028 container exec_died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 19:37:36 compute-0 podman[403141]: 2025-10-02 19:37:36.357023363 +0000 UTC m=+0.482481986 container exec_died 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:37:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:36 compute-0 systemd[1]: libpod-conmon-21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd.scope: Deactivated successfully.
Oct 02 19:37:36 compute-0 sudo[403133]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:37 compute-0 sudo[403321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwlnckypysxyjuhrhpyyqaxahgjinbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433856.8305218-816-59822206544367/AnsiballZ_file.py'
Oct 02 19:37:37 compute-0 sudo[403321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:37 compute-0 python3.9[403323]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:37 compute-0 sudo[403321]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:38 compute-0 ceph-mon[191910]: pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:38 compute-0 podman[403423]: 2025-10-02 19:37:38.70653183 +0000 UTC m=+0.123957069 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:37:39 compute-0 ceph-mon[191910]: pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:39 compute-0 sudo[403493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rohsobdfjuesyuggfqbvnizhvkwbvbgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433857.9462178-825-223093550024305/AnsiballZ_file.py'
Oct 02 19:37:39 compute-0 sudo[403493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.406 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.408 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.408 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.408 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.409 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.409 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.572 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.573 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.595 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.596 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.597 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.618 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.620 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.620 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:39 compute-0 python3.9[403495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:39 compute-0 sudo[403493]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.671 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.672 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.673 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.674 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:37:39 compute-0 nova_compute[355794]: 2025-10-02 19:37:39.675 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:37:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579477793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.238 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/579477793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
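The round trip above (nova spawning "ceph df" via oslo_concurrency.processutils, ceph-mon auditing the dispatch) is how nova sizes the RBD-backed disk pool during the resource audit. The same call in Python, with the command copied from the log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    # cluster-wide totals; field names per "ceph df" JSON output
    print(stats["stats"]["total_avail_bytes"])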
Oct 02 19:37:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:40 compute-0 sudo[403682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhkqtbkbczsarpmztyjzhjjwhtsjnwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433859.9835143-833-62603703974586/AnsiballZ_stat.py'
Oct 02 19:37:40 compute-0 sudo[403682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:40 compute-0 podman[403641]: 2025-10-02 19:37:40.534508784 +0000 UTC m=+0.096883118 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, release-0.7.12=, vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4)
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.624 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.625 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4526MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.626 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.627 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.684 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.685 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:37:40 compute-0 nova_compute[355794]: 2025-10-02 19:37:40.705 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:40 compute-0 python3.9[403689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:40 compute-0 sudo[403682]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:37:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310359590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:37:41 compute-0 nova_compute[355794]: 2025-10-02 19:37:41.251 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:41 compute-0 nova_compute[355794]: 2025-10-02 19:37:41.262 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:37:41 compute-0 nova_compute[355794]: 2025-10-02 19:37:41.285 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:37:41 compute-0 nova_compute[355794]: 2025-10-02 19:37:41.287 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:37:41 compute-0 nova_compute[355794]: 2025-10-02 19:37:41.287 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
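[editor's note] The inventory reported to placement above is what actually bounds scheduling: usable capacity per resource class is (total - reserved) x allocation_ratio. A quick check against the logged figures (numbers copied straight from the inventory line; the formula is placement's standard capacity calculation):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1 schedulable units

So this otherwise idle node can overcommit to 32 vCPUs, while memory is held at 1:1 and disk is kept 10% under its physical size.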
Oct 02 19:37:41 compute-0 ceph-mon[191910]: pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:41 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2310359590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:37:42 compute-0 sudo[403788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdotegjcmdsdelnktfxfeapmfbsggxoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433859.9835143-833-62603703974586/AnsiballZ_file.py'
Oct 02 19:37:42 compute-0 sudo[403788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:42 compute-0 python3.9[403790]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:42 compute-0 sudo[403788]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:43 compute-0 ceph-mon[191910]: pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:43 compute-0 sudo[403940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceallsxfdclccksyvauxipxdbgkrxqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433862.881088-846-219386722602082/AnsiballZ_file.py'
Oct 02 19:37:43 compute-0 sudo[403940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:43 compute-0 python3.9[403942]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:43 compute-0 sudo[403940]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:44 compute-0 sudo[404140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opinilspgtpeqmzoccinzczhbtcdlmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433864.0299828-854-63577558147980/AnsiballZ_stat.py'
Oct 02 19:37:44 compute-0 sudo[404140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:44 compute-0 podman[404066]: 2025-10-02 19:37:44.618764824 +0000 UTC m=+0.133214225 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 19:37:44 compute-0 podman[404067]: 2025-10-02 19:37:44.630951018 +0000 UTC m=+0.135044793 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:37:44 compute-0 podman[404068]: 2025-10-02 19:37:44.680586698 +0000 UTC m=+0.178349395 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:37:44 compute-0 python3.9[404147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:44 compute-0 sudo[404140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:45 compute-0 sudo[404229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgoxoqftvnqaagpszlmgjrxiacmgfsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433864.0299828-854-63577558147980/AnsiballZ_file.py'
Oct 02 19:37:45 compute-0 sudo[404229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:45 compute-0 python3.9[404231]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:45 compute-0 sudo[404229]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:45 compute-0 ceph-mon[191910]: pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:46 compute-0 sudo[404381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvyxpnmpvlgdgxuhsgaccqjxemvtofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433865.804477-866-104229988374258/AnsiballZ_stat.py'
Oct 02 19:37:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:46 compute-0 sudo[404381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:46 compute-0 python3.9[404383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:46 compute-0 sudo[404381]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:47 compute-0 sudo[404459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxrgivflvxygsnptwujdbdjebwrmtbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433865.804477-866-104229988374258/AnsiballZ_file.py'
Oct 02 19:37:47 compute-0 sudo[404459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:47 compute-0 python3.9[404461]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.al836ij4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:47 compute-0 sudo[404459]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:47 compute-0 ceph-mon[191910]: pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:48 compute-0 sudo[404628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icsfpgyutnubgszayumgwbbhwswpxqyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433867.6294665-878-60288056947967/AnsiballZ_stat.py'
Oct 02 19:37:48 compute-0 sudo[404628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:48 compute-0 podman[404585]: 2025-10-02 19:37:48.226101011 +0000 UTC m=+0.146423456 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:37:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:48 compute-0 python3.9[404632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:48 compute-0 sudo[404628]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:48 compute-0 sudo[404708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foumlpitmhdtubkxqkbdwsldbbggkujy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433867.6294665-878-60288056947967/AnsiballZ_file.py'
Oct 02 19:37:48 compute-0 sudo[404708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:49 compute-0 python3.9[404710]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:49 compute-0 sudo[404708]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:49 compute-0 ceph-mon[191910]: pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:49 compute-0 podman[404759]: 2025-10-02 19:37:49.695719704 +0000 UTC m=+0.112395451 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:37:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:50 compute-0 sudo[404885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnhngxocthnirkzwbzypkviqljbrfveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433869.5532892-891-75556853391338/AnsiballZ_command.py'
Oct 02 19:37:50 compute-0 sudo[404885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:50 compute-0 python3.9[404887]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:37:50 compute-0 sudo[404885]: pam_unix(sudo:session): session closed for user root
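[editor's note] `nft -j list ruleset` dumps the live ruleset as JSON, which the firewall role snapshots here before regenerating its files. A short sketch of walking that output for table names, assuming root privileges (the flat "nftables" array of metainfo/table/chain/rule objects is nft's documented JSON layout):

    import json
    import subprocess

    raw = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout

    for obj in json.loads(raw)["nftables"]:
        if "table" in obj:
            t = obj["table"]
            print(t["family"], t["name"])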
Oct 02 19:37:51 compute-0 sudo[405038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyftrmlykyftbctbqnhakqdqokiyqzr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433870.8477314-899-103360220877805/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:37:51 compute-0 sudo[405038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:51 compute-0 ceph-mon[191910]: pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:51 compute-0 python3[405040]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:37:51 compute-0 sudo[405038]: pam_unix(sudo:session): session closed for user root
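[editor's note] edpm_nftables_from_files is a custom module from the edpm-ansible collection; the log only shows its src argument, so its exact merge semantics are internal. A hypothetical sketch of the gathering step it implies, assuming each snippet under the directory (kepler.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml) holds a list of rule dicts:

    from pathlib import Path
    import yaml  # PyYAML

    def rules_from_files(src="/var/lib/edpm-config/firewall"):
        # Assumption: every *.yaml file is a list of firewall rule entries
        # that the role later renders into /etc/nftables/edpm-rules.nft.
        rules = []
        for path in sorted(Path(src).glob("*.yaml")):
            rules.extend(yaml.safe_load(path.read_text()) or [])
        return rules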
Oct 02 19:37:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:53 compute-0 sudo[405190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shspfdtbejsestxwjahudzjrckuxlpnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433872.1399562-907-220500533897962/AnsiballZ_stat.py'
Oct 02 19:37:53 compute-0 sudo[405190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:53 compute-0 python3.9[405192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:53 compute-0 sudo[405190]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:53 compute-0 ceph-mon[191910]: pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:54 compute-0 sudo[405268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuubnvalrkmqeskdhmlxxigyryoyiplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433872.1399562-907-220500533897962/AnsiballZ_file.py'
Oct 02 19:37:54 compute-0 sudo[405268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:54 compute-0 python3.9[405270]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:54 compute-0 sudo[405268]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:37:55 compute-0 ceph-mon[191910]: pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:56 compute-0 sudo[405420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behtbokkoklyznmyyhalzjmvvpiwsmiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433875.4818501-919-206617786728570/AnsiballZ_stat.py'
Oct 02 19:37:56 compute-0 sudo[405420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:56 compute-0 python3.9[405422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:56 compute-0 sudo[405420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:56 compute-0 sudo[405498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulazadscliuzzuwvbrhukhhzpeorzpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433875.4818501-919-206617786728570/AnsiballZ_file.py'
Oct 02 19:37:56 compute-0 sudo[405498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:57 compute-0 python3.9[405500]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:57 compute-0 sudo[405498]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:57 compute-0 podman[405577]: 2025-10-02 19:37:57.670193487 +0000 UTC m=+0.105661012 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:37:57 compute-0 sudo[405670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbgevlkogjgntkittmsucohopxaljgbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433877.410287-931-274062488936189/AnsiballZ_stat.py'
Oct 02 19:37:57 compute-0 sudo[405670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:57 compute-0 ceph-mon[191910]: pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:58 compute-0 python3.9[405672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:37:58 compute-0 sudo[405670]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:37:58 compute-0 sudo[405748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfwczzydnzhoakxyhqgtwdsljzbglbiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433877.410287-931-274062488936189/AnsiballZ_file.py'
Oct 02 19:37:58 compute-0 sudo[405748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:58 compute-0 python3.9[405750]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:37:58 compute-0 sudo[405748]: pam_unix(sudo:session): session closed for user root
Oct 02 19:37:59 compute-0 sudo[405900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hufokjhuxikvfgabwystaeigjrcjvhat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433879.1566727-943-96318710366304/AnsiballZ_stat.py'
Oct 02 19:37:59 compute-0 sudo[405900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:37:59 compute-0 podman[157186]: time="2025-10-02T19:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:37:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:37:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8527 "" "Go-http-client/1.1"
Oct 02 19:37:59 compute-0 ceph-mon[191910]: pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:00 compute-0 python3.9[405902]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:38:00 compute-0 sudo[405900]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:00 compute-0 sudo[405978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnrkghvgkfopiyullpagadgenvakhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433879.1566727-943-96318710366304/AnsiballZ_file.py'
Oct 02 19:38:00 compute-0 sudo[405978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:00 compute-0 python3.9[405980]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:00 compute-0 sudo[405978]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:01 compute-0 openstack_network_exporter[372736]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:01 compute-0 openstack_network_exporter[372736]: ERROR   19:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:38:01 compute-0 openstack_network_exporter[372736]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:01 compute-0 openstack_network_exporter[372736]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:38:01 compute-0 openstack_network_exporter[372736]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:38:01 compute-0 sudo[406130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fggapppqeiqkkyrspkwtfjtbiqvxruhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433881.093539-955-184391917852175/AnsiballZ_stat.py'
Oct 02 19:38:01 compute-0 sudo[406130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:01 compute-0 python3.9[406132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:38:01 compute-0 sudo[406130]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:02 compute-0 ceph-mon[191910]: pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:02 compute-0 sudo[406208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdlybzfgtebznsklibevwfjmqjaucog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433881.093539-955-184391917852175/AnsiballZ_file.py'
Oct 02 19:38:02 compute-0 sudo[406208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:02 compute-0 python3.9[406210]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:02 compute-0 sudo[406208]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:03 compute-0 sudo[406360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjkbgozyauokrjhofcgfpqutitxtlzzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433882.861581-968-281410184896330/AnsiballZ_command.py'
Oct 02 19:38:03 compute-0 sudo[406360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:38:03
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control']
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:38:03 compute-0 python3.9[406362]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
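[editor's note] The pipeline above concatenates the generated fragments in load order (chains, flushes, rules, update-jumps, jumps) and lets nft parse them without committing anything; -c is check-only. The same dry run from Python, assuming the five files exist as logged:

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = "".join(open(f).read() for f in FILES)
    # nft -c -f -: validate the combined ruleset from stdin, apply nothing.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                   text=True, check=True)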
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:03 compute-0 sudo[406360]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:38:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:38:04 compute-0 ceph-mon[191910]: pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:04 compute-0 podman[406468]: 2025-10-02 19:38:04.709895114 +0000 UTC m=+0.124106422 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:38:04 compute-0 sudo[406538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znepmbqsynpzvryzblkfulxlvtfwmlao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433883.9602787-976-11921586642188/AnsiballZ_blockinfile.py'
Oct 02 19:38:04 compute-0 sudo[406538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:05 compute-0 python3.9[406540]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:05 compute-0 sudo[406538]: pam_unix(sudo:session): session closed for user root
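[editor's note] With the parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK", BEGIN/END markers, validate=nft -c -f %s), the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf comes out as the following, so the persistent boot-time config loads the same files just validated:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK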
Oct 02 19:38:05 compute-0 ceph-mon[191910]: pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:05 compute-0 podman[406664]: 2025-10-02 19:38:05.996080776 +0000 UTC m=+0.116365916 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 19:38:06 compute-0 sudo[406709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jivgmbefztjhbuqivqfmtwcftueunccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433885.4335446-985-10257410367036/AnsiballZ_command.py'
Oct 02 19:38:06 compute-0 sudo[406709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:06 compute-0 python3.9[406712]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:38:06 compute-0 sudo[406709]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:07 compute-0 sudo[406863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkqabbfxlrlalwctejbyxgsqquiofdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433886.585102-993-54207917313921/AnsiballZ_stat.py'
Oct 02 19:38:07 compute-0 sudo[406863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:07 compute-0 python3.9[406865]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:38:07 compute-0 sudo[406863]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:07 compute-0 ceph-mon[191910]: pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:08 compute-0 sudo[407015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtzgbgejvnthtvkmaygepzrhnpiiotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433887.7645338-1002-54159773666586/AnsiballZ_file.py'
Oct 02 19:38:08 compute-0 sudo[407015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:08 compute-0 python3.9[407017]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:08 compute-0 sudo[407015]: pam_unix(sudo:session): session closed for user root
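[editor's note] The stat on edpm-rules.nft.changed followed by state=absent reads like a change-marker pattern: a flag file written when the rules template changed, consumed here once handled. The surrounding handler logic is not in the log, so this is an assumption; a sketch of the idiom:

    from pathlib import Path
    import subprocess

    marker = Path("/etc/nftables/edpm-rules.nft.changed")
    if marker.exists():
        # Hypothetical reload step: only re-apply the generated rules
        # when the template actually changed, then clear the flag.
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-rules.nft"], check=True)
        marker.unlink()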
Oct 02 19:38:09 compute-0 sshd-session[385044]: Connection closed by 192.168.122.30 port 36444
Oct 02 19:38:09 compute-0 sshd-session[385041]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:38:09 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct 02 19:38:09 compute-0 systemd[1]: session-60.scope: Consumed 2min 13.203s CPU time.
Oct 02 19:38:09 compute-0 systemd-logind[793]: Session 60 logged out. Waiting for processes to exit.
Oct 02 19:38:09 compute-0 systemd-logind[793]: Removed session 60.
Oct 02 19:38:09 compute-0 podman[407044]: 2025-10-02 19:38:09.430582245 +0000 UTC m=+0.127463261 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:09 compute-0 ceph-mon[191910]: pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:10 compute-0 unix_chkpwd[407066]: password check failed for user (root)
Oct 02 19:38:10 compute-0 sshd-session[407018]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.162.46.234  user=root
Oct 02 19:38:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:11 compute-0 podman[407067]: 2025-10-02 19:38:11.726010158 +0000 UTC m=+0.144151032 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Oct 02 19:38:11 compute-0 ceph-mon[191910]: pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:12 compute-0 sshd-session[407018]: Failed password for root from 39.162.46.234 port 33689 ssh2
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:38:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:38:13 compute-0 ceph-mon[191910]: pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:14 compute-0 sshd-session[407018]: Received disconnect from 39.162.46.234 port 33689:11:  [preauth]
Oct 02 19:38:14 compute-0 sshd-session[407018]: Disconnected from authenticating user root 39.162.46.234 port 33689 [preauth]
Oct 02 19:38:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:38:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Cumulative writes: 4600 writes, 20K keys, 4600 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                            Cumulative WAL: 4600 writes, 4600 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1292 writes, 5608 keys, 1292 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
                                            Interval WAL: 1292 writes, 1292 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.2      0.23              0.09        11    0.021       0      0       0.0       0.0
                                              L6      1/0    6.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    127.8    104.8      0.66              0.35        10    0.066     42K   5259       0.0       0.0
                                             Sum      1/0    6.69 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     94.6    102.1      0.89              0.44        21    0.043     42K   5259       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    123.5    123.6      0.29              0.19         8    0.036     18K   2059       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    127.8    104.8      0.66              0.35        10    0.066     42K   5259       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.8      0.23              0.09        10    0.023       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.021, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.9 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 308.00 MB usage: 6.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00011 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(409,6.27 MB,2.03526%) FilterBlock(22,127.17 KB,0.0403218%) IndexBlock(22,237.58 KB,0.0753279%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Oct 02 19:38:14 compute-0 sudo[407085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:14 compute-0 sudo[407085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:14 compute-0 sudo[407085]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:14 compute-0 sudo[407110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:38:14 compute-0 sudo[407110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:14 compute-0 sudo[407110]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:14 compute-0 sudo[407135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:14 compute-0 sudo[407135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:14 compute-0 sudo[407135]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:14 compute-0 sudo[407160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:38:14 compute-0 sudo[407160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:14 compute-0 podman[407195]: 2025-10-02 19:38:14.839205603 +0000 UTC m=+0.127464762 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:38:14 compute-0 podman[407196]: 2025-10-02 19:38:14.860931609 +0000 UTC m=+0.134687146 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:38:14 compute-0 podman[407197]: 2025-10-02 19:38:14.882049599 +0000 UTC m=+0.148931361 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:15 compute-0 sudo[407160]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:15 compute-0 sshd-session[407272]: Accepted publickey for zuul from 192.168.122.30 port 44322 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:38:15 compute-0 systemd-logind[793]: New session 61 of user zuul.
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:38:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 88308d99-cd27-48c6-be16-0eae972abb9c does not exist
Oct 02 19:38:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 26e10697-d9fb-46d1-9d8a-36b726d59c72 does not exist
Oct 02 19:38:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 91a6736d-522f-4775-a89f-ae265dd2e66c does not exist
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:38:15 compute-0 systemd[1]: Started Session 61 of User zuul.
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:38:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:38:15 compute-0 sshd-session[407272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:38:15 compute-0 sudo[407278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:15 compute-0 sudo[407278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:15 compute-0 sudo[407278]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:15 compute-0 sudo[407327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:38:15 compute-0 sudo[407327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:15 compute-0 sudo[407327]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:15 compute-0 sudo[407352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:15 compute-0 sudo[407352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:15 compute-0 sudo[407352]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:15 compute-0 sudo[407377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:38:15 compute-0 sudo[407377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:15 compute-0 ceph-mon[191910]: pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:38:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.352835714 +0000 UTC m=+0.088195880 container create cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:38:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.320894933 +0000 UTC m=+0.056255079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:16 compute-0 systemd[1]: Started libpod-conmon-cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b.scope.
Oct 02 19:38:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.511170748 +0000 UTC m=+0.246530974 container init cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.522593856 +0000 UTC m=+0.257953992 container start cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.528458754 +0000 UTC m=+0.263819020 container attach cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:38:16 compute-0 agitated_clarke[407498]: 167 167
Oct 02 19:38:16 compute-0 systemd[1]: libpod-cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b.scope: Deactivated successfully.
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.535958607 +0000 UTC m=+0.271318763 container died cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd14428f0301f8dd4b77f96c20bbea30e9c271182b931a1b273e54cb58b467fc-merged.mount: Deactivated successfully.
Oct 02 19:38:16 compute-0 podman[407470]: 2025-10-02 19:38:16.613603912 +0000 UTC m=+0.348964048 container remove cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:38:16 compute-0 systemd[1]: libpod-conmon-cd4c8f34e5fbb85a8cad709e63116c833c01248538f768321e60f6bdb94a122b.scope: Deactivated successfully.
Oct 02 19:38:16 compute-0 podman[407554]: 2025-10-02 19:38:16.901657467 +0000 UTC m=+0.088982903 container create 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:38:16 compute-0 podman[407554]: 2025-10-02 19:38:16.865483841 +0000 UTC m=+0.052809307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:16 compute-0 systemd[1]: Started libpod-conmon-24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942.scope.
Oct 02 19:38:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:17 compute-0 podman[407554]: 2025-10-02 19:38:17.07034797 +0000 UTC m=+0.257673426 container init 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:17 compute-0 podman[407554]: 2025-10-02 19:38:17.091018088 +0000 UTC m=+0.278343514 container start 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:38:17 compute-0 podman[407554]: 2025-10-02 19:38:17.096497226 +0000 UTC m=+0.283822652 container attach 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:38:17 compute-0 python3.9[407626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:38:17 compute-0 ceph-mon[191910]: pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:18 compute-0 keen_galileo[407595]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:38:18 compute-0 keen_galileo[407595]: --> relative data size: 1.0
Oct 02 19:38:18 compute-0 keen_galileo[407595]: --> All data devices are unavailable
Oct 02 19:38:18 compute-0 systemd[1]: libpod-24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942.scope: Deactivated successfully.
Oct 02 19:38:18 compute-0 podman[407554]: 2025-10-02 19:38:18.371548659 +0000 UTC m=+1.558874135 container died 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:38:18 compute-0 systemd[1]: libpod-24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942.scope: Consumed 1.232s CPU time.
Oct 02 19:38:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a5da78c3b747ad0b3755b1c507363429403a67b58a9c9a4344a79ffa9e7b64-merged.mount: Deactivated successfully.
Oct 02 19:38:18 compute-0 podman[407554]: 2025-10-02 19:38:18.499136833 +0000 UTC m=+1.686462269 container remove 24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:38:18 compute-0 systemd[1]: libpod-conmon-24d98dfd40f2d0a09f0fcc52fcc2460cf6040506f4fb1b78d7b87722ba2ce942.scope: Deactivated successfully.
Oct 02 19:38:18 compute-0 podman[407731]: 2025-10-02 19:38:18.554919879 +0000 UTC m=+0.147803861 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Oct 02 19:38:18 compute-0 sudo[407377]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:18 compute-0 sudo[407784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:18 compute-0 sudo[407784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:18 compute-0 sudo[407784]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:18 compute-0 sudo[407831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:38:18 compute-0 sudo[407831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:18 compute-0 sudo[407831]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:18 compute-0 sudo[407889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtmoypcyyrvoudjaobccukrljihpspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433898.005614-34-201571328821944/AnsiballZ_systemd.py'
Oct 02 19:38:18 compute-0 sudo[407889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:18 compute-0 sudo[407883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:18 compute-0 sudo[407883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:18 compute-0 sudo[407883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:19 compute-0 sudo[407913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:38:19 compute-0 sudo[407913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:19 compute-0 python3.9[407906]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Oct 02 19:38:19 compute-0 sudo[407889]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:19 compute-0 podman[408024]: 2025-10-02 19:38:19.629610185 +0000 UTC m=+0.039533398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:19 compute-0 podman[408024]: 2025-10-02 19:38:19.790317042 +0000 UTC m=+0.200240255 container create c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:38:19 compute-0 ceph-mon[191910]: pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:19 compute-0 systemd[1]: Started libpod-conmon-c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970.scope.
Oct 02 19:38:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:20 compute-0 podman[408024]: 2025-10-02 19:38:20.1330064 +0000 UTC m=+0.542929593 container init c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:20 compute-0 podman[408024]: 2025-10-02 19:38:20.150468751 +0000 UTC m=+0.560391944 container start c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:38:20 compute-0 pedantic_haslett[408101]: 167 167
Oct 02 19:38:20 compute-0 systemd[1]: libpod-c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970.scope: Deactivated successfully.
Oct 02 19:38:20 compute-0 podman[408024]: 2025-10-02 19:38:20.162518597 +0000 UTC m=+0.572441790 container attach c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:38:20 compute-0 podman[408024]: 2025-10-02 19:38:20.163732649 +0000 UTC m=+0.573655862 container died c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:38:20 compute-0 podman[408089]: 2025-10-02 19:38:20.218453136 +0000 UTC m=+0.358015582 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3e679573957e5ae6efe7a9228116d9bebdf0baebfc106f96069b40b6b9f560-merged.mount: Deactivated successfully.
Oct 02 19:38:20 compute-0 podman[408024]: 2025-10-02 19:38:20.268644841 +0000 UTC m=+0.678568034 container remove c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:38:20 compute-0 sudo[408178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxdjzfkfrndexhejxyuutijbxirjeck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433899.5444732-42-5181713517084/AnsiballZ_setup.py'
Oct 02 19:38:20 compute-0 sudo[408178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:20 compute-0 systemd[1]: libpod-conmon-c4e23ac4e950c2475e7db531e8918e11a3151254e18f07a793689158283ab970.scope: Deactivated successfully.
Oct 02 19:38:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:20 compute-0 podman[408188]: 2025-10-02 19:38:20.571074854 +0000 UTC m=+0.111497781 container create fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:20 compute-0 python3.9[408182]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:38:20 compute-0 podman[408188]: 2025-10-02 19:38:20.538082783 +0000 UTC m=+0.078505760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:20 compute-0 systemd[1]: Started libpod-conmon-fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582.scope.
Oct 02 19:38:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba33747d71d5a30ef6f7eefef2c764a76a7bfa1ed381f95470288c3788c8c8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba33747d71d5a30ef6f7eefef2c764a76a7bfa1ed381f95470288c3788c8c8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba33747d71d5a30ef6f7eefef2c764a76a7bfa1ed381f95470288c3788c8c8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba33747d71d5a30ef6f7eefef2c764a76a7bfa1ed381f95470288c3788c8c8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:20 compute-0 podman[408188]: 2025-10-02 19:38:20.732701596 +0000 UTC m=+0.273124563 container init fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:20 compute-0 podman[408188]: 2025-10-02 19:38:20.766700003 +0000 UTC m=+0.307122930 container start fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:20 compute-0 podman[408188]: 2025-10-02 19:38:20.775028348 +0000 UTC m=+0.315451275 container attach fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:38:20 compute-0 sudo[408178]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:21 compute-0 sudo[408291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paieavcviaocrjcwmszjvlfxatkugflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433899.5444732-42-5181713517084/AnsiballZ_dnf.py'
Oct 02 19:38:21 compute-0 sudo[408291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]: {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     "0": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "devices": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "/dev/loop3"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             ],
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_name": "ceph_lv0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_size": "21470642176",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "name": "ceph_lv0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "tags": {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_name": "ceph",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.crush_device_class": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.encrypted": "0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_id": "0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.vdo": "0"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             },
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "vg_name": "ceph_vg0"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         }
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     ],
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     "1": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "devices": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "/dev/loop4"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             ],
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_name": "ceph_lv1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_size": "21470642176",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "name": "ceph_lv1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "tags": {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_name": "ceph",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.crush_device_class": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.encrypted": "0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_id": "1",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.vdo": "0"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             },
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "vg_name": "ceph_vg1"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         }
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     ],
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     "2": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "devices": [
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "/dev/loop5"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             ],
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_name": "ceph_lv2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_size": "21470642176",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "name": "ceph_lv2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "tags": {
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.cluster_name": "ceph",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.crush_device_class": "",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.encrypted": "0",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osd_id": "2",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:                 "ceph.vdo": "0"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             },
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "type": "block",
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:             "vg_name": "ceph_vg2"
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:         }
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]:     ]
Oct 02 19:38:21 compute-0 ecstatic_rhodes[408209]: }
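The JSON printed by ecstatic_rhodes is ceph-volume lvm list output: OSD ids 0 to 2, each backed by a single LV on a loop device, with the flat lv_tags string duplicating the parsed tags object. A small sketch of consuming it, assuming the blob has been saved to lvm_list.json (hypothetical filename):

    import json

    with open("lvm_list.json") as fh:  # hypothetical file holding the JSON above
        osds = json.load(fh)

    def parse_lv_tags(raw):
        """Turn the flat 'k=v,k=v' lv_tags string into a dict."""
        return dict(kv.split("=", 1) for kv in raw.split(","))

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = parse_lv_tags(lv["lv_tags"])
            assert tags == lv["tags"]  # the two representations should agree
            print(osd_id, lv["lv_path"], lv["devices"], tags["ceph.osd_fsid"])

For this data the assertion holds: lv_tags is just the tags object flattened into LVM tag syntax.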
Oct 02 19:38:21 compute-0 systemd[1]: libpod-fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582.scope: Deactivated successfully.
Oct 02 19:38:21 compute-0 podman[408188]: 2025-10-02 19:38:21.674292489 +0000 UTC m=+1.214715416 container died fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba33747d71d5a30ef6f7eefef2c764a76a7bfa1ed381f95470288c3788c8c8d7-merged.mount: Deactivated successfully.
Oct 02 19:38:21 compute-0 podman[408188]: 2025-10-02 19:38:21.787724191 +0000 UTC m=+1.328147088 container remove fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:38:21 compute-0 systemd[1]: libpod-conmon-fbfba6ce6f7cf94dd21a943485c14324be989f22e1920689717a1f0df453f582.scope: Deactivated successfully.
Oct 02 19:38:21 compute-0 sudo[407913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:21 compute-0 python3.9[408293]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:38:21 compute-0 ceph-mon[191910]: pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:21 compute-0 sudo[408306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:21 compute-0 sudo[408306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:21 compute-0 sudo[408306]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:22 compute-0 sudo[408332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:38:22 compute-0 sudo[408332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:22 compute-0 sudo[408332]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:22 compute-0 sudo[408357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:22 compute-0 sudo[408357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:22 compute-0 sudo[408357]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:22 compute-0 sudo[408382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:38:22 compute-0 sudo[408382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
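The sudo line above shows how cephadm gathers this inventory: it re-executes itself from /var/lib/ceph/<fsid>/ and wraps ceph-volume in a short-lived container from the pinned image, here asking for raw list --format json. A hedged sketch of driving the same subcommand from Python, assuming cephadm is on PATH and the caller is root:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"  # cluster fsid from the log

    # Mirrors the logged command: cephadm runs ceph-volume inside a
    # throwaway container; everything after "--" is passed to ceph-volume.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))

The hopeful_dewdney and infallible_allen containers below are exactly these wrapper containers being created, run, and removed.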
Oct 02 19:38:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.798957824 +0000 UTC m=+0.072136108 container create 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.766028915 +0000 UTC m=+0.039207269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:22 compute-0 systemd[1]: Started libpod-conmon-5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa.scope.
Oct 02 19:38:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.928072799 +0000 UTC m=+0.201251163 container init 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.937606226 +0000 UTC m=+0.210784530 container start 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:38:22 compute-0 hopeful_dewdney[408458]: 167 167
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.943990549 +0000 UTC m=+0.217168873 container attach 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:38:22 compute-0 systemd[1]: libpod-5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa.scope: Deactivated successfully.
Oct 02 19:38:22 compute-0 conmon[408458]: conmon 5042196919e1b49fe213 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa.scope/container/memory.events
Oct 02 19:38:22 compute-0 podman[408444]: 2025-10-02 19:38:22.949818816 +0000 UTC m=+0.222997130 container died 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-675b10d017481f18b144d703d235ad097f593165c9c2740a39afdb27aa9e582a-merged.mount: Deactivated successfully.
Oct 02 19:38:23 compute-0 podman[408444]: 2025-10-02 19:38:23.029142727 +0000 UTC m=+0.302321031 container remove 5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:38:23 compute-0 systemd[1]: libpod-conmon-5042196919e1b49fe213318b542a3e37d6dacba9a5a18a007c05d642a317d2fa.scope: Deactivated successfully.
Oct 02 19:38:23 compute-0 podman[408481]: 2025-10-02 19:38:23.308591179 +0000 UTC m=+0.084764139 container create 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:38:23 compute-0 podman[408481]: 2025-10-02 19:38:23.274877829 +0000 UTC m=+0.051050869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:38:23 compute-0 systemd[1]: Started libpod-conmon-71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603.scope.
Oct 02 19:38:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25421016225180281b47b14867c48957befac06b0dcd012b012433a662f21f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25421016225180281b47b14867c48957befac06b0dcd012b012433a662f21f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25421016225180281b47b14867c48957befac06b0dcd012b012433a662f21f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25421016225180281b47b14867c48957befac06b0dcd012b012433a662f21f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:38:23 compute-0 podman[408481]: 2025-10-02 19:38:23.511028512 +0000 UTC m=+0.287201492 container init 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:23 compute-0 podman[408481]: 2025-10-02 19:38:23.529697586 +0000 UTC m=+0.305870546 container start 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:38:23 compute-0 podman[408481]: 2025-10-02 19:38:23.535907253 +0000 UTC m=+0.312080243 container attach 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:23 compute-0 sudo[408291]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:23 compute-0 ceph-mon[191910]: pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.912825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433903912901, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1474, "num_deletes": 251, "total_data_size": 2326434, "memory_usage": 2375800, "flush_reason": "Manual Compaction"}
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433903940807, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2293176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19426, "largest_seqno": 20899, "table_properties": {"data_size": 2286352, "index_size": 3959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14097, "raw_average_key_size": 19, "raw_value_size": 2272641, "raw_average_value_size": 3187, "num_data_blocks": 181, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433745, "oldest_key_time": 1759433745, "file_creation_time": 1759433903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 28076 microseconds, and 11930 cpu microseconds.
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.940910) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2293176 bytes OK
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.940938) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.944112) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.944143) EVENT_LOG_v1 {"time_micros": 1759433903944133, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.944174) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2319969, prev total WAL file size 2319969, number of live WAL files 2.
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.945775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2239KB)], [47(6853KB)]
Oct 02 19:38:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433903945881, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9311126, "oldest_snapshot_seqno": -1}
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4279 keys, 7548009 bytes, temperature: kUnknown
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433904020765, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7548009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7518366, "index_size": 17821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 105761, "raw_average_key_size": 24, "raw_value_size": 7439795, "raw_average_value_size": 1738, "num_data_blocks": 749, "num_entries": 4279, "num_filter_entries": 4279, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759433903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.021120) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7548009 bytes
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.024173) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.2 rd, 100.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4793, records dropped: 514 output_compression: NoCompression
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.024206) EVENT_LOG_v1 {"time_micros": 1759433904024190, "job": 24, "event": "compaction_finished", "compaction_time_micros": 74966, "compaction_time_cpu_micros": 36974, "output_level": 6, "num_output_files": 1, "total_output_size": 7548009, "num_input_records": 4793, "num_output_records": 4279, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433904025274, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759433904029020, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:23.945524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.029260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.029267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.029269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.029271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:38:24 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:38:24.029273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
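The ceph-mon rocksdb block above is one manual flush-plus-compaction cycle: JOB 23 flushes the memtable to L0 table #49, JOB 24 compacts #49 with #47 into L6 table #50, then both inputs are deleted. The EVENT_LOG_v1 payloads are plain JSON after a fixed prefix, so the cycle can be summarized mechanically; a sketch, again assuming a local journal.log copy:

    import json
    import re

    EV = re.compile(r"rocksdb: .*?EVENT_LOG_v1 (\{.*\})")

    with open("journal.log") as fh:  # hypothetical local copy of this journal
        events = [json.loads(m.group(1)) for m in map(EV.search, fh) if m]

    for ev in events:
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            dropped = ev["num_input_records"] - ev["num_output_records"]
            print(f"job {ev['job']}: {mb:.1f} MB out in {secs:.3f}s, "
                  f"{dropped} records dropped")

For JOB 24 this reproduces the figures in the summary line: about 7.5 MB written in 0.075 s with 514 records dropped.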
Oct 02 19:38:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:24 compute-0 infallible_allen[408497]: {
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_id": 1,
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "type": "bluestore"
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     },
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_id": 2,
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "type": "bluestore"
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     },
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_id": 0,
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:38:24 compute-0 infallible_allen[408497]:         "type": "bluestore"
Oct 02 19:38:24 compute-0 infallible_allen[408497]:     }
Oct 02 19:38:24 compute-0 infallible_allen[408497]: }
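This second blob, from infallible_allen, is the ceph-volume raw list result the cephadm call requested: the same three OSDs, now keyed by osd_uuid rather than osd_id, reporting the activated device-mapper paths and type bluestore. Cross-checking it against the earlier lvm list output is straightforward; a sketch assuming both blobs were saved as raw_list.json and lvm_list.json (hypothetical filenames):

    import json

    raw = json.load(open("raw_list.json"))  # keyed by osd_uuid
    lvm = json.load(open("lvm_list.json"))  # keyed by osd_id

    for osd_id, lvs in sorted(lvm.items()):
        uuid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = raw[uuid]
        assert entry["osd_id"] == int(osd_id)
        assert entry["ceph_fsid"] == lvs[0]["tags"]["ceph.cluster_fsid"]
        print(osd_id, uuid, entry["device"], entry["type"])

For this run all three OSDs line up: osd_fsid from the LV tags matches the raw-list key, and ceph_fsid matches the cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9 throughout.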
Oct 02 19:38:24 compute-0 systemd[1]: libpod-71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603.scope: Deactivated successfully.
Oct 02 19:38:24 compute-0 systemd[1]: libpod-71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603.scope: Consumed 1.182s CPU time.
Oct 02 19:38:24 compute-0 podman[408481]: 2025-10-02 19:38:24.714629457 +0000 UTC m=+1.490802417 container died 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:38:24 compute-0 sudo[408686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agrcmsizudqbuunghwmvvmilbebqxwti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433903.9975667-54-196663434725039/AnsiballZ_stat.py'
Oct 02 19:38:24 compute-0 sudo[408686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25421016225180281b47b14867c48957befac06b0dcd012b012433a662f21f0-merged.mount: Deactivated successfully.
Oct 02 19:38:25 compute-0 python3.9[408691]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:38:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:25 compute-0 sudo[408686]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:25 compute-0 ceph-mon[191910]: pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:25 compute-0 podman[408481]: 2025-10-02 19:38:25.20192758 +0000 UTC m=+1.978100570 container remove 71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:38:25 compute-0 systemd[1]: libpod-conmon-71340f3b1803cfccdcce44f0eb775adafec2da238dcbfe35034f00b64777f603.scope: Deactivated successfully.
Oct 02 19:38:25 compute-0 sudo[408382]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:38:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:38:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:38:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
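The two mon_command calls store the refreshed inventory in the monitor's config-key store, under mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0; this is how the cephadm mgr module caches per-host state between refreshes. The cached value (typically a JSON blob) can be read back with the ceph CLI; a sketch via subprocess, assuming admin credentials are available on the host:

    import json
    import subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"  # key name taken from the log

    blob = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        check=True, capture_output=True, text=True,
    ).stdout
    # Assumes the stored value is JSON, which is how cephadm serializes
    # its host caches; print only a prefix to keep output readable.
    print(json.dumps(json.loads(blob), indent=2)[:400])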
Oct 02 19:38:25 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f8fe2d32-f237-47d0-8fde-af69309782d8 does not exist
Oct 02 19:38:25 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0a0b0164-c730-4a3b-a7bd-c6c3f6169696 does not exist
Oct 02 19:38:25 compute-0 sudo[408698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:38:25 compute-0 sudo[408698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:25 compute-0 sudo[408698]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:25 compute-0 sudo[408723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:38:25 compute-0 sudo[408723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:38:25 compute-0 sudo[408723]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:38:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:38:26 compute-0 sudo[408821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnduivxmwqicjwqfkqvsfkxaamfoundu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433903.9975667-54-196663434725039/AnsiballZ_file.py'
Oct 02 19:38:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:26 compute-0 sudo[408821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:26 compute-0 python3.9[408823]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:26 compute-0 sudo[408821]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:27 compute-0 ceph-mon[191910]: pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:27 compute-0 sudo[408973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwydlvwmkonukwmwgurspfpkebnhrvze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433907.0372157-66-7879404828636/AnsiballZ_file.py'
Oct 02 19:38:27 compute-0 sudo[408973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:27 compute-0 python3.9[408975]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:27 compute-0 sudo[408973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:28 compute-0 podman[409059]: 2025-10-02 19:38:28.725756467 +0000 UTC m=+0.137589105 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:38:28 compute-0 sudo[409145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbkftosqjboqiihzqwtwglvhfbxyrfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433908.3314126-74-94839390545130/AnsiballZ_stat.py'
Oct 02 19:38:28 compute-0 sudo[409145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:29 compute-0 python3.9[409147]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:38:29 compute-0 sudo[409145]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:29 compute-0 ceph-mon[191910]: pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:29 compute-0 sudo[409223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzllfvgkydpzouxriqdjtetxrommuylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759433908.3314126-74-94839390545130/AnsiballZ_file.py'
Oct 02 19:38:29 compute-0 sudo[409223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:38:29 compute-0 podman[157186]: time="2025-10-02T19:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:38:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:38:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8527 "" "Go-http-client/1.1"
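The two GET lines are the podman system service answering libpod REST calls on its unix socket; some agent is polling the container list and per-container stats. The same endpoint can be hit with curl --unix-socket, or from Python with a raw HTTP/1.0 request so the reply is not chunk-encoded; a sketch assuming the default root socket at /run/podman/podman.sock:

    import json
    import socket

    SOCK = "/run/podman/podman.sock"  # assumed default root socket path
    PATH = "/v4.9.3/libpod/containers/json?all=true"

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    # HTTP/1.0 keeps the server from using chunked transfer encoding,
    # so the body can be split off the headers directly.
    s.sendall(f"GET {PATH} HTTP/1.0\r\nHost: d\r\n\r\n".encode())
    reply = b""
    while chunk := s.recv(65536):
        reply += chunk
    body = reply.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")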
Oct 02 19:38:29 compute-0 python3.9[409225]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:38:29 compute-0 sudo[409223]: pam_unix(sudo:session): session closed for user root
Oct 02 19:38:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:30 compute-0 sshd-session[407279]: Connection closed by 192.168.122.30 port 44322
Oct 02 19:38:30 compute-0 sshd-session[407272]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:38:30 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Oct 02 19:38:30 compute-0 systemd[1]: session-61.scope: Consumed 10.718s CPU time.
Oct 02 19:38:30 compute-0 systemd-logind[793]: Session 61 logged out. Waiting for processes to exit.
Oct 02 19:38:30 compute-0 systemd-logind[793]: Removed session 61.
Oct 02 19:38:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:31 compute-0 openstack_network_exporter[372736]: ERROR   19:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:38:31 compute-0 openstack_network_exporter[372736]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:31 compute-0 openstack_network_exporter[372736]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:31 compute-0 openstack_network_exporter[372736]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct 02 19:38:31 compute-0 openstack_network_exporter[372736]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:38:31 compute-0 ceph-mon[191910]: pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:38:32.284 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:38:32.285 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:38:32.285 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
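The ovn_metadata_agent triple (acquiring, acquired, released, all within about a millisecond) is oslo.concurrency's lockutils guarding ProcessMonitor._check_child_processes. In application code the pattern is usually spelled with the synchronized decorator; a hedged sketch of that pattern, not Neutron's actual source:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # The body runs under the named lock; oslo emits the
            # "Acquiring lock" / "acquired" / "released" DEBUG lines
            # seen above around each invocation.
            pass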
Oct 02 19:38:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:33 compute-0 ceph-mon[191910]: pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:38:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:38:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:35 compute-0 ceph-mon[191910]: pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:35 compute-0 podman[409250]: 2025-10-02 19:38:35.73077981 +0000 UTC m=+0.149895216 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:38:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:36 compute-0 podman[409273]: 2025-10-02 19:38:36.720244936 +0000 UTC m=+0.138559811 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:38:37 compute-0 ceph-mon[191910]: pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:38 compute-0 nova_compute[355794]: 2025-10-02 19:38:38.243 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:38 compute-0 nova_compute[355794]: 2025-10-02 19:38:38.244 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:38 compute-0 nova_compute[355794]: 2025-10-02 19:38:38.244 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
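The skip above is nova's guard on CONF.reclaim_instance_interval: when the option is <= 0, _reclaim_queued_deletes returns immediately, so soft-deleted instances are never reclaimed by this task. A minimal sketch of that guard (illustrative only, not nova's actual code; 'queued' and 'age_s' are hypothetical):

    def reclaim_queued_deletes(reclaim_instance_interval, queued):
        # Mirrors the "CONF.reclaim_instance_interval <= 0, skipping..." branch.
        if reclaim_instance_interval <= 0:
            return []
        # Otherwise reclaim instances soft-deleted longer ago than the interval.
        return [inst for inst in queued if inst['age_s'] > reclaim_instance_interval]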
Oct 02 19:38:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:38 compute-0 nova_compute[355794]: 2025-10-02 19:38:38.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:38 compute-0 nova_compute[355794]: 2025-10-02 19:38:38.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 ceph-mon[191910]: pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:39 compute-0 nova_compute[355794]: 2025-10-02 19:38:39.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 nova_compute[355794]: 2025-10-02 19:38:39.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 nova_compute[355794]: 2025-10-02 19:38:39.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 podman[409292]: 2025-10-02 19:38:39.67375898 +0000 UTC m=+0.104023808 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Oct 02 19:38:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:41 compute-0 ceph-mon[191910]: pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.624 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.624 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.667 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.668 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.668 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
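The Acquiring/acquired/released triplet above, with its waited/held timings, is emitted by oslo.concurrency's lockutils when a method is wrapped with its synchronized decorator. A minimal sketch of that public API (assumes oslo.concurrency is installed):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body runs with the "compute_resources" lock held; lockutils logs the
        # waited/held durations at DEBUG, exactly as in the lines above.
        pass

    clean_compute_node_cache()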
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.669 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:38:41 compute-0 nova_compute[355794]: 2025-10-02 19:38:41.669 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:38:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530218083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.171 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
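The 0.5s subprocess above is nova's storage capacity probe for the RBD backend. A minimal sketch of the same call and its JSON output (assumes oslo.concurrency, the client.openstack keyring, and a reachable cluster, as in the log):

    import json
    from oslo_concurrency import processutils

    # Same command line as the log; the 'stats' section carries the cluster-wide
    # totals behind the "60 GiB / 60 GiB avail" pgmap figures.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / 1024 ** 3, 'GiB free')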
Oct 02 19:38:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/530218083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.617 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.618 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.618 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.618 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.674 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.674 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:38:42 compute-0 nova_compute[355794]: 2025-10-02 19:38:42.690 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:42 compute-0 podman[409335]: 2025-10-02 19:38:42.72601741 +0000 UTC m=+0.143127224 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:38:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:38:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825367763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:38:43 compute-0 nova_compute[355794]: 2025-10-02 19:38:43.180 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:38:43 compute-0 nova_compute[355794]: 2025-10-02 19:38:43.189 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:38:43 compute-0 nova_compute[355794]: 2025-10-02 19:38:43.205 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:38:43 compute-0 nova_compute[355794]: 2025-10-02 19:38:43.206 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:38:43 compute-0 nova_compute[355794]: 2025-10-02 19:38:43.206 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
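The inventory dict logged at 19:38:43.205 is what placement uses to compute schedulable capacity: (total - reserved) * allocation_ratio per resource class. A worked check with the values from that line:

    # Values copied from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1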
Oct 02 19:38:43 compute-0 ceph-mon[191910]: pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1825367763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:38:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:45 compute-0 ceph-mon[191910]: pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:45 compute-0 podman[409377]: 2025-10-02 19:38:45.701628611 +0000 UTC m=+0.109803114 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:38:45 compute-0 podman[409376]: 2025-10-02 19:38:45.708345003 +0000 UTC m=+0.122668802 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 19:38:45 compute-0 podman[409378]: 2025-10-02 19:38:45.7597251 +0000 UTC m=+0.165849408 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:38:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:47 compute-0 ceph-mon[191910]: pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:49 compute-0 ceph-mon[191910]: pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:49 compute-0 podman[409437]: 2025-10-02 19:38:49.703630995 +0000 UTC m=+0.122717633 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:38:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:50 compute-0 podman[409458]: 2025-10-02 19:38:50.683213294 +0000 UTC m=+0.103626218 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
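The config_data dict in these health_status lines is the edpm_ansible-rendered container definition. Roughly, the node_exporter entry corresponds to a podman invocation like the following (a hypothetical hand translation, not something edpm_ansible emits; volumes abbreviated, and with 'net': 'host' the ports mapping is effectively a no-op):

    import subprocess

    # Hypothetical equivalent of the config_data logged above.
    subprocess.run([
        'podman', 'run', '--name', 'node_exporter', '--net', 'host',
        '--privileged', '--user', 'root', '--restart', 'always',
        '-v', '/var/lib/openstack/config/telemetry/node_exporter.yaml'
              ':/etc/node_exporter/node_exporter.yaml:z',
        'quay.io/prometheus/node-exporter:v1.5.0',
        '--web.config.file=/etc/node_exporter/node_exporter.yaml',
    ], check=True)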
Oct 02 19:38:51 compute-0 ceph-mon[191910]: pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:53 compute-0 ceph-mon[191910]: pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:38:55 compute-0 ceph-mon[191910]: pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1642151460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1642151460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2536276315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2536276315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1642151460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1642151460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2536276315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2536276315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3142340084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:38:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3142340084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:38:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:38:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3142340084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:38:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3142340084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
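The df / "osd pool get-quota" pairs above are a remote client (by source IP 192.168.122.10, the control plane) polling pool capacity and quota for the volumes pool. A minimal sketch issuing the same mon commands through the librados Python binding (assumes python3-rados and the client.openstack keyring):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # Each call appears in the mon audit log as cmd=[{...}]: dispatch.
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(ret, out[:80])
    finally:
        cluster.shutdown()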
Oct 02 19:38:59 compute-0 podman[409481]: 2025-10-02 19:38:59.652839257 +0000 UTC m=+0.079394773 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 19:38:59 compute-0 podman[157186]: time="2025-10-02T19:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:38:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:38:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8534 "" "Go-http-client/1.1"
Oct 02 19:38:59 compute-0 ceph-mon[191910]: pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:01 compute-0 openstack_network_exporter[372736]: ERROR   19:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:39:01 compute-0 openstack_network_exporter[372736]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:01 compute-0 openstack_network_exporter[372736]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:01 compute-0 openstack_network_exporter[372736]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:39:01 compute-0 openstack_network_exporter[372736]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
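The appctl errors above mean the exporter found no ovsdb-server/ovn-northd control sockets and no userspace datapath, which is expected on a compute node that runs neither ovn-northd nor DPDK PMD threads. The failing PMD calls can be reproduced directly; a minimal sketch (assumes ovs-vswitchd is running and its run directory is accessible):

    import subprocess

    # With no netdev (userspace) datapath configured, ovs-vswitchd answers with
    # "please specify an existing datapath", matching the exporter errors above.
    r = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())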
Oct 02 19:39:02 compute-0 ceph-mon[191910]: pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:39:03
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta']
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:39:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
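The rbd_support handlers above reload the per-pool mirror-snapshot and trash-purge schedules (here for vms, volumes, backups, and images, all with an empty start_after cursor). The schedules themselves are managed via the rbd CLI; a minimal sketch listing them for one pool (assumes the rbd client and the client.openstack keyring):

    import subprocess

    for cmd in (['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--pool', 'volumes'],
                ['rbd', 'trash', 'purge', 'schedule', 'ls', '--pool', 'volumes']):
        # Empty output corresponds to the empty schedules loaded above.
        subprocess.run(cmd + ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                       check=False)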
Oct 02 19:39:04 compute-0 ceph-mon[191910]: pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.293 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.293 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
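The two lines above say the [pollsters] source defines more pollsters than worker threads, so a single-thread executor works through them in sequence. A minimal sketch of that pattern (illustrative, not ceilometer's code):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['disk.device.read.requests', 'disk.device.usage', 'power.state']
    # max_workers=1 matches "Processing pollsters for [pollsters] with [1] threads":
    # submissions queue up and run one after another.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(lambda n=name: f'{n}: polled') for name in pollsters]
        for f in futures:
            print(f.result())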
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:39:04.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:04 compute-0 ceph-mgr[192222]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2078717049
Oct 02 19:39:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:06 compute-0 ceph-mon[191910]: pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:06 compute-0 podman[409502]: 2025-10-02 19:39:06.677691139 +0000 UTC m=+0.096355351 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:39:07 compute-0 podman[409524]: 2025-10-02 19:39:07.707990787 +0000 UTC m=+0.131015927 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:39:08 compute-0 ceph-mon[191910]: pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:09 compute-0 ceph-mon[191910]: pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:10 compute-0 podman[409544]: 2025-10-02 19:39:10.710965356 +0000 UTC m=+0.135778725 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct 02 19:39:11 compute-0 ceph-mon[191910]: pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:39:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:39:13 compute-0 ceph-mon[191910]: pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:13 compute-0 podman[409564]: 2025-10-02 19:39:13.690239776 +0000 UTC m=+0.121526000 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:39:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:15 compute-0 ceph-mon[191910]: pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:16 compute-0 podman[409582]: 2025-10-02 19:39:16.68703226 +0000 UTC m=+0.113193426 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:39:16 compute-0 podman[409583]: 2025-10-02 19:39:16.714123561 +0000 UTC m=+0.127995595 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:39:16 compute-0 podman[409584]: 2025-10-02 19:39:16.776113784 +0000 UTC m=+0.182295351 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:39:17 compute-0 ceph-mon[191910]: pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:19 compute-0 ceph-mon[191910]: pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:20 compute-0 podman[409644]: 2025-10-02 19:39:20.693535335 +0000 UTC m=+0.124859671 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Oct 02 19:39:20 compute-0 podman[409664]: 2025-10-02 19:39:20.857052077 +0000 UTC m=+0.104881491 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:39:21 compute-0 ceph-mon[191910]: pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:23 compute-0 ceph-mon[191910]: pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:25 compute-0 ceph-mon[191910]: pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:25 compute-0 sudo[409688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:25 compute-0 sudo[409688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:25 compute-0 sudo[409688]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:25 compute-0 sudo[409713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:39:25 compute-0 sudo[409713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:25 compute-0 sudo[409713]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:26 compute-0 sudo[409738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:26 compute-0 sudo[409738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:26 compute-0 sudo[409738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:26 compute-0 sudo[409763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:39:26 compute-0 sudo[409763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:26 compute-0 sudo[409763]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:26 compute-0 sudo[409819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:26 compute-0 sudo[409819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:27 compute-0 sudo[409819]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:27 compute-0 sudo[409844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:39:27 compute-0 sudo[409844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:27 compute-0 sudo[409844]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:27 compute-0 sudo[409869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:27 compute-0 sudo[409869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:27 compute-0 sudo[409869]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:27 compute-0 sudo[409894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 19:39:27 compute-0 sudo[409894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:27 compute-0 ceph-mon[191910]: pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:27 compute-0 sudo[409894]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ae58e16a-88e0-49a5-8038-b334c133a860 does not exist
Oct 02 19:39:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2815609d-70d8-4455-b453-b2477df20d22 does not exist
Oct 02 19:39:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 79711775-6cec-45a1-b7ad-2674e970457d does not exist
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:39:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:39:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:39:27 compute-0 sudo[409936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:28 compute-0 sudo[409936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:28 compute-0 sudo[409936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:28 compute-0 sudo[409961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:39:28 compute-0 sudo[409961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:28 compute-0 sudo[409961]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:28 compute-0 sudo[409986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:28 compute-0 sudo[409986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:28 compute-0 sudo[409986]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:28 compute-0 sudo[410011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:39:28 compute-0 sudo[410011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:39:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:39:28 compute-0 podman[410073]: 2025-10-02 19:39:28.971234389 +0000 UTC m=+0.079985530 container create 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:28.934695113 +0000 UTC m=+0.043446304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:29 compute-0 systemd[1]: Started libpod-conmon-5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680.scope.
Oct 02 19:39:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:29.132358108 +0000 UTC m=+0.241109289 container init 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:29.154013872 +0000 UTC m=+0.262765003 container start 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:29.163491568 +0000 UTC m=+0.272242759 container attach 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:39:29 compute-0 zealous_liskov[410089]: 167 167
Oct 02 19:39:29 compute-0 systemd[1]: libpod-5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680.scope: Deactivated successfully.
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:29.17022224 +0000 UTC m=+0.278973381 container died 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-042b42b9dd62da57c23c51b5a76b419990ebc4652a3831dda14f34d16143dca9-merged.mount: Deactivated successfully.
Oct 02 19:39:29 compute-0 podman[410073]: 2025-10-02 19:39:29.266489278 +0000 UTC m=+0.375240369 container remove 5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:39:29 compute-0 systemd[1]: libpod-conmon-5d7cefc3f6040c04ffd6c5296f9b546e9d0891ad646d9be506bccb7ba77fb680.scope: Deactivated successfully.
Oct 02 19:39:29 compute-0 podman[410112]: 2025-10-02 19:39:29.501620754 +0000 UTC m=+0.084355168 container create 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:39:29 compute-0 podman[410112]: 2025-10-02 19:39:29.460316009 +0000 UTC m=+0.043050493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:29 compute-0 systemd[1]: Started libpod-conmon-9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f.scope.
Oct 02 19:39:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:29 compute-0 podman[410112]: 2025-10-02 19:39:29.63448474 +0000 UTC m=+0.217219134 container init 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:39:29 compute-0 podman[410112]: 2025-10-02 19:39:29.649740402 +0000 UTC m=+0.232474786 container start 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:39:29 compute-0 podman[410112]: 2025-10-02 19:39:29.654086039 +0000 UTC m=+0.236820423 container attach 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:39:29 compute-0 podman[157186]: time="2025-10-02T19:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:39:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46737 "" "Go-http-client/1.1"
Oct 02 19:39:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8944 "" "Go-http-client/1.1"
Oct 02 19:39:29 compute-0 ceph-mon[191910]: pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:30 compute-0 podman[410144]: 2025-10-02 19:39:30.68708688 +0000 UTC m=+0.117793360 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:39:30 compute-0 busy_einstein[410128]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:39:30 compute-0 busy_einstein[410128]: --> relative data size: 1.0
Oct 02 19:39:30 compute-0 busy_einstein[410128]: --> All data devices are unavailable
Oct 02 19:39:30 compute-0 systemd[1]: libpod-9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f.scope: Deactivated successfully.
Oct 02 19:39:30 compute-0 systemd[1]: libpod-9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f.scope: Consumed 1.232s CPU time.
Oct 02 19:39:31 compute-0 podman[410176]: 2025-10-02 19:39:31.02128729 +0000 UTC m=+0.047820802 container died 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fedd4539bfe828a1dc16a39b282f597767fb73eb8ad0dae464fd3aae12c4e35-merged.mount: Deactivated successfully.
Oct 02 19:39:31 compute-0 podman[410176]: 2025-10-02 19:39:31.120537809 +0000 UTC m=+0.147071281 container remove 9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_einstein, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:39:31 compute-0 systemd[1]: libpod-conmon-9c11f48877e9f319b86ad60c5eaea345a773410f7d4af0cb5a91cf0d802a7b7f.scope: Deactivated successfully.
Oct 02 19:39:31 compute-0 sudo[410011]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:31 compute-0 sudo[410190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:31 compute-0 sudo[410190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:31 compute-0 sudo[410190]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:31 compute-0 openstack_network_exporter[372736]: ERROR   19:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:39:31 compute-0 openstack_network_exporter[372736]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:31 compute-0 openstack_network_exporter[372736]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:31 compute-0 openstack_network_exporter[372736]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:39:31 compute-0 openstack_network_exporter[372736]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:39:31 compute-0 sudo[410215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:39:31 compute-0 sudo[410215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:31 compute-0 sudo[410215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:31 compute-0 sudo[410240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:31 compute-0 sudo[410240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:31 compute-0 sudo[410240]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:31 compute-0 sudo[410265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:39:31 compute-0 sudo[410265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:31 compute-0 ceph-mon[191910]: pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:39:32.287 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:39:32.288 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:39:32.288 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.433793762 +0000 UTC m=+0.108808597 container create eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:39:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.386501606 +0000 UTC m=+0.061516511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:32 compute-0 systemd[1]: Started libpod-conmon-eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92.scope.
Oct 02 19:39:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.57636144 +0000 UTC m=+0.251376345 container init eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.586855754 +0000 UTC m=+0.261870559 container start eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.591932781 +0000 UTC m=+0.266947706 container attach eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:39:32 compute-0 bold_cray[410343]: 167 167
Oct 02 19:39:32 compute-0 systemd[1]: libpod-eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92.scope: Deactivated successfully.
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.594640614 +0000 UTC m=+0.269655469 container died eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:39:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-381c441485d3f13cbe069d8f9e2d49bbf7d4c6736ce361e2e8a4824f716fa335-merged.mount: Deactivated successfully.
Oct 02 19:39:32 compute-0 podman[410328]: 2025-10-02 19:39:32.685590458 +0000 UTC m=+0.360605263 container remove eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cray, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:39:32 compute-0 systemd[1]: libpod-conmon-eab9668ead9394b8682d4b725e8f1d86f15a54bf51b7fa877f4440501166fe92.scope: Deactivated successfully.
Oct 02 19:39:33 compute-0 podman[410368]: 2025-10-02 19:39:33.004531497 +0000 UTC m=+0.115958061 container create 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:39:33 compute-0 podman[410368]: 2025-10-02 19:39:32.964701642 +0000 UTC m=+0.076128216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:33 compute-0 systemd[1]: Started libpod-conmon-9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4.scope.
Oct 02 19:39:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83006889bf6063ff5da4a6e78aa619f67db0e8fe3823c55e80e4895be40153d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83006889bf6063ff5da4a6e78aa619f67db0e8fe3823c55e80e4895be40153d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83006889bf6063ff5da4a6e78aa619f67db0e8fe3823c55e80e4895be40153d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83006889bf6063ff5da4a6e78aa619f67db0e8fe3823c55e80e4895be40153d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:33 compute-0 podman[410368]: 2025-10-02 19:39:33.23992161 +0000 UTC m=+0.351348214 container init 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:39:33 compute-0 podman[410368]: 2025-10-02 19:39:33.258178723 +0000 UTC m=+0.369605287 container start 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:39:33 compute-0 podman[410368]: 2025-10-02 19:39:33.29958643 +0000 UTC m=+0.411013044 container attach 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:39:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:39:34 compute-0 ceph-mon[191910]: pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:34 compute-0 recursing_jackson[410383]: {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     "0": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "devices": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "/dev/loop3"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             ],
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_name": "ceph_lv0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_size": "21470642176",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "name": "ceph_lv0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "tags": {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_name": "ceph",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.crush_device_class": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.encrypted": "0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_id": "0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.vdo": "0"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             },
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "vg_name": "ceph_vg0"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         }
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     ],
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     "1": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "devices": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "/dev/loop4"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             ],
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_name": "ceph_lv1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_size": "21470642176",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "name": "ceph_lv1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "tags": {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_name": "ceph",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.crush_device_class": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.encrypted": "0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_id": "1",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.vdo": "0"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             },
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "vg_name": "ceph_vg1"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         }
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     ],
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     "2": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "devices": [
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "/dev/loop5"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             ],
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_name": "ceph_lv2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_size": "21470642176",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "name": "ceph_lv2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "tags": {
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.cluster_name": "ceph",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.crush_device_class": "",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.encrypted": "0",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osd_id": "2",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:                 "ceph.vdo": "0"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             },
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "type": "block",
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:             "vg_name": "ceph_vg2"
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:         }
Oct 02 19:39:34 compute-0 recursing_jackson[410383]:     ]
Oct 02 19:39:34 compute-0 recursing_jackson[410383]: }
Oct 02 19:39:34 compute-0 systemd[1]: libpod-9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4.scope: Deactivated successfully.
Oct 02 19:39:34 compute-0 conmon[410383]: conmon 9216c94f3d5c1835d1e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4.scope/container/memory.events
Oct 02 19:39:34 compute-0 podman[410368]: 2025-10-02 19:39:34.118923074 +0000 UTC m=+1.230349618 container died 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-83006889bf6063ff5da4a6e78aa619f67db0e8fe3823c55e80e4895be40153d6-merged.mount: Deactivated successfully.
Oct 02 19:39:34 compute-0 podman[410368]: 2025-10-02 19:39:34.370265178 +0000 UTC m=+1.481691752 container remove 9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jackson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:39:34 compute-0 systemd[1]: libpod-conmon-9216c94f3d5c1835d1e1a8373daf2078315e69cd14ea634e77e1c298a35048c4.scope: Deactivated successfully.
Oct 02 19:39:34 compute-0 sudo[410265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:34 compute-0 sudo[410403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:34 compute-0 sudo[410403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:34 compute-0 sudo[410403]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:34 compute-0 sudo[410428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:39:34 compute-0 sudo[410428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:34 compute-0 sudo[410428]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:34 compute-0 sudo[410453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:34 compute-0 sudo[410453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:34 compute-0 sudo[410453]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:34 compute-0 sudo[410478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:39:34 compute-0 sudo[410478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.639199775 +0000 UTC m=+0.141454329 container create b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.564487919 +0000 UTC m=+0.066742533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:35 compute-0 systemd[1]: Started libpod-conmon-b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a.scope.
Oct 02 19:39:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.877350873 +0000 UTC m=+0.379605477 container init b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.893687574 +0000 UTC m=+0.395942128 container start b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:39:35 compute-0 condescending_aryabhata[410558]: 167 167
Oct 02 19:39:35 compute-0 systemd[1]: libpod-b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a.scope: Deactivated successfully.
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.95616023 +0000 UTC m=+0.458414794 container attach b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:39:35 compute-0 podman[410543]: 2025-10-02 19:39:35.957848776 +0000 UTC m=+0.460103340 container died b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:39:36 compute-0 ceph-mon[191910]: pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5256e3448d6e7b7a1e8c3da5bc7e9d205c879a2c4d564c43a4a62903f443896-merged.mount: Deactivated successfully.
Oct 02 19:39:36 compute-0 podman[410543]: 2025-10-02 19:39:36.200024462 +0000 UTC m=+0.702279016 container remove b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:39:36 compute-0 systemd[1]: libpod-conmon-b329393b14bf3b8c6b87f04be91327ded7593b38be527eda6e6eca98607d058a.scope: Deactivated successfully.
Oct 02 19:39:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:36 compute-0 podman[410583]: 2025-10-02 19:39:36.51077954 +0000 UTC m=+0.107613956 container create d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:39:36 compute-0 podman[410583]: 2025-10-02 19:39:36.453240487 +0000 UTC m=+0.050074943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:39:36 compute-0 systemd[1]: Started libpod-conmon-d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c.scope.
Oct 02 19:39:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47a52b1b91c8744b458164a2bba30cd42d3b58f36128ca10877211f34755cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47a52b1b91c8744b458164a2bba30cd42d3b58f36128ca10877211f34755cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47a52b1b91c8744b458164a2bba30cd42d3b58f36128ca10877211f34755cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47a52b1b91c8744b458164a2bba30cd42d3b58f36128ca10877211f34755cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:39:36 compute-0 podman[410583]: 2025-10-02 19:39:36.694240521 +0000 UTC m=+0.291074957 container init d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:39:36 compute-0 podman[410583]: 2025-10-02 19:39:36.725670889 +0000 UTC m=+0.322505275 container start d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:39:36 compute-0 podman[410583]: 2025-10-02 19:39:36.761200868 +0000 UTC m=+0.358035324 container attach d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:39:37 compute-0 ceph-mon[191910]: pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:37 compute-0 podman[410614]: 2025-10-02 19:39:37.685894967 +0000 UTC m=+0.112478617 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]: {
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_id": 1,
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "type": "bluestore"
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     },
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_id": 2,
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "type": "bluestore"
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     },
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_id": 0,
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:         "type": "bluestore"
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]:     }
Oct 02 19:39:37 compute-0 trusting_sinoussi[410599]: }
Oct 02 19:39:37 compute-0 systemd[1]: libpod-d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c.scope: Deactivated successfully.
Oct 02 19:39:37 compute-0 systemd[1]: libpod-d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c.scope: Consumed 1.207s CPU time.
Oct 02 19:39:37 compute-0 podman[410583]: 2025-10-02 19:39:37.943075389 +0000 UTC m=+1.539909805 container died d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47a52b1b91c8744b458164a2bba30cd42d3b58f36128ca10877211f34755cf7-merged.mount: Deactivated successfully.
Oct 02 19:39:38 compute-0 podman[410655]: 2025-10-02 19:39:38.271890414 +0000 UTC m=+0.284897751 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:39:38 compute-0 podman[410583]: 2025-10-02 19:39:38.314684519 +0000 UTC m=+1.911518935 container remove d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:39:38 compute-0 systemd[1]: libpod-conmon-d144e142893ebf4406a3bfd0fda7744d19f90fcd2927607e8de6fd246c8dae8c.scope: Deactivated successfully.
Oct 02 19:39:38 compute-0 sudo[410478]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:39:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:39:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 281fc9f2-6baa-48e0-8725-a2baea2bbf80 does not exist
Oct 02 19:39:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e48e9a57-6016-43f5-b826-eb3d4e2bf6d2 does not exist
Oct 02 19:39:38 compute-0 sudo[410687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:39:38 compute-0 sudo[410687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:38 compute-0 sudo[410687]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:38 compute-0 sudo[410712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:39:38 compute-0 sudo[410712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:39:38 compute-0 sudo[410712]: pam_unix(sudo:session): session closed for user root
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.157 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.157 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.158 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.158 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:39:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:39 compute-0 ceph-mon[191910]: pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:39 compute-0 nova_compute[355794]: 2025-10-02 19:39:39.598 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:40 compute-0 nova_compute[355794]: 2025-10-02 19:39:40.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:41 compute-0 ceph-mon[191910]: pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:41 compute-0 podman[410737]: 2025-10-02 19:39:41.703133252 +0000 UTC m=+0.114118521 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.708 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.708 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.709 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.709 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:39:41 compute-0 nova_compute[355794]: 2025-10-02 19:39:41.710 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:39:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335030804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.280 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3335030804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.881 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.883 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.884 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.885 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.998 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:39:42 compute-0 nova_compute[355794]: 2025-10-02 19:39:42.999 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.026 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:39:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268128772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.542 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.553 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.578 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.580 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:39:43 compute-0 nova_compute[355794]: 2025-10-02 19:39:43.580 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:43 compute-0 ceph-mon[191910]: pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2268128772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:39:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:44 compute-0 podman[410800]: 2025-10-02 19:39:44.724279352 +0000 UTC m=+0.144975183 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0)
Oct 02 19:39:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:45 compute-0 nova_compute[355794]: 2025-10-02 19:39:45.580 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:45 compute-0 nova_compute[355794]: 2025-10-02 19:39:45.581 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:39:45 compute-0 nova_compute[355794]: 2025-10-02 19:39:45.581 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:39:45 compute-0 nova_compute[355794]: 2025-10-02 19:39:45.601 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:39:46 compute-0 ceph-mon[191910]: pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 02 19:39:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3851073056' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 02 19:39:46 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14381 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 02 19:39:46 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 02 19:39:46 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 02 19:39:47 compute-0 ceph-mon[191910]: pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3851073056' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 02 19:39:47 compute-0 ceph-mon[191910]: from='client.14381 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 02 19:39:47 compute-0 podman[410820]: 2025-10-02 19:39:47.710592702 +0000 UTC m=+0.128539010 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:39:47 compute-0 podman[410821]: 2025-10-02 19:39:47.748996339 +0000 UTC m=+0.161427288 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:39:47 compute-0 podman[410822]: 2025-10-02 19:39:47.775053042 +0000 UTC m=+0.178730315 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 19:39:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:49 compute-0 ceph-mon[191910]: pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:51 compute-0 ceph-mon[191910]: pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:51 compute-0 podman[410885]: 2025-10-02 19:39:51.740655033 +0000 UTC m=+0.147980925 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:39:51 compute-0 podman[410884]: 2025-10-02 19:39:51.749556574 +0000 UTC m=+0.166863825 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Oct 02 19:39:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:53 compute-0 ceph-mon[191910]: pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:39:55 compute-0 ceph-mon[191910]: pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:57 compute-0 ceph-mon[191910]: pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:59 compute-0 ceph-mon[191910]: pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:39:59 compute-0 podman[157186]: time="2025-10-02T19:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:39:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:39:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8537 "" "Go-http-client/1.1"
Oct 02 19:40:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:01 compute-0 openstack_network_exporter[372736]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:01 compute-0 openstack_network_exporter[372736]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:01 compute-0 openstack_network_exporter[372736]: ERROR   19:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:40:01 compute-0 openstack_network_exporter[372736]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:40:01 compute-0 openstack_network_exporter[372736]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:40:01 compute-0 ceph-mon[191910]: pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:01 compute-0 podman[410930]: 2025-10-02 19:40:01.704797433 +0000 UTC m=+0.121485770 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 19:40:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 02 19:40:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2330457610' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:40:03
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:03 compute-0 ceph-mon[191910]: pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:03 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2330457610' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 02 19:40:03 compute-0 ceph-mon[191910]: from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:40:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:40:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:05 compute-0 ceph-mon[191910]: pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:07 compute-0 ceph-mon[191910]: pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:08 compute-0 podman[410949]: 2025-10-02 19:40:08.721933794 +0000 UTC m=+0.139242239 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:40:08 compute-0 podman[410950]: 2025-10-02 19:40:08.732872119 +0000 UTC m=+0.143355050 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:40:09 compute-0 ceph-mon[191910]: pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:11 compute-0 ceph-mon[191910]: pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:40:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:40:12 compute-0 podman[410992]: 2025-10-02 19:40:12.70252876 +0000 UTC m=+0.124901403 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:40:13 compute-0 ceph-mon[191910]: pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:15 compute-0 podman[411012]: 2025-10-02 19:40:15.720287888 +0000 UTC m=+0.141429158 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., architecture=x86_64)
Oct 02 19:40:15 compute-0 ceph-mon[191910]: pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:40:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5800 writes, 24K keys, 5800 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5800 writes, 959 syncs, 6.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 19:40:17 compute-0 ceph-mon[191910]: pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:18 compute-0 podman[411031]: 2025-10-02 19:40:18.659774543 +0000 UTC m=+0.077241426 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:40:18 compute-0 podman[411033]: 2025-10-02 19:40:18.704088659 +0000 UTC m=+0.107785200 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:40:18 compute-0 podman[411032]: 2025-10-02 19:40:18.704628083 +0000 UTC m=+0.112327463 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:40:19 compute-0 ceph-mon[191910]: pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:40:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924556997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:40:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:40:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924556997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:40:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/924556997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:40:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/924556997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:40:21 compute-0 ceph-mon[191910]: pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:22 compute-0 podman[411092]: 2025-10-02 19:40:22.674174442 +0000 UTC m=+0.097875533 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:40:22 compute-0 podman[411093]: 2025-10-02 19:40:22.696795302 +0000 UTC m=+0.124042799 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:40:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:40:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 6993 writes, 27K keys, 6993 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6993 writes, 1319 syncs, 5.30 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 19:40:23 compute-0 ceph-mon[191910]: pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:25 compute-0 ceph-mon[191910]: pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:27 compute-0 ceph-mon[191910]: pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:29 compute-0 podman[157186]: time="2025-10-02T19:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:40:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:40:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8530 "" "Go-http-client/1.1"
Oct 02 19:40:29 compute-0 ceph-mon[191910]: pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:40:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5884 writes, 24K keys, 5884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5884 writes, 968 syncs, 6.08 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 19:40:31 compute-0 openstack_network_exporter[372736]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:31 compute-0 openstack_network_exporter[372736]: ERROR   19:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:40:31 compute-0 openstack_network_exporter[372736]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:31 compute-0 openstack_network_exporter[372736]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:40:31 compute-0 openstack_network_exporter[372736]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:40:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 19:40:31 compute-0 ceph-mon[191910]: pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:40:32.288 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:40:32.289 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:40:32.289 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:32 compute-0 podman[411133]: 2025-10-02 19:40:32.68864258 +0000 UTC m=+0.105333164 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:40:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:40:33 compute-0 ceph-mon[191910]: pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:35 compute-0 ceph-mon[191910]: pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:37 compute-0 ceph-mon[191910]: pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:38 compute-0 nova_compute[355794]: 2025-10-02 19:40:38.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:38 compute-0 nova_compute[355794]: 2025-10-02 19:40:38.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:38 compute-0 sudo[411151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:38 compute-0 sudo[411151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:38 compute-0 sudo[411151]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:39 compute-0 sudo[411188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:40:39 compute-0 podman[411176]: 2025-10-02 19:40:39.022902271 +0000 UTC m=+0.108800208 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:40:39 compute-0 podman[411175]: 2025-10-02 19:40:39.022964082 +0000 UTC m=+0.113166155 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:40:39 compute-0 sudo[411188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:39 compute-0 sudo[411188]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:39 compute-0 sudo[411240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:39 compute-0 sudo[411240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:39 compute-0 sudo[411240]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:39 compute-0 sudo[411265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:40:39 compute-0 sudo[411265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:39 compute-0 nova_compute[355794]: 2025-10-02 19:40:39.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:39 compute-0 nova_compute[355794]: 2025-10-02 19:40:39.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:40:39 compute-0 ceph-mon[191910]: pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:39 compute-0 sudo[411265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:40 compute-0 sudo[411320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:40 compute-0 sudo[411320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:40 compute-0 sudo[411320]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:40 compute-0 sudo[411345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:40:40 compute-0 sudo[411345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:40 compute-0 sudo[411345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:40 compute-0 sudo[411370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:40 compute-0 sudo[411370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:40 compute-0 sudo[411370]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:40 compute-0 nova_compute[355794]: 2025-10-02 19:40:40.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:40 compute-0 nova_compute[355794]: 2025-10-02 19:40:40.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:40 compute-0 sudo[411395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- inventory --format=json-pretty --filter-for-batch
Oct 02 19:40:40 compute-0 sudo[411395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.202724333 +0000 UTC m=+0.090252567 container create 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.169234869 +0000 UTC m=+0.056763173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:41 compute-0 systemd[1]: Started libpod-conmon-5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56.scope.
Oct 02 19:40:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.348174919 +0000 UTC m=+0.235703193 container init 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.368201279 +0000 UTC m=+0.255729503 container start 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.374947361 +0000 UTC m=+0.262475595 container attach 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:41 compute-0 agitated_elbakyan[411474]: 167 167
Oct 02 19:40:41 compute-0 systemd[1]: libpod-5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56.scope: Deactivated successfully.
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.382254339 +0000 UTC m=+0.269782573 container died 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-be9a5d2283e448bb336b5fe90fe2d5a10ec03d2d88336e3cf9586ef34e229bfa-merged.mount: Deactivated successfully.
Oct 02 19:40:41 compute-0 podman[411458]: 2025-10-02 19:40:41.457861559 +0000 UTC m=+0.345389783 container remove 5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:40:41 compute-0 systemd[1]: libpod-conmon-5285f876bbdd13d4171c2fab9472249971dbfb2491ae6bda66c6296d9b8f1a56.scope: Deactivated successfully.
Oct 02 19:40:41 compute-0 nova_compute[355794]: 2025-10-02 19:40:41.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:41 compute-0 podman[411496]: 2025-10-02 19:40:41.668027361 +0000 UTC m=+0.062409255 container create fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:40:41 compute-0 systemd[1]: Started libpod-conmon-fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9.scope.
Oct 02 19:40:41 compute-0 podman[411496]: 2025-10-02 19:40:41.64535835 +0000 UTC m=+0.039740244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db243148cf5ef0bb2a9cd5135bdb22ec0f90e4ba9af191d31f8258724afd3e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db243148cf5ef0bb2a9cd5135bdb22ec0f90e4ba9af191d31f8258724afd3e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db243148cf5ef0bb2a9cd5135bdb22ec0f90e4ba9af191d31f8258724afd3e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db243148cf5ef0bb2a9cd5135bdb22ec0f90e4ba9af191d31f8258724afd3e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:41 compute-0 podman[411496]: 2025-10-02 19:40:41.853152528 +0000 UTC m=+0.247534472 container init fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:40:41 compute-0 podman[411496]: 2025-10-02 19:40:41.872312665 +0000 UTC m=+0.266694579 container start fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:40:41 compute-0 podman[411496]: 2025-10-02 19:40:41.878558214 +0000 UTC m=+0.272940168 container attach fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:40:41 compute-0 ceph-mon[191910]: pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.680 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.680 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.681 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.681 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:40:42 compute-0 nova_compute[355794]: 2025-10-02 19:40:42.682 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:40:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322925497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.231 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.598 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.599 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4499MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.599 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.599 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:43 compute-0 podman[412189]: 2025-10-02 19:40:43.641995928 +0000 UTC m=+0.077017880 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.969 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:40:43 compute-0 nova_compute[355794]: 2025-10-02 19:40:43.970 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:40:44 compute-0 ceph-mon[191910]: pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/322925497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:40:44 compute-0 nova_compute[355794]: 2025-10-02 19:40:44.024 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]: [
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:     {
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "available": false,
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "ceph_device": false,
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "lsm_data": {},
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "lvs": [],
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "path": "/dev/sr0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "rejected_reasons": [
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "Has a FileSystem",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "Insufficient space (<5GB)"
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         ],
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         "sys_api": {
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "actuators": null,
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "device_nodes": "sr0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "devname": "sr0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "human_readable_size": "482.00 KB",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "id_bus": "ata",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "model": "QEMU DVD-ROM",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "nr_requests": "2",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "parent": "/dev/sr0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "partitions": {},
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "path": "/dev/sr0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "removable": "1",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "rev": "2.5+",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "ro": "0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "rotational": "0",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "sas_address": "",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "sas_device_handle": "",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "scheduler_mode": "mq-deadline",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "sectors": 0,
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "sectorsize": "2048",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "size": 493568.0,
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "support_discard": "2048",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "type": "disk",
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:             "vendor": "QEMU"
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:         }
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]:     }
Oct 02 19:40:44 compute-0 heuristic_tharp[411511]: ]
Oct 02 19:40:44 compute-0 systemd[1]: libpod-fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9.scope: Deactivated successfully.
Oct 02 19:40:44 compute-0 systemd[1]: libpod-fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9.scope: Consumed 2.429s CPU time.
Oct 02 19:40:44 compute-0 podman[411496]: 2025-10-02 19:40:44.252151196 +0000 UTC m=+2.646533110 container died fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2db243148cf5ef0bb2a9cd5135bdb22ec0f90e4ba9af191d31f8258724afd3e0-merged.mount: Deactivated successfully.
Oct 02 19:40:44 compute-0 podman[411496]: 2025-10-02 19:40:44.358672921 +0000 UTC m=+2.753054805 container remove fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_tharp, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:40:44 compute-0 systemd[1]: libpod-conmon-fc51a93635b58a1edbc39259c547e27bb7b0f0e0f75301081687ae4035aceaa9.scope: Deactivated successfully.
Oct 02 19:40:44 compute-0 sudo[411395]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:44 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e72cdd5a-8dd2-4e81-85d0-769e0eacec30 does not exist
Oct 02 19:40:44 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b11a08d5-1e43-42ba-8fe9-d33bc3da479f does not exist
Oct 02 19:40:44 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 39473280-f6c2-4823-96c1-962048f7378a does not exist
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:40:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:40:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607570922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:40:44 compute-0 nova_compute[355794]: 2025-10-02 19:40:44.495 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:44 compute-0 nova_compute[355794]: 2025-10-02 19:40:44.508 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:40:44 compute-0 sudo[413693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:44 compute-0 sudo[413693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:44 compute-0 sudo[413693]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:44 compute-0 sudo[413720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:40:44 compute-0 sudo[413720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:44 compute-0 sudo[413720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:44 compute-0 sudo[413745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:44 compute-0 sudo[413745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:44 compute-0 sudo[413745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:44 compute-0 sudo[413770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:40:44 compute-0 sudo[413770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:45 compute-0 nova_compute[355794]: 2025-10-02 19:40:45.003 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:40:45 compute-0 nova_compute[355794]: 2025-10-02 19:40:45.006 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:40:45 compute-0 nova_compute[355794]: 2025-10-02 19:40:45.007 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2607570922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:40:45 compute-0 ceph-mon[191910]: pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.589938163 +0000 UTC m=+0.083361601 container create ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.563117769 +0000 UTC m=+0.056541207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:45 compute-0 systemd[1]: Started libpod-conmon-ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d.scope.
Oct 02 19:40:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.737063534 +0000 UTC m=+0.230487022 container init ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.758933054 +0000 UTC m=+0.252356482 container start ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.765613014 +0000 UTC m=+0.259036442 container attach ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:40:45 compute-0 romantic_vaughan[413851]: 167 167
Oct 02 19:40:45 compute-0 systemd[1]: libpod-ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d.scope: Deactivated successfully.
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.772739627 +0000 UTC m=+0.266163085 container died ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a3cc8cc4bfbe46c8f96730a94510e04e73ad1a169a85380d10f629ce4501887-merged.mount: Deactivated successfully.
Oct 02 19:40:45 compute-0 podman[413836]: 2025-10-02 19:40:45.851479672 +0000 UTC m=+0.344903120 container remove ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:40:45 compute-0 systemd[1]: libpod-conmon-ba1f78e08f186e6d412558e5fd33eab6057e98cd5cff87420dcbb367362bfc6d.scope: Deactivated successfully.
Oct 02 19:40:45 compute-0 podman[413858]: 2025-10-02 19:40:45.927880894 +0000 UTC m=+0.113569186 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30)
Oct 02 19:40:46 compute-0 nova_compute[355794]: 2025-10-02 19:40:46.007 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:46 compute-0 nova_compute[355794]: 2025-10-02 19:40:46.008 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:40:46 compute-0 nova_compute[355794]: 2025-10-02 19:40:46.008 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:40:46 compute-0 nova_compute[355794]: 2025-10-02 19:40:46.029 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:40:46 compute-0 podman[413892]: 2025-10-02 19:40:46.103233027 +0000 UTC m=+0.084269526 container create 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:40:46 compute-0 podman[413892]: 2025-10-02 19:40:46.071666325 +0000 UTC m=+0.052702824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:46 compute-0 systemd[1]: Started libpod-conmon-0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27.scope.
Oct 02 19:40:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:46 compute-0 podman[413892]: 2025-10-02 19:40:46.28045909 +0000 UTC m=+0.261495579 container init 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:40:46 compute-0 podman[413892]: 2025-10-02 19:40:46.305736202 +0000 UTC m=+0.286772701 container start 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:40:46 compute-0 podman[413892]: 2025-10-02 19:40:46.311403025 +0000 UTC m=+0.292439534 container attach 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:40:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:47 compute-0 vibrant_newton[413908]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:40:47 compute-0 vibrant_newton[413908]: --> relative data size: 1.0
Oct 02 19:40:47 compute-0 vibrant_newton[413908]: --> All data devices are unavailable
Oct 02 19:40:47 compute-0 ceph-mon[191910]: pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:47 compute-0 systemd[1]: libpod-0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27.scope: Deactivated successfully.
Oct 02 19:40:47 compute-0 podman[413892]: 2025-10-02 19:40:47.58616182 +0000 UTC m=+1.567198289 container died 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:40:47 compute-0 systemd[1]: libpod-0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27.scope: Consumed 1.230s CPU time.
Oct 02 19:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e34704581784b3fbd72890202751e52a715cec3a45b9cbd00ee41c3dec042b-merged.mount: Deactivated successfully.
Oct 02 19:40:47 compute-0 podman[413892]: 2025-10-02 19:40:47.666685513 +0000 UTC m=+1.647721992 container remove 0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_newton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:40:47 compute-0 systemd[1]: libpod-conmon-0d037c35749a90589ffe1157bd8afce625cb850daea569878b41b758ee214c27.scope: Deactivated successfully.
Oct 02 19:40:47 compute-0 sudo[413770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:47 compute-0 sudo[413947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:47 compute-0 sudo[413947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:47 compute-0 sudo[413947]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:47 compute-0 sudo[413972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:40:47 compute-0 sudo[413972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:47 compute-0 sudo[413972]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:48 compute-0 sudo[413997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:48 compute-0 sudo[413997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:48 compute-0 sudo[413997]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:48 compute-0 sudo[414022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:40:48 compute-0 sudo[414022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.633501288 +0000 UTC m=+0.052242411 container create 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:40:48 compute-0 systemd[1]: Started libpod-conmon-99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97.scope.
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.614188166 +0000 UTC m=+0.032929319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.751459511 +0000 UTC m=+0.170200634 container init 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.768630965 +0000 UTC m=+0.187372098 container start 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.774161544 +0000 UTC m=+0.192902667 container attach 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 19:40:48 compute-0 trusting_meninsky[414101]: 167 167
Oct 02 19:40:48 compute-0 systemd[1]: libpod-99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97.scope: Deactivated successfully.
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.781901333 +0000 UTC m=+0.200642456 container died 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:40:48 compute-0 podman[414103]: 2025-10-02 19:40:48.802332304 +0000 UTC m=+0.092577199 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 19:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-deaf98578b94585ec6a249a5336fd5ea521c5754359e3d6605ab7e2cca0ed0c9-merged.mount: Deactivated successfully.
Oct 02 19:40:48 compute-0 podman[414085]: 2025-10-02 19:40:48.846314131 +0000 UTC m=+0.265055264 container remove 99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:48 compute-0 podman[414113]: 2025-10-02 19:40:48.847835832 +0000 UTC m=+0.102501697 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 19:40:48 compute-0 systemd[1]: libpod-conmon-99d8eedbed328c918d981ea351bc6d8acd15eb7c05db620f0651386a484d9d97.scope: Deactivated successfully.
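
The trusting_meninsky lines above are podman's complete one-shot lifecycle: image pull (a cache hit on the same digest), create, init, start, attach, died, remove, with systemd then deactivating the libpod-*.scope and its conmon scope. The bare "167 167" on the container's stdout appears to be cephadm probing the ceph user's uid/gid inside the image before the real ceph-volume run. A hedged sketch reproducing the same event sequence with a throwaway container (name illustrative):

    import subprocess

    # --rm yields the same create/init/start/attach/died/remove chain
    # journald recorded above for trusting_meninsky.
    subprocess.run(["podman", "run", "--rm", "--name", "demo_oneshot",
                    "quay.io/centos/centos:stream9", "true"], check=True)

    # Replay the lifecycle events podman emitted for that container.
    subprocess.run(["podman", "events", "--since", "1m", "--stream=false",
                    "--filter", "container=demo_oneshot"], check=True)
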
Oct 02 19:40:48 compute-0 podman[414114]: 2025-10-02 19:40:48.893625398 +0000 UTC m=+0.135446536 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
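
The interleaved health_status lines (ovn_metadata_agent, iscsid, ovn_controller) are podman's periodic healthchecks: per config_data, each EDPM-managed container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs the configured '/openstack/healthcheck' test, and podman logs the resulting status and failing streak. A minimal sketch poking the same machinery by hand, assuming root (the containers are rootful), with names from the log:

    import json
    import subprocess

    for name in ("ovn_metadata_agent", "iscsid", "ovn_controller"):
        # Run the container's configured healthcheck once, as the timer does;
        # exit code 0 means healthy, so don't raise on failure here.
        subprocess.run(["podman", "healthcheck", "run", name], check=False)
        # Read back the recorded state podman summarizes in the lines above.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            check=True, capture_output=True, text=True).stdout
        print(name, json.loads(out)["Status"])
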
Oct 02 19:40:49 compute-0 podman[414182]: 2025-10-02 19:40:49.08968701 +0000 UTC m=+0.088535201 container create 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:40:49 compute-0 podman[414182]: 2025-10-02 19:40:49.055087626 +0000 UTC m=+0.053935877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:49 compute-0 systemd[1]: Started libpod-conmon-496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec.scope.
Oct 02 19:40:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c82331accd3cfbc5a3e059d871147c47412ee735442c87d48b2080cc5ace80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c82331accd3cfbc5a3e059d871147c47412ee735442c87d48b2080cc5ace80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c82331accd3cfbc5a3e059d871147c47412ee735442c87d48b2080cc5ace80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c82331accd3cfbc5a3e059d871147c47412ee735442c87d48b2080cc5ace80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:49 compute-0 podman[414182]: 2025-10-02 19:40:49.250775638 +0000 UTC m=+0.249623849 container init 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:40:49 compute-0 podman[414182]: 2025-10-02 19:40:49.287694924 +0000 UTC m=+0.286543125 container start 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:40:49 compute-0 podman[414182]: 2025-10-02 19:40:49.29532793 +0000 UTC m=+0.294176181 container attach 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:49 compute-0 ceph-mon[191910]: pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]: {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     "0": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "devices": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "/dev/loop3"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             ],
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_name": "ceph_lv0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_size": "21470642176",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "name": "ceph_lv0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "tags": {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_name": "ceph",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.crush_device_class": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.encrypted": "0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_id": "0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.vdo": "0"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             },
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "vg_name": "ceph_vg0"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         }
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     ],
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     "1": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "devices": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "/dev/loop4"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             ],
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_name": "ceph_lv1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_size": "21470642176",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "name": "ceph_lv1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "tags": {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_name": "ceph",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.crush_device_class": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.encrypted": "0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_id": "1",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.vdo": "0"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             },
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "vg_name": "ceph_vg1"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         }
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     ],
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     "2": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "devices": [
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "/dev/loop5"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             ],
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_name": "ceph_lv2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_size": "21470642176",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "name": "ceph_lv2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "tags": {
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.cluster_name": "ceph",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.crush_device_class": "",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.encrypted": "0",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osd_id": "2",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:                 "ceph.vdo": "0"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             },
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "type": "block",
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:             "vg_name": "ceph_vg2"
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:         }
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]:     ]
Oct 02 19:40:50 compute-0 sharp_bardeen[414197]: }
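
The sharp_bardeen payload above is the JSON from 'ceph-volume lvm list --format json': a map of OSD id to a list of logical volumes, with the OSD metadata carried both as the flat lv_tags string and as the parsed tags object. A short sketch extracting the per-OSD device mapping from a saved copy of exactly this output (filename illustrative):

    import json

    with open("lvm_list.json") as f:   # stdout of the lvm list call above
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For this host it prints three block-type OSDs, one per ceph_vgN/ceph_lvN pair backed by /dev/loop3, /dev/loop4 and /dev/loop5.
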
Oct 02 19:40:50 compute-0 systemd[1]: libpod-496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec.scope: Deactivated successfully.
Oct 02 19:40:50 compute-0 podman[414182]: 2025-10-02 19:40:50.171974461 +0000 UTC m=+1.170822662 container died 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c82331accd3cfbc5a3e059d871147c47412ee735442c87d48b2080cc5ace80-merged.mount: Deactivated successfully.
Oct 02 19:40:50 compute-0 podman[414182]: 2025-10-02 19:40:50.267861098 +0000 UTC m=+1.266709269 container remove 496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:40:50 compute-0 systemd[1]: libpod-conmon-496e4711f8dc4dd02f74af3bb262e88a35f083de4f5b4e69494c6375ffdcfeec.scope: Deactivated successfully.
Oct 02 19:40:50 compute-0 sudo[414022]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:50 compute-0 sudo[414221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:50 compute-0 sudo[414221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:50 compute-0 sudo[414221]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:50 compute-0 sudo[414246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:40:50 compute-0 sudo[414246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:50 compute-0 sudo[414246]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:50 compute-0 sudo[414271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:50 compute-0 sudo[414271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:50 compute-0 sudo[414271]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:50 compute-0 sudo[414296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:40:50 compute-0 sudo[414296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.27756027 +0000 UTC m=+0.086960788 container create 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.24683101 +0000 UTC m=+0.056231608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:51 compute-0 systemd[1]: Started libpod-conmon-02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916.scope.
Oct 02 19:40:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.440117487 +0000 UTC m=+0.249518075 container init 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.457235799 +0000 UTC m=+0.266636337 container start 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.464042223 +0000 UTC m=+0.273442781 container attach 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:40:51 compute-0 elegant_zhukovsky[414375]: 167 167
Oct 02 19:40:51 compute-0 systemd[1]: libpod-02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916.scope: Deactivated successfully.
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.470317392 +0000 UTC m=+0.279717940 container died 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3194751f2dbf3ac9148c174f799b269e2d4fd98c1ebe8927cdc60d2b2fde7ff4-merged.mount: Deactivated successfully.
Oct 02 19:40:51 compute-0 podman[414359]: 2025-10-02 19:40:51.551881373 +0000 UTC m=+0.361281891 container remove 02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_zhukovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:40:51 compute-0 systemd[1]: libpod-conmon-02afb7399f1ce3fad92629f9bc2b9ef3b0860c1e5977c4ff02c495170bd1d916.scope: Deactivated successfully.
Oct 02 19:40:51 compute-0 ceph-mon[191910]: pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:51 compute-0 podman[414398]: 2025-10-02 19:40:51.818801117 +0000 UTC m=+0.077048170 container create d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:40:51 compute-0 podman[414398]: 2025-10-02 19:40:51.787010979 +0000 UTC m=+0.045258092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:40:51 compute-0 systemd[1]: Started libpod-conmon-d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937.scope.
Oct 02 19:40:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74ae0227a9166859c9511bfcaaa3f471b046b378e9284042276d059798c30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74ae0227a9166859c9511bfcaaa3f471b046b378e9284042276d059798c30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74ae0227a9166859c9511bfcaaa3f471b046b378e9284042276d059798c30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74ae0227a9166859c9511bfcaaa3f471b046b378e9284042276d059798c30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:40:52 compute-0 podman[414398]: 2025-10-02 19:40:52.060925902 +0000 UTC m=+0.319172955 container init d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:40:52 compute-0 podman[414398]: 2025-10-02 19:40:52.079234286 +0000 UTC m=+0.337481309 container start d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:40:52 compute-0 podman[414398]: 2025-10-02 19:40:52.093509622 +0000 UTC m=+0.351756745 container attach d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:40:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:53 compute-0 zen_euclid[414415]: {
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_id": 1,
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "type": "bluestore"
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     },
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_id": 2,
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "type": "bluestore"
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     },
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_id": 0,
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:40:53 compute-0 zen_euclid[414415]:         "type": "bluestore"
Oct 02 19:40:53 compute-0 zen_euclid[414415]:     }
Oct 02 19:40:53 compute-0 zen_euclid[414415]: }
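
zen_euclid's payload is the companion 'ceph-volume raw list --format json' requested at 19:40:50: keyed by osd_uuid rather than OSD id, and reporting the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) instead of the LV path. The two views describe the same OSDs and should agree; a hedged consistency check over saved copies of both outputs shown in this log:

    import json

    lvm = json.load(open("lvm_list.json"))   # keyed by osd_id
    raw = json.load(open("raw_list.json"))   # keyed by osd_uuid

    for osd_uuid, entry in raw.items():
        osd_id = str(entry["osd_id"])
        tags = lvm[osd_id][0]["tags"]
        assert tags["ceph.osd_fsid"] == osd_uuid
        assert tags["ceph.cluster_fsid"] == entry["ceph_fsid"]
        assert entry["type"] == "bluestore" and tags["ceph.type"] == "block"
        print(f"osd.{osd_id} ok: {entry['device']}")
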
Oct 02 19:40:53 compute-0 systemd[1]: libpod-d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937.scope: Deactivated successfully.
Oct 02 19:40:53 compute-0 podman[414398]: 2025-10-02 19:40:53.310468798 +0000 UTC m=+1.568715841 container died d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:40:53 compute-0 systemd[1]: libpod-d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937.scope: Consumed 1.223s CPU time.
Oct 02 19:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-50d74ae0227a9166859c9511bfcaaa3f471b046b378e9284042276d059798c30-merged.mount: Deactivated successfully.
Oct 02 19:40:53 compute-0 podman[414398]: 2025-10-02 19:40:53.411076203 +0000 UTC m=+1.669323236 container remove d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 19:40:53 compute-0 systemd[1]: libpod-conmon-d0e81c2bac94797c9e05318db431b0104cdf1674746f621181cf463fad08c937.scope: Deactivated successfully.
Oct 02 19:40:53 compute-0 sudo[414296]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:40:53 compute-0 podman[414451]: 2025-10-02 19:40:53.465680856 +0000 UTC m=+0.106569086 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
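
The node_exporter healthcheck above also documents its scrape surface: host networking with port 9100 published, a trimmed-down collector set (systemd units restricted to edpm_*/ovs*/openvswitch/virt*/rsyslog), and a --web.config.file plus mounted telemetry certs, so the endpoint may well be TLS-protected. A sketch of a scrape under the assumption that plain HTTP is still answered (swap in https and the CA bundle if the web config enforces TLS):

    import urllib.request

    # /metrics on the published 9100:9100 port from config_data above;
    # assumes no TLS, which --web.config.file may actually require.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        metrics = r.read().decode()

    # --collector.systemd is enabled, filtered by the unit-include regex.
    unit_lines = [l for l in metrics.splitlines()
                  if l.startswith("node_systemd_unit_state")]
    print(len(unit_lines), "systemd unit-state samples")
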
Oct 02 19:40:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:40:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
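
These mon_command entries are cephadm persisting the device inventory it just gathered into the mon config-key store, under mgr/cephadm/host.compute-0.devices.0 and the host record. The stored blob can be read back with the ceph CLI; a sketch assuming admin keyring access on the node and that the value is the JSON inventory cephadm writes:

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    val = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    inventory = json.loads(val)   # cephadm caches the inventory as JSON
    print(f"{key}: {len(val)} bytes of cached device inventory")
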
Oct 02 19:40:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev af27002a-8720-4504-abdb-c9875c97837b does not exist
Oct 02 19:40:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 403bf4e0-10a4-4d9e-9ba5-4bda7dcd6127 does not exist
Oct 02 19:40:53 compute-0 podman[414449]: 2025-10-02 19:40:53.49730858 +0000 UTC m=+0.141991623 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Oct 02 19:40:53 compute-0 sudo[414499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:40:53 compute-0 sudo[414499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:53 compute-0 sudo[414499]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:53 compute-0 ceph-mon[191910]: pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:40:53 compute-0 sudo[414524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:40:53 compute-0 sudo[414524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:40:53 compute-0 sudo[414524]: pam_unix(sudo:session): session closed for user root
Oct 02 19:40:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:40:55 compute-0 ceph-mon[191910]: pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:57 compute-0 ceph-mon[191910]: pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:59 compute-0 ceph-mon[191910]: pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:40:59 compute-0 podman[157186]: time="2025-10-02T19:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:40:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:40:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8532 "" "Go-http-client/1.1"
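
The podman[157186] lines are the libpod REST service answering a Go client over its API socket, logged in HTTP access-log format with '@' standing in for the unix-socket peer. The same endpoints can be queried from Python with only the standard library; a sketch assuming the rootful socket at /run/podman/podman.sock (the version prefix is taken from the request line above):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix socket; the host value is unused."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = json.loads(conn.getresponse().read())
    print(len(body), "containers")   # the GET above returned 45033 bytes
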
Oct 02 19:41:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: ERROR   19:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:41:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:41:01 compute-0 ceph-mon[191910]: pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:41:03
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'backups']
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
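
The balancer pass above runs in upmap mode with a 5% max-misplaced budget; "prepared 0/10 changes" means the optimizer found nothing worth moving across the eleven listed pools, consistent with all 321 PGs being active+clean. The same state is visible from the CLI; a sketch using the JSON output form, assuming the usual balancer status fields:

    import json
    import subprocess

    st = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        check=True, capture_output=True, text=True).stdout)
    print(st["active"], st["mode"])   # expect: True upmap
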
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:03 compute-0 ceph-mon[191910]: pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:03 compute-0 podman[414549]: 2025-10-02 19:41:03.727083849 +0000 UTC m=+0.141489240 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:41:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
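Here the rbd_support module reloads its trash-purge and mirror-snapshot schedules for each RBD-enabled pool (vms, volumes, images, backups); the empty start_after= means each pool is scanned from the beginning. The loaded schedules can be listed with the rbd CLI; a quick check, assuming the schedule subcommands present in recent Ceph releases:

    rbd trash purge schedule ls --recursive
    rbd mirror snapshot schedule ls --recursive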
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.293 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.295 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.bytes.delta': [], 'network.incoming.packets.error': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:41:04.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
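Each polling cycle above follows the same shape: every pollster is registered against one shared ThreadPoolExecutor ([1] worker thread here, fewer than the number of pollsters, hence the warning at 19:41:04.293), discovery runs local_instances once per pollster, and because this host currently runs no instances the discovery cache stays [] and every meter is skipped before being marked finished. The cycle's outcome can be confirmed straight from the journal, using the same -t tag the collection commands later in this log use:

    # count skipped pollsters in the 19:41:04 cycle
    journalctl -t ceilometer_agent_compute \
      --since "2025-10-02 19:41:04" --until "2025-10-02 19:41:05" \
      | grep -c 'Skip pollster'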
Oct 02 19:41:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:05 compute-0 ceph-mon[191910]: pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
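The _set_new_cache_sizes lines are the mon's periodic memory autotuning: it re-splits its cache budget (roughly 1 GB here, cache_size:1020054731) between incremental and full osdmap caches and the rocksdb (kv) cache. The budget derives from the mon's memory target; a hedged check, assuming the mon_memory_target option available since Nautilus:

    ceph config get mon mon_memory_target
    # per-daemon effective value
    ceph config show mon.compute-0 mon_memory_target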
Oct 02 19:41:06 compute-0 sshd-session[414570]: Accepted publickey for zuul from 38.102.83.68 port 34824 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 19:41:06 compute-0 systemd-logind[793]: New session 62 of user zuul.
Oct 02 19:41:06 compute-0 systemd[1]: Started Session 62 of User zuul.
Oct 02 19:41:06 compute-0 sshd-session[414570]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:41:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:07 compute-0 ceph-mon[191910]: pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:07 compute-0 python3[414747]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:41:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:09 compute-0 podman[414853]: 2025-10-02 19:41:09.68866216 +0000 UTC m=+0.103795692 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:41:09 compute-0 podman[414855]: 2025-10-02 19:41:09.702836183 +0000 UTC m=+0.122046695 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Oct 02 19:41:09 compute-0 ceph-mon[191910]: pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:10 compute-0 sudo[415018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofhpsthypainrpgsqglvjcnznnizlbx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759434069.7762163-35018-189918167561679/AnsiballZ_command.py'
Oct 02 19:41:10 compute-0 sudo[415018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:41:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:10 compute-0 python3[415020]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:41:11 compute-0 sudo[415018]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:11 compute-0 ceph-mon[191910]: pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:11 compute-0 sudo[415171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urcawrksyvwevhiavqemkcghqerxtntq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759434071.3561862-35029-216555686154591/AnsiballZ_command.py'
Oct 02 19:41:11 compute-0 sudo[415171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:41:12 compute-0 python3[415173]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
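These two sudo/ansible pairs are the Zuul job collecting the last 30 minutes of per-service logs over SSH. Outside of Ansible, the same collection is the two-liner already embedded in the _raw_params above, restated directly:

    tstamp=$(date -d '30 minutes ago' "+%Y-%m-%d %H:%M:%S")
    journalctl -t ceilometer_agent_compute --no-pager -S "${tstamp}"
    journalctl -t nova_compute --no-pager -S "${tstamp}"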
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:41:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
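The pg_autoscaler arithmetic is visible in these lines: per pool, pg target = capacity_ratio x bias x total PG budget, then quantized to a power of two. The budget here works out to 300, consistent with three OSDs at the default mon_target_pg_per_osd of 100: for '.mgr', 7.185749983720779e-06 x 1.0 x 300 ≈ 0.00216, exactly the printed pg target, quantized up to 1; for 'cephfs.cephfs.meta', the 4.0 bias gives 5.087256625643029e-07 x 4.0 x 300 ≈ 0.00061, quantized to 16 against its current 32 (the autoscaler only acts once the gap is large enough). The per-pool view, assuming standard CLI access:

    ceph osd pool autoscale-status
    ceph config get mon mon_target_pg_per_osd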
Oct 02 19:41:13 compute-0 sudo[415171]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:13 compute-0 ceph-mon[191910]: pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:14 compute-0 podman[415275]: 2025-10-02 19:41:14.746235453 +0000 UTC m=+0.170248816 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:41:14 compute-0 python3[415344]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:41:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:15 compute-0 ceph-mon[191910]: pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:16 compute-0 sudo[415495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owcdvvxazovgztbgmitamyevmuiapsvt ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759434075.4487362-35073-146471269582943/AnsiballZ_setup.py'
Oct 02 19:41:16 compute-0 sudo[415495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
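
The COMMAND string in the sudo audit line above is Ansible's standard become wrapper: it echoes a one-time BECOME-SUCCESS token before the real payload so the controller can detect that privilege escalation worked and discard anything sudo or the login shell printed first. A rough re-creation of the mechanism (the payload path is a placeholder, not a real module):

    import secrets
    import subprocess

    # Hypothetical stand-in for the wrapper visible in the sudo line:
    # echo a random success marker, then run the actual module payload.
    marker = "BECOME-SUCCESS-" + secrets.token_hex(16)
    cmd = f"echo {marker} ; /usr/bin/python3 /path/to/AnsiballZ_setup.py"
    out = subprocess.run(["sudo", "/bin/sh", "-c", cmd],
                         capture_output=True, text=True).stdout
    # Everything before the marker (banners, MOTD noise) is thrown away.
    payload = out.split(marker, 1)[-1]
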
Oct 02 19:41:16 compute-0 podman[415497]: 2025-10-02 19:41:16.174707657 +0000 UTC m=+0.131607223 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9)
Oct 02 19:41:16 compute-0 python3[415498]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:41:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:17 compute-0 ceph-mon[191910]: pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:17 compute-0 sudo[415495]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:19 compute-0 sudo[415784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlalftgthmnrvevbnwruuemlyhnlwnl ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759434078.6340716-35102-260558616900721/AnsiballZ_command.py'
Oct 02 19:41:19 compute-0 sudo[415784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:41:19 compute-0 podman[415725]: 2025-10-02 19:41:19.180619375 +0000 UTC m=+0.115752155 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:41:19 compute-0 podman[415729]: 2025-10-02 19:41:19.18265791 +0000 UTC m=+0.107594905 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:41:19 compute-0 podman[415732]: 2025-10-02 19:41:19.238039855 +0000 UTC m=+0.150184424 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:41:19 compute-0 python3[415797]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
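
This ad-hoc health probe is straightforward to reproduce; the Go template in the _raw_params is passed straight through to podman. A minimal Python equivalent of what the Ansible command module ran here:

    import subprocess

    # List all containers as "name status" lines, then filter for the
    # one the playbook cares about (ceilometer_agent_compute here).
    ps = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True)
    matches = [l for l in ps.stdout.splitlines()
               if "ceilometer_agent_compute" in l]
    print(matches)  # e.g. ['ceilometer_agent_compute Up 2 hours (healthy)']
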
Oct 02 19:41:19 compute-0 sudo[415784]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:19 compute-0 ceph-mon[191910]: pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:41:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203954860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:41:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:41:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203954860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
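
The audit lines show mon commands as JSON dicts, which is literally what librados puts on the wire. A hedged sketch of how a client like client.openstack could issue the same "df" through python-rados (the rados binding and the client keyring are assumed to be present on the host, as they evidently are for the Cinder/Nova clients in this log):

    import json
    import rados  # python3-rados, assumed installed with the ceph client

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "df", "format": "json"})
    # mon_command returns (retcode, output buffer, error string).
    ret, outbuf, errs = cluster.mon_command(cmd, b"", timeout=5)
    df = json.loads(outbuf)
    print(df["stats"]["total_avail_bytes"])
    cluster.shutdown()
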
Oct 02 19:41:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:20 compute-0 sudo[415974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhamtwevnpixuryqaqrfcahyatctilyj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759434079.9004834-35119-16795072453299/AnsiballZ_command.py'
Oct 02 19:41:20 compute-0 sudo[415974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:41:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:20 compute-0 python3[415976]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:41:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4203954860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:41:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4203954860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:41:20 compute-0 sudo[415974]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:21 compute-0 ceph-mon[191910]: pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:23 compute-0 podman[416017]: 2025-10-02 19:41:23.729879358 +0000 UTC m=+0.133315249 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
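
Given the long list of --no-collector flags above, this node_exporter serves a deliberately trimmed metric set on host port 9100, with systemd unit metrics explicitly switched on for the edpm_*, ovs/openvswitch, virt* and rsyslog units. A quick smoke test, assuming plain HTTP (the web.config.file it is started with may enable TLS instead, in which case https and the mounted CA material are needed):

    import urllib.request

    # Pull the exporter's metrics page and keep only the systemd series
    # that the --collector.systemd flag above enables.
    with urllib.request.urlopen("http://localhost:9100/metrics",
                                timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)
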
Oct 02 19:41:23 compute-0 podman[416016]: 2025-10-02 19:41:23.778253384 +0000 UTC m=+0.192531258 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Oct 02 19:41:23 compute-0 ceph-mon[191910]: pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:25 compute-0 ceph-mon[191910]: pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:27 compute-0 ceph-mon[191910]: pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:29 compute-0 podman[157186]: time="2025-10-02T19:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:41:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:41:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8536 "" "Go-http-client/1.1"
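
Those two GET lines are the libpod REST API being polled over the podman socket, the same unix socket the podman_exporter container mounts later in this log. Python's standard http.client can speak HTTP over AF_UNIX with a small connection subclass; a self-contained sketch against the versioned endpoint seen above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
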
Oct 02 19:41:29 compute-0 ceph-mon[191910]: pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:31 compute-0 openstack_network_exporter[372736]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:31 compute-0 openstack_network_exporter[372736]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:31 compute-0 openstack_network_exporter[372736]: ERROR   19:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:41:31 compute-0 openstack_network_exporter[372736]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:41:31 compute-0 openstack_network_exporter[372736]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:41:32 compute-0 ceph-mon[191910]: pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:41:32.289 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:41:32.290 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:41:32.290 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
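
The acquire/release triple above is oslo.concurrency's lockutils instrumentation: the "waited"/"held" timings come from the inner wrapper it puts around the decorated method. The pattern, sketched with an illustrative body (the real ProcessMonitor walks its managed haproxy children and respawns dead ones):

    from oslo_concurrency import lockutils

    # Serialize health checks behind a named in-process lock, exactly
    # the lock name visible in the ovn_metadata_agent lines above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # inspect each managed child process, respawn if it died
        pass
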
Oct 02 19:41:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:41:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:41:34 compute-0 ceph-mon[191910]: pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:34 compute-0 podman[416058]: 2025-10-02 19:41:34.719825582 +0000 UTC m=+0.138459578 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:41:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:35 compute-0 nova_compute[355794]: 2025-10-02 19:41:35.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:35 compute-0 nova_compute[355794]: 2025-10-02 19:41:35.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:41:35 compute-0 nova_compute[355794]: 2025-10-02 19:41:35.592 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
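
These "Running periodic task" lines come from oslo.service, which collects decorated methods on the manager class and fires each on its own interval. A sketch of the registration pattern (the 600 s spacing is illustrative, not nova's actual default for this task):

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        # Registered automatically by the PeriodicTasks metaclass and
        # invoked by run_periodic_tasks, as logged above.
        @periodic_task.periodic_task(spacing=600)
        def _run_pending_deletes(self, context):
            # reclaim resources for instances whose delete was deferred
            pass
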
Oct 02 19:41:36 compute-0 ceph-mon[191910]: pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:38 compute-0 ceph-mon[191910]: pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:38 compute-0 nova_compute[355794]: 2025-10-02 19:41:38.592 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:38 compute-0 nova_compute[355794]: 2025-10-02 19:41:38.593 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:39 compute-0 nova_compute[355794]: 2025-10-02 19:41:39.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:40 compute-0 ceph-mon[191910]: pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:40 compute-0 nova_compute[355794]: 2025-10-02 19:41:40.589 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:40 compute-0 podman[416078]: 2025-10-02 19:41:40.657890191 +0000 UTC m=+0.095358785 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:41:40 compute-0 podman[416079]: 2025-10-02 19:41:40.733230734 +0000 UTC m=+0.159480705 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:41:41 compute-0 ceph-mon[191910]: pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:41 compute-0 nova_compute[355794]: 2025-10-02 19:41:41.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:41 compute-0 nova_compute[355794]: 2025-10-02 19:41:41.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:41 compute-0 nova_compute[355794]: 2025-10-02 19:41:41.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:41:41 compute-0 nova_compute[355794]: 2025-10-02 19:41:41.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:41 compute-0 nova_compute[355794]: 2025-10-02 19:41:41.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:41:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:42 compute-0 nova_compute[355794]: 2025-10-02 19:41:42.595 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:43 compute-0 nova_compute[355794]: 2025-10-02 19:41:43.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:43 compute-0 nova_compute[355794]: 2025-10-02 19:41:43.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:41:43 compute-0 nova_compute[355794]: 2025-10-02 19:41:43.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:41:43 compute-0 nova_compute[355794]: 2025-10-02 19:41:43.589 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:41:43 compute-0 nova_compute[355794]: 2025-10-02 19:41:43.589 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:43 compute-0 ceph-mon[191910]: pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.590 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.628 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.629 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.629 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.629 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:41:44 compute-0 nova_compute[355794]: 2025-10-02 19:41:44.630 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:41:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3251161479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.200 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
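
The CMD line confirms the resource tracker gathers Ceph capacity by shelling out through oslo.concurrency's processutils rather than binding librados directly. The equivalent call, using the exact argv from the log:

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
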
Oct 02 19:41:45 compute-0 ceph-mon[191910]: pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3251161479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.692 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.693 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4570MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.693 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.694 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:45 compute-0 podman[416142]: 2025-10-02 19:41:45.731937859 +0000 UTC m=+0.151768047 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.984 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:41:45 compute-0 nova_compute[355794]: 2025-10-02 19:41:45.985 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.096 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.171 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.172 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
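
The inventory above is what placement actually schedules against: per resource class the usable capacity is (total - reserved) * allocation_ratio, so this node advertises four times its physical vCPUs but slightly less disk than it has. Worked out from the logged numbers:

    # Capacity placement will admit against, per the inventory above.
    for name, total, reserved, ratio in [
            ("VCPU", 8, 0, 4.0),
            ("MEMORY_MB", 7679, 512, 1.0),
            ("DISK_GB", 59, 0, 0.9)]:
        print(name, (total - reserved) * ratio)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1
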
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.189 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.211 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.228 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:46 compute-0 podman[416180]: 2025-10-02 19:41:46.70233526 +0000 UTC m=+0.125226021 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git)
Oct 02 19:41:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:41:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486542603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.762 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.771 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.791 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.795 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:41:46 compute-0 nova_compute[355794]: 2025-10-02 19:41:46.796 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:47 compute-0 ceph-mon[191910]: pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2486542603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:41:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:49 compute-0 ceph-mon[191910]: pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:49 compute-0 podman[416203]: 2025-10-02 19:41:49.68048593 +0000 UTC m=+0.107322837 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:41:49 compute-0 podman[416202]: 2025-10-02 19:41:49.685960008 +0000 UTC m=+0.102924639 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:41:49 compute-0 podman[416204]: 2025-10-02 19:41:49.751930799 +0000 UTC m=+0.166605098 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
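The three podman lines above are health_status events: the iscsid, ovn_metadata_agent and ovn_controller containers each ran their mounted /openstack/healthcheck and report healthy with a failing streak of 0. A sketch that follows such transitions live, assuming a podman new enough to emit health_status events (the records above are 4.x style) and allowing for field names that vary between versions:

# Sketch: tail container health transitions like the three events above.
import json, subprocess

proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status",
     "--format", "{{json .}}"],
    stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    ev = json.loads(line)
    # Field names differ slightly across podman versions; hedge accordingly.
    print(ev.get("Name"), ev.get("HealthStatus", ev.get("Status")))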
Oct 02 19:41:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
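ceph-mon re-emits this _set_new_cache_sizes line as its cache autotuner re-splits the mon memory target; the raw byte counts are easier to read in MiB: roughly 973 MiB total cache, 332 MiB incremental/full allocation, and 308 MiB for the RocksDB (kv) share. A quick conversion of the values above:

# Sketch: the _set_new_cache_sizes byte counts above, converted to MiB.
for name, val in {"cache_size": 1020054731, "inc_alloc": 348127232,
                  "full_alloc": 348127232, "kv_alloc": 322961408}.items():
    print(f"{name}: {val / 2**20:.0f} MiB")
# cache_size ~973 MiB, inc/full_alloc 332 MiB, kv_alloc 308 MiB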
Oct 02 19:41:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:51 compute-0 ceph-mon[191910]: pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:53 compute-0 ceph-mon[191910]: pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:53 compute-0 sudo[416264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:53 compute-0 sudo[416264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:53 compute-0 sudo[416264]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:53 compute-0 podman[416288]: 2025-10-02 19:41:53.893034465 +0000 UTC m=+0.083549165 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
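node_exporter above is published on host port 9100 with most collectors disabled and the systemd collector restricted to (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog) units. A spot-check sketch; plain HTTP is an assumption here, since the mounted node_exporter.yaml web config may enforce TLS:

# Sketch: scrape the exporter published on host port 9100 above.
# Assumes plain HTTP; if the web config enforces TLS, switch to https
# and the deployed CA bundle.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:9100/metrics", timeout=5) as r:
    for line in r.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)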
Oct 02 19:41:53 compute-0 sudo[416296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:41:53 compute-0 sudo[416296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:53 compute-0 sudo[416296]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:53 compute-0 podman[416289]: 2025-10-02 19:41:53.93322821 +0000 UTC m=+0.106904326 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:41:54 compute-0 sudo[416357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:54 compute-0 sudo[416357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:54 compute-0 sudo[416357]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:54 compute-0 sudo[416382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 19:41:54 compute-0 sudo[416382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:54 compute-0 sudo[416382]: pam_unix(sudo:session): session closed for user root
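The check-host call above shows the cephadm orchestrator's pattern: the binary is copied to /var/lib/ceph/<fsid>/cephadm.<digest> and run with sudo python3. The digest suffix is conventionally the SHA-256 of the file's contents; treating that as an assumption, it can be verified locally (path copied from the sudo line above):

# Sketch: check the digest-suffixed cephadm file name against the
# SHA-256 of its contents. That the suffix is a SHA-256 is an assumption.
import hashlib, pathlib

path = pathlib.Path(
    "/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
    "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
digest = hashlib.sha256(path.read_bytes()).hexdigest()
print("ok" if path.name.endswith(digest) else "mismatch:", digest)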
Oct 02 19:41:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:41:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:41:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:54 compute-0 sudo[416425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:54 compute-0 sudo[416425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:54 compute-0 sudo[416425]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:54 compute-0 sudo[416450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:41:54 compute-0 sudo[416450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:54 compute-0 sudo[416450]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:54 compute-0 sudo[416475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:54 compute-0 sudo[416475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:54 compute-0 sudo[416475]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:41:55 compute-0 sudo[416500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:41:55 compute-0 sudo[416500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:55 compute-0 ceph-mon[191910]: pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:41:55 compute-0 sudo[416500]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:55 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 63d1e3c4-a624-4383-bae2-6228c31e2974 does not exist
Oct 02 19:41:55 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6aba2e97-a08d-4a91-a525-eafebcdf97c9 does not exist
Oct 02 19:41:55 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b7661bf9-bbd9-497c-8593-3ac9ab974d3c does not exist
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:41:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:41:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
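The burst of handle_command/audit lines above is mgr.compute-0.uktbkz (the cephadm mgr module) driving mon commands: config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree query restricted to destroyed OSDs. The same JSON-shaped commands can be sent through librados; a sketch assuming python3-rados and a readable ceph.conf plus client keyring:

# Sketch: issue one of the audited mon commands above through librados.
import json, rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                  "format": "json"})
ret, out, err = cluster.mon_command(cmd, b"")
print(ret, json.loads(out) if out else err)
cluster.shutdown()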
Oct 02 19:41:55 compute-0 sudo[416556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:55 compute-0 sudo[416556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:55 compute-0 sudo[416556]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:56 compute-0 sudo[416581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:41:56 compute-0 sudo[416581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:56 compute-0 sudo[416581]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:56 compute-0 sudo[416606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:56 compute-0 sudo[416606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:56 compute-0 sudo[416606]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:56 compute-0 sudo[416631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:41:56 compute-0 sudo[416631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:41:56 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:41:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 19:41:56 compute-0 podman[416695]: 2025-10-02 19:41:56.883583389 +0000 UTC m=+0.092547668 container create f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:41:56 compute-0 podman[416695]: 2025-10-02 19:41:56.844082703 +0000 UTC m=+0.053047072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:41:56 compute-0 systemd[1]: Started libpod-conmon-f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b.scope.
Oct 02 19:41:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:41:57 compute-0 podman[416695]: 2025-10-02 19:41:57.031181643 +0000 UTC m=+0.240145942 container init f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:41:57 compute-0 podman[416695]: 2025-10-02 19:41:57.051303556 +0000 UTC m=+0.260267845 container start f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:41:57 compute-0 podman[416695]: 2025-10-02 19:41:57.057681638 +0000 UTC m=+0.266645947 container attach f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:41:57 compute-0 focused_margulis[416711]: 167 167
Oct 02 19:41:57 compute-0 systemd[1]: libpod-f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b.scope: Deactivated successfully.
Oct 02 19:41:57 compute-0 podman[416695]: 2025-10-02 19:41:57.064640556 +0000 UTC m=+0.273604855 container died f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-28cf8e7e3e16da7b2a6e6c8c9ad2ce557f0174ff2e756af7eac36778e26f770b-merged.mount: Deactivated successfully.
Oct 02 19:41:57 compute-0 podman[416695]: 2025-10-02 19:41:57.137240656 +0000 UTC m=+0.346204945 container remove f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:41:57 compute-0 systemd[1]: libpod-conmon-f41de3a90f0a642e4148e0258c5ed02cce0e6fd3ae9157e412b6b58396d0ba6b.scope: Deactivated successfully.
Oct 02 19:41:57 compute-0 podman[416735]: 2025-10-02 19:41:57.357995384 +0000 UTC m=+0.074085171 container create f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:41:57 compute-0 systemd[1]: Started libpod-conmon-f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b.scope.
Oct 02 19:41:57 compute-0 podman[416735]: 2025-10-02 19:41:57.325015734 +0000 UTC m=+0.041105531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:41:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:57 compute-0 podman[416735]: 2025-10-02 19:41:57.501133817 +0000 UTC m=+0.217223584 container init f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:41:57 compute-0 podman[416735]: 2025-10-02 19:41:57.536182333 +0000 UTC m=+0.252272140 container start f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:41:57 compute-0 podman[416735]: 2025-10-02 19:41:57.542315599 +0000 UTC m=+0.258405396 container attach f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:41:57 compute-0 ceph-mon[191910]: pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 19:41:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Oct 02 19:41:58 compute-0 gracious_mahavira[416751]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:41:58 compute-0 gracious_mahavira[416751]: --> relative data size: 1.0
Oct 02 19:41:58 compute-0 gracious_mahavira[416751]: --> All data devices are unavailable
Oct 02 19:41:58 compute-0 systemd[1]: libpod-f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b.scope: Deactivated successfully.
Oct 02 19:41:58 compute-0 podman[416735]: 2025-10-02 19:41:58.769083238 +0000 UTC m=+1.485173045 container died f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:41:58 compute-0 systemd[1]: libpod-f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b.scope: Consumed 1.174s CPU time.
Oct 02 19:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4105ce102edcdac6a47a703a924124abe7af71d9b1acfd8e01c001e25e6e0327-merged.mount: Deactivated successfully.
Oct 02 19:41:58 compute-0 podman[416735]: 2025-10-02 19:41:58.869135789 +0000 UTC m=+1.585225576 container remove f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:41:58 compute-0 systemd[1]: libpod-conmon-f7c93a50892f5193245b9773ad5a726ea4a4f5244654b871ee3bab58019dcc6b.scope: Deactivated successfully.
Oct 02 19:41:58 compute-0 sudo[416631]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:59 compute-0 sudo[416793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:59 compute-0 sudo[416793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:59 compute-0 sudo[416793]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:59 compute-0 sudo[416818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:41:59 compute-0 sudo[416818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:59 compute-0 sudo[416818]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:59 compute-0 sudo[416843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:41:59 compute-0 sudo[416843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:59 compute-0 sudo[416843]: pam_unix(sudo:session): session closed for user root
Oct 02 19:41:59 compute-0 sudo[416868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:41:59 compute-0 sudo[416868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:41:59 compute-0 ceph-mon[191910]: pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Oct 02 19:41:59 compute-0 podman[157186]: time="2025-10-02T19:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:41:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:41:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8537 "" "Go-http-client/1.1"
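The podman[157186] lines are the libpod REST service logging a Go client's queries (a full container listing and a stats call) against API version v4.9.3. The listing can be replayed over the podman API socket; a sketch in which the rootful default socket path is an assumption:

# Sketch: repeat the libpod query from the access log above over the
# podman API socket. /run/podman/podman.sock is the rootful default.
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
print(len(json.loads(conn.getresponse().read())), "containers")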
Oct 02 19:42:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.163888074 +0000 UTC m=+0.082650822 container create 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.133128464 +0000 UTC m=+0.051891262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:42:00 compute-0 systemd[1]: Started libpod-conmon-85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1.scope.
Oct 02 19:42:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.307656074 +0000 UTC m=+0.226418822 container init 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.325529197 +0000 UTC m=+0.244291955 container start 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.33305815 +0000 UTC m=+0.251820888 container attach 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 02 19:42:00 compute-0 epic_cannon[416947]: 167 167
Oct 02 19:42:00 compute-0 systemd[1]: libpod-85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1.scope: Deactivated successfully.
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.339073042 +0000 UTC m=+0.257835770 container died 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-32b4a5db580b8469515773d573e23904c7cc8303b8be1da7122242f89959ca89-merged.mount: Deactivated successfully.
Oct 02 19:42:00 compute-0 podman[416931]: 2025-10-02 19:42:00.412442743 +0000 UTC m=+0.331205501 container remove 85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:42:00 compute-0 systemd[1]: libpod-conmon-85a19457d5fd29957f3a3cf8987a8440625dd3b9cd69b7ab1b19ca9f65c5eeb1.scope: Deactivated successfully.
Oct 02 19:42:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
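The pgmap lines repeated by ceph-mgr and ceph-mon every couple of seconds carry the whole cluster picture: 321 PGs all active+clean, data/usage/available figures, and (when nonzero) read/write throughput. A regex sketch that parses the format seen above, with the throughput clause optional:

# Sketch: parse a pgmap status line in the format logged above.
import re

line = ("pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, "
        "60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s")
m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+); (.+?) data, (.+?) used, "
              r"(.+?) / (.+?) avail(?:; (.+?) rd, (.+?) wr, (\d+) op/s)?$", line)
print(dict(zip(["version", "pgs", "count", "state", "data", "used",
                "avail", "total", "rd", "wr", "ops"], m.groups())))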
Oct 02 19:42:00 compute-0 podman[416970]: 2025-10-02 19:42:00.704893266 +0000 UTC m=+0.081925342 container create 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 19:42:00 compute-0 podman[416970]: 2025-10-02 19:42:00.669065369 +0000 UTC m=+0.046097495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:42:00 compute-0 systemd[1]: Started libpod-conmon-0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e.scope.
Oct 02 19:42:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4ea090834c1d967f7f7cdc77e50014946412124017e14a3c589ae0b3b3d880/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4ea090834c1d967f7f7cdc77e50014946412124017e14a3c589ae0b3b3d880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4ea090834c1d967f7f7cdc77e50014946412124017e14a3c589ae0b3b3d880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4ea090834c1d967f7f7cdc77e50014946412124017e14a3c589ae0b3b3d880/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:00 compute-0 podman[416970]: 2025-10-02 19:42:00.910453404 +0000 UTC m=+0.287485500 container init 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 19:42:00 compute-0 podman[416970]: 2025-10-02 19:42:00.939313783 +0000 UTC m=+0.316345819 container start 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 02 19:42:00 compute-0 podman[416970]: 2025-10-02 19:42:00.946053255 +0000 UTC m=+0.323085371 container attach 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:42:01 compute-0 openstack_network_exporter[372736]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:01 compute-0 openstack_network_exporter[372736]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:01 compute-0 openstack_network_exporter[372736]: ERROR   19:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:42:01 compute-0 openstack_network_exporter[372736]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:42:01 compute-0 openstack_network_exporter[372736]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
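The recurring openstack_network_exporter errors above are probes for daemons this compute node does not run: ovn-northd lives on the control plane, no ovsdb-server control socket exists where the exporter looks, and dpif-netdev PMD statistics only exist with the userspace (DPDK) datapath, not the kernel datapath in use here. A sketch listing which control sockets actually exist; the paths follow the volume mounts seen earlier (/run/ovn inside the containers is backed by /var/lib/openvswitch/ovn on the host) and are assumptions for other layouts:

# Sketch: list the ovs/ovn control sockets the exporter's appctl calls
# above are probing for.
import glob

for pattern in ("/var/run/openvswitch/*.ctl",
                "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "none")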
Oct 02 19:42:01 compute-0 ceph-mon[191910]: pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:01 compute-0 magical_dirac[416986]: {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     "0": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "devices": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "/dev/loop3"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             ],
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_name": "ceph_lv0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_size": "21470642176",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "name": "ceph_lv0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "tags": {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_name": "ceph",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.crush_device_class": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.encrypted": "0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_id": "0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.vdo": "0"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             },
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "vg_name": "ceph_vg0"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         }
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     ],
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     "1": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "devices": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "/dev/loop4"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             ],
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_name": "ceph_lv1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_size": "21470642176",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "name": "ceph_lv1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "tags": {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_name": "ceph",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.crush_device_class": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.encrypted": "0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_id": "1",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.vdo": "0"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             },
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "vg_name": "ceph_vg1"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         }
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     ],
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     "2": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "devices": [
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "/dev/loop5"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             ],
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_name": "ceph_lv2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_size": "21470642176",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "name": "ceph_lv2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "tags": {
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.cluster_name": "ceph",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.crush_device_class": "",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.encrypted": "0",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osd_id": "2",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:                 "ceph.vdo": "0"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             },
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "type": "block",
Oct 02 19:42:01 compute-0 magical_dirac[416986]:             "vg_name": "ceph_vg2"
Oct 02 19:42:01 compute-0 magical_dirac[416986]:         }
Oct 02 19:42:01 compute-0 magical_dirac[416986]:     ]
Oct 02 19:42:01 compute-0 magical_dirac[416986]: }
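The JSON blob closed above, printed by the one-shot magical_dirac container, is ceph-volume's LVM inventory keyed by OSD id (by its shape, the output of a `ceph-volume lvm list --format json` run). Each entry carries its metadata twice: once as the flat lv_tags string and once as the expanded tags object. A minimal sketch of that correspondence, assuming tag values never contain commas (true for every entry logged here):

    # Expand a ceph-volume "lv_tags" string into the "tags" dict shown in
    # the listing above. Assumes no commas inside tag values.
    def parse_lv_tags(lv_tags: str) -> dict:
        tags = {}
        for pair in lv_tags.split(","):
            key, _, value = pair.partition("=")
            tags[key] = value
        return tags

    lv_tags = ("ceph.block_device=/dev/ceph_vg1/ceph_lv1,"
               "ceph.cluster_name=ceph,ceph.osd_id=1,ceph.type=block")
    print(parse_lv_tags(lv_tags)["ceph.osd_id"])   # -> "1"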
Oct 02 19:42:01 compute-0 systemd[1]: libpod-0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e.scope: Deactivated successfully.
Oct 02 19:42:01 compute-0 podman[416970]: 2025-10-02 19:42:01.830129836 +0000 UTC m=+1.207161902 container died 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4ea090834c1d967f7f7cdc77e50014946412124017e14a3c589ae0b3b3d880-merged.mount: Deactivated successfully.
Oct 02 19:42:01 compute-0 podman[416970]: 2025-10-02 19:42:01.919688052 +0000 UTC m=+1.296720088 container remove 0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dirac, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:42:01 compute-0 systemd[1]: libpod-conmon-0ad0733860d3ce17cc417355043f6dc4df5fc282fcb8e87e627e41c1e2019e2e.scope: Deactivated successfully.
Oct 02 19:42:01 compute-0 sudo[416868]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:02 compute-0 sudo[417007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:42:02 compute-0 sudo[417007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:42:02 compute-0 sudo[417007]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:02 compute-0 sudo[417032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:42:02 compute-0 sudo[417032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:42:02 compute-0 sudo[417032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:02 compute-0 sudo[417057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:42:02 compute-0 sudo[417057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:42:02 compute-0 sudo[417057]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:02 compute-0 sudo[417082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:42:02 compute-0 sudo[417082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
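The sudo COMMAND line above shows cephadm's execution pattern end to end: ceph-admin sudo-runs python3 against a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/, passing --image and --timeout, and everything after `--` is the ceph-volume subcommand that runs inside a throwaway podman container, with its JSON relayed back on stdout. A hypothetical sketch that rebuilds that invocation verbatim from the log line, purely to illustrate the pattern (it assumes the same host state and passwordless sudo):

    import shlex, subprocess

    # Hypothetical re-run of the ceph-volume call logged at 19:42:02; the
    # command text is copied verbatim from the journal line above.
    cmd = (
        "sudo /bin/python3 "
        "/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d "
        "--image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 "
        "--timeout 895 ceph-volume "
        "--fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json"
    )
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    print(result.stdout)   # the same JSON map that happy_bhabha prints below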
Oct 02 19:42:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:02 compute-0 podman[417145]: 2025-10-02 19:42:02.99666082 +0000 UTC m=+0.068216522 container create 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:42:03 compute-0 systemd[1]: Started libpod-conmon-945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1.scope.
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:02.974003398 +0000 UTC m=+0.045559130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:42:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:03.150189733 +0000 UTC m=+0.221745495 container init 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:03.166180345 +0000 UTC m=+0.237736037 container start 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:03.171456857 +0000 UTC m=+0.243012579 container attach 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:42:03 compute-0 fervent_visvesvaraya[417161]: 167 167
Oct 02 19:42:03 compute-0 systemd[1]: libpod-945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1.scope: Deactivated successfully.
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:03.173348099 +0000 UTC m=+0.244903821 container died 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac8709633735161911cc5906e90433b7807ec5f74c69cb047331601613a76ec4-merged.mount: Deactivated successfully.
Oct 02 19:42:03 compute-0 podman[417145]: 2025-10-02 19:42:03.247261164 +0000 UTC m=+0.318816886 container remove 945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_visvesvaraya, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:42:03 compute-0 systemd[1]: libpod-conmon-945d953b6037726acfb74fdcaaf2e48530629c3abc98ad60df751e77dfbcadf1.scope: Deactivated successfully.
Oct 02 19:42:03 compute-0 podman[417185]: 2025-10-02 19:42:03.530077057 +0000 UTC m=+0.062528449 container create aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:42:03
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control']
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:42:03 compute-0 ceph-mon[191910]: pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:03 compute-0 systemd[1]: Started libpod-conmon-aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26.scope.
Oct 02 19:42:03 compute-0 podman[417185]: 2025-10-02 19:42:03.508607977 +0000 UTC m=+0.041059409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800631de69a78835e62aa6f04246fef36ac55758915095fa3b475af136d5e912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800631de69a78835e62aa6f04246fef36ac55758915095fa3b475af136d5e912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800631de69a78835e62aa6f04246fef36ac55758915095fa3b475af136d5e912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800631de69a78835e62aa6f04246fef36ac55758915095fa3b475af136d5e912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:03 compute-0 podman[417185]: 2025-10-02 19:42:03.66617591 +0000 UTC m=+0.198627332 container init aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:42:03 compute-0 podman[417185]: 2025-10-02 19:42:03.68543391 +0000 UTC m=+0.217885302 container start aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:42:03 compute-0 podman[417185]: 2025-10-02 19:42:03.690688222 +0000 UTC m=+0.223139684 container attach aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:42:03 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:42:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:42:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:04 compute-0 happy_bhabha[417200]: {
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_id": 1,
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "type": "bluestore"
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     },
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_id": 2,
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "type": "bluestore"
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     },
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_id": 0,
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:         "type": "bluestore"
Oct 02 19:42:04 compute-0 happy_bhabha[417200]:     }
Oct 02 19:42:04 compute-0 happy_bhabha[417200]: }
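This map, printed by happy_bhabha, is the output of the `raw list --format json` call above: entries keyed by osd_uuid tying each BlueStore OSD (0, 1, 2) to its device-mapper path, which the mgr then persists via the config-key set calls that follow. A minimal sketch of reducing it to an osd_id -> device map, assuming the JSON has been captured into a string:

    import json

    # Reduce "ceph-volume raw list --format json" output (as logged above)
    # to an osd_id -> device mapping; trimmed to one entry for brevity.
    raw_json = """{
        "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
            "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
            "type": "bluestore"
        }
    }"""

    osds = {entry["osd_id"]: entry["device"]
            for entry in json.loads(raw_json).values()}
    print(osds)   # -> {1: '/dev/mapper/ceph_vg1-ceph_lv1'}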
Oct 02 19:42:04 compute-0 systemd[1]: libpod-aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26.scope: Deactivated successfully.
Oct 02 19:42:04 compute-0 podman[417185]: 2025-10-02 19:42:04.844526334 +0000 UTC m=+1.376977726 container died aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:42:04 compute-0 systemd[1]: libpod-aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26.scope: Consumed 1.164s CPU time.
Oct 02 19:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-800631de69a78835e62aa6f04246fef36ac55758915095fa3b475af136d5e912-merged.mount: Deactivated successfully.
Oct 02 19:42:04 compute-0 podman[417185]: 2025-10-02 19:42:04.938652284 +0000 UTC m=+1.471103676 container remove aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhabha, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:04 compute-0 systemd[1]: libpod-conmon-aede996f1f221093003d48a08d239182b241fdd7f7961c97ade3d21b625f1f26.scope: Deactivated successfully.
Oct 02 19:42:04 compute-0 sudo[417082]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:42:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:42:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:42:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:42:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 388b019d-7acc-4a0c-afdf-5c87063ec434 does not exist
Oct 02 19:42:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f438b086-5475-4743-8385-dee37ba6fa8d does not exist
Oct 02 19:42:05 compute-0 podman[417234]: 2025-10-02 19:42:05.025914589 +0000 UTC m=+0.134019188 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:42:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:05 compute-0 sudo[417266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:42:05 compute-0 sudo[417266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:42:05 compute-0 sudo[417266]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:05 compute-0 sudo[417292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:42:05 compute-0 sudo[417292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:42:05 compute-0 sudo[417292]: pam_unix(sudo:session): session closed for user root
Oct 02 19:42:06 compute-0 ceph-mon[191910]: pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:42:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:42:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:08 compute-0 ceph-mon[191910]: pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 19:42:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.046537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129046580, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3522572, "memory_usage": 3574368, "flush_reason": "Manual Compaction"}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129070815, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3424010, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20900, "largest_seqno": 22955, "table_properties": {"data_size": 3414699, "index_size": 5869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18602, "raw_average_key_size": 19, "raw_value_size": 3396166, "raw_average_value_size": 3643, "num_data_blocks": 267, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759433905, "oldest_key_time": 1759433905, "file_creation_time": 1759434129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 24367 microseconds, and 13695 cpu microseconds.
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.070903) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3424010 bytes OK
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.070931) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.073935) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.073958) EVENT_LOG_v1 {"time_micros": 1759434129073951, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.073979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3513953, prev total WAL file size 3513953, number of live WAL files 2.
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.076591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3343KB)], [50(7371KB)]
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129076667, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10972019, "oldest_snapshot_seqno": -1}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4697 keys, 9215473 bytes, temperature: kUnknown
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129155855, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9215473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9181620, "index_size": 21007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 114936, "raw_average_key_size": 24, "raw_value_size": 9094247, "raw_average_value_size": 1936, "num_data_blocks": 886, "num_entries": 4697, "num_filter_entries": 4697, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.156138) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9215473 bytes
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.158720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.4 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5211, records dropped: 514 output_compression: NoCompression
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.158749) EVENT_LOG_v1 {"time_micros": 1759434129158736, "job": 26, "event": "compaction_finished", "compaction_time_micros": 79260, "compaction_time_cpu_micros": 41115, "output_level": 6, "num_output_files": 1, "total_output_size": 9215473, "num_input_records": 5211, "num_output_records": 4697, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129160113, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434129163065, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.076327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.163276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.163285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.163289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.163293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:42:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:42:09.163297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
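The JOB 25/26 sequence above is one manual flush-and-compact cycle on the mon's RocksDB store: the memtable flushes to L0 table #52 (3,424,010 bytes), which is merged with L6 table #50 into table #53 (9,215,473 bytes), after which both inputs are deleted. The amplification factors in the summary line check out against the logged byte counts, on the usual reading that write-amplify is bytes written over bytes ingested from L0 and read-write-amplify adds the bytes read:

    # Reproduce the JOB 26 amplification factors from the logged byte counts.
    l0_input  = 3_424_010    # flushed L0 table #52 (the compaction "ingest")
    total_in  = 10_972_019   # input_data_size: table #52 + L6 table #50
    total_out = 9_215_473    # generated L6 table #53

    write_amplify      = total_out / l0_input               # ~2.69 -> "2.7"
    read_write_amplify = (total_in + total_out) / l0_input  # ~5.90 -> "5.9"
    print(round(write_amplify, 1), round(read_write_amplify, 1))   # 2.7 5.9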
Oct 02 19:42:10 compute-0 ceph-mon[191910]: pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 19:42:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 19:42:11 compute-0 podman[417318]: 2025-10-02 19:42:11.689053846 +0000 UTC m=+0.103355601 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:42:11 compute-0 podman[417317]: 2025-10-02 19:42:11.715225202 +0000 UTC m=+0.133600817 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:42:12 compute-0 ceph-mon[191910]: pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:42:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
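Every autoscaler line in the pass above fits one formula: pg target = capacity ratio x bias x 300, where the factor 300 is inferred from the logged numbers themselves and is consistent with 100 PGs per OSD (the usual mon_target_pg_per_osd default) across the three OSDs listed earlier; the raw target is then quantized against the pool's current pg_num. A sketch reproducing two of the non-zero targets, under that inferred budget:

    # Reproduce the pg_autoscaler "pg target" values logged above. The
    # factor 300 is inferred from the log (100 PGs/OSD x 3 OSDs).
    PG_BUDGET = 300

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557249951162337
    print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> ~0.0006104707950771635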
Oct 02 19:42:14 compute-0 ceph-mon[191910]: pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:16 compute-0 ceph-mon[191910]: pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:16 compute-0 nova_compute[355794]: 2025-10-02 19:42:16.184 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:16 compute-0 podman[417356]: 2025-10-02 19:42:16.698660324 +0000 UTC m=+0.123142805 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:17 compute-0 podman[417374]: 2025-10-02 19:42:17.70626479 +0000 UTC m=+0.125017666 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9)
Oct 02 19:42:18 compute-0 ceph-mon[191910]: pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:42:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344449352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:42:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:42:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344449352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:42:20 compute-0 ceph-mon[191910]: pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/344449352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:42:20 compute-0 sshd-session[414573]: Received disconnect from 38.102.83.68 port 34824:11: disconnected by user
Oct 02 19:42:20 compute-0 sshd-session[414573]: Disconnected from user zuul 38.102.83.68 port 34824
Oct 02 19:42:20 compute-0 sshd-session[414570]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:42:20 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Oct 02 19:42:20 compute-0 systemd[1]: session-62.scope: Consumed 12.836s CPU time.
Oct 02 19:42:20 compute-0 systemd-logind[793]: Session 62 logged out. Waiting for processes to exit.
Oct 02 19:42:20 compute-0 systemd-logind[793]: Removed session 62.
Oct 02 19:42:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:20 compute-0 podman[417393]: 2025-10-02 19:42:20.57850314 +0000 UTC m=+0.097704228 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:42:20 compute-0 podman[417394]: 2025-10-02 19:42:20.603898875 +0000 UTC m=+0.108407887 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:42:20 compute-0 podman[417395]: 2025-10-02 19:42:20.683541685 +0000 UTC m=+0.181303154 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
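The three health_status records above come from podman's periodic healthcheck timers; each container's healthcheck config mounts a per-service directory and runs /openstack/healthcheck as the test. The same checks can be run on demand (container names from the log; exit code 0 means healthy):

    import subprocess

    # "podman healthcheck run" executes the container's configured test.
    for name in ("ovn_metadata_agent", "iscsid", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")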
Oct 02 19:42:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/344449352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:42:22 compute-0 ceph-mon[191910]: pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:24 compute-0 ceph-mon[191910]: pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:24 compute-0 podman[417455]: 2025-10-02 19:42:24.704027818 +0000 UTC m=+0.119710492 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:42:24 compute-0 podman[417454]: 2025-10-02 19:42:24.711191861 +0000 UTC m=+0.130388370 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:42:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
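The recurring _set_new_cache_sizes lines come from the mon's cache autotuner, which periodically re-splits its memory budget (here cache_size ≈ 0.95 GiB) between the incremental/full osdmap caches and the RocksDB cache. The budget it sizes against is the mon_memory_target option; a sketch for inspecting or adjusting it, assuming admin access to the cluster:

    import subprocess

    # Read the memory target the mon autotuner sizes its caches against.
    subprocess.run(["ceph", "config", "get", "mon", "mon_memory_target"])
    # Optionally raise it; 2 GiB here is an illustrative value only.
    subprocess.run(["ceph", "config", "set", "mon", "mon_memory_target", "2147483648"])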
Oct 02 19:42:26 compute-0 ceph-mon[191910]: pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:27 compute-0 ceph-mon[191910]: pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:29 compute-0 ceph-mon[191910]: pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:29 compute-0 podman[157186]: time="2025-10-02T19:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:42:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:42:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8541 "" "Go-http-client/1.1"
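The two GET lines above are libpod REST calls arriving over the podman API socket (the podman_exporter container below is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). A self-contained sketch of the same containers/json call over that socket:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")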
Oct 02 19:42:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:31 compute-0 openstack_network_exporter[372736]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:31 compute-0 openstack_network_exporter[372736]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:31 compute-0 openstack_network_exporter[372736]: ERROR   19:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:42:31 compute-0 openstack_network_exporter[372736]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:42:31 compute-0 openstack_network_exporter[372736]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
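These exporter errors are expected on a compute node: ovn-northd and the OVN ovsdb-server run on the control plane, so the appctl control sockets the exporter probes never appear locally, and the dpif-netdev calls only apply to a userspace (netdev/DPDK) datapath. A trivial check of which control sockets actually exist on this host, using the default runtime paths mounted into the exporter:

    import glob

    # On a compute node, expect only ovn-controller and ovs-vswitchd sockets.
    print(glob.glob("/run/ovn/*.ctl") + glob.glob("/run/openvswitch/*.ctl"))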
Oct 02 19:42:31 compute-0 ceph-mon[191910]: pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:42:32.291 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:42:32.292 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:42:32.292 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
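The acquire/release pair above is the standard oslo.concurrency pattern the metadata agent uses to serialize its child-process check. A minimal equivalent, with the lock name copied from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # body elided; the decorator makes concurrent callers wait, which
        # is what produces the "waited ... held ..." timings in the log
        pass

    check_child_processes()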
Oct 02 19:42:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:42:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:42:33 compute-0 ceph-mon[191910]: pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:35 compute-0 ceph-mon[191910]: pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:35 compute-0 podman[417495]: 2025-10-02 19:42:35.711269709 +0000 UTC m=+0.133107033 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:42:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:37 compute-0 ceph-mon[191910]: pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:38 compute-0 nova_compute[355794]: 2025-10-02 19:42:38.736 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:39 compute-0 ceph-mon[191910]: pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:40 compute-0 nova_compute[355794]: 2025-10-02 19:42:40.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:41 compute-0 nova_compute[355794]: 2025-10-02 19:42:41.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:41 compute-0 nova_compute[355794]: 2025-10-02 19:42:41.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
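The skip message above means soft-delete reclaim is off: _reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive (it defaults to 0, and enabling reclaim means setting a positive value under [DEFAULT] in nova.conf). A sketch of the gate nova applies, re-registering the option so the snippet is self-contained:

    from oslo_config import cfg

    CONF = cfg.CONF
    # nova registers this option itself; redone here for a standalone demo
    CONF.register_opt(cfg.IntOpt("reclaim_instance_interval", default=0))
    CONF([])  # parse no CLI args; defaults apply
    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")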
Oct 02 19:42:41 compute-0 ceph-mon[191910]: pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:42 compute-0 nova_compute[355794]: 2025-10-02 19:42:42.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:42 compute-0 podman[417514]: 2025-10-02 19:42:42.72210967 +0000 UTC m=+0.138515069 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:42:42 compute-0 podman[417515]: 2025-10-02 19:42:42.755850131 +0000 UTC m=+0.166257979 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:42:43 compute-0 nova_compute[355794]: 2025-10-02 19:42:43.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:43 compute-0 nova_compute[355794]: 2025-10-02 19:42:43.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:43 compute-0 ceph-mon[191910]: pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:42:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.599 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.644 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.645 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.646 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.646 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:42:44 compute-0 nova_compute[355794]: 2025-10-02 19:42:44.647 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
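This is the resource tracker shelling out to ceph to size the RBD-backed disk pool. The same command can be reproduced directly; its stats block is what feeds the free_disk figure logged a few lines later:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])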
Oct 02 19:42:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:42:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210388377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.198 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:45 compute-0 ceph-mon[191910]: pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3210388377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.838 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.840 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4561MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.840 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.841 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.921 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.921 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:42:45 compute-0 nova_compute[355794]: 2025-10-02 19:42:45.938 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:42:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526115872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:42:46 compute-0 nova_compute[355794]: 2025-10-02 19:42:46.411 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:46 compute-0 nova_compute[355794]: 2025-10-02 19:42:46.420 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:46 compute-0 nova_compute[355794]: 2025-10-02 19:42:46.442 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
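For the inventory above, placement exposes (total - reserved) * allocation_ratio consumable units per resource class, so this node schedules as 32 VCPU, 7167 MB of RAM, and about 53 GB of disk:

    # Worked directly from the inventory data in the log line above.
    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (59, 0, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)   # 32.0, 7167.0, 53.1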
Oct 02 19:42:46 compute-0 nova_compute[355794]: 2025-10-02 19:42:46.447 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:42:46 compute-0 nova_compute[355794]: 2025-10-02 19:42:46.448 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1526115872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:42:47 compute-0 nova_compute[355794]: 2025-10-02 19:42:47.424 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:47 compute-0 podman[417602]: 2025-10-02 19:42:47.703289302 +0000 UTC m=+0.129794865 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Oct 02 19:42:47 compute-0 ceph-mon[191910]: pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:47 compute-0 podman[417621]: 2025-10-02 19:42:47.882007345 +0000 UTC m=+0.113951706 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:42:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:49 compute-0 ceph-mon[191910]: pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:51 compute-0 podman[417643]: 2025-10-02 19:42:51.683017876 +0000 UTC m=+0.099787354 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:42:51 compute-0 podman[417644]: 2025-10-02 19:42:51.702894663 +0000 UTC m=+0.123396122 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:42:51 compute-0 podman[417642]: 2025-10-02 19:42:51.705580125 +0000 UTC m=+0.124643515 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:42:51 compute-0 ceph-mon[191910]: pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:53 compute-0 ceph-mon[191910]: pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:42:55 compute-0 podman[417706]: 2025-10-02 19:42:55.675010048 +0000 UTC m=+0.103024692 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git)
Oct 02 19:42:55 compute-0 podman[417707]: 2025-10-02 19:42:55.685557432 +0000 UTC m=+0.112031994 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:42:55 compute-0 ceph-mon[191910]: pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:57 compute-0 ceph-mon[191910]: pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:42:59 compute-0 podman[157186]: time="2025-10-02T19:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:42:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:42:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8535 "" "Go-http-client/1.1"
Oct 02 19:42:59 compute-0 ceph-mon[191910]: pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:01 compute-0 openstack_network_exporter[372736]: ERROR   19:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:43:01 compute-0 openstack_network_exporter[372736]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:01 compute-0 openstack_network_exporter[372736]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:43:01 compute-0 openstack_network_exporter[372736]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:01 compute-0 openstack_network_exporter[372736]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:43:02 compute-0 ceph-mon[191910]: pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:43:03
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
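A balancer pass in upmap mode that finds nothing to move ("prepared 0/10 changes") is the idle steady state of an already evenly distributed cluster. Its mode and last optimization can be inspected with:

    import subprocess

    subprocess.run(["ceph", "balancer", "status"])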
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
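The rbd_support module reloads its trash-purge and mirror-snapshot schedule handlers per RBD pool, and the bare start_after= values indicate no schedules are configured. Assuming the standard rbd CLI is available, the per-pool schedules can be listed like so (pool names from the log):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "-p", pool])
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "-p", pool])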
Oct 02 19:43:04 compute-0 ceph-mon[191910]: pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.294 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.294 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
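The message above only says that the roughly two dozen pollsters in this source outnumber the single worker thread, so the polling task runs fully serialized; it is harmless unless a cycle overruns the polling interval. The thread count is an oslo.config option under [polling] in ceilometer.conf; the option name below is a best-guess spelling and should be verified against the installed release before use:

    # ceilometer.conf -- hedged example; verify the option name for your release
    [polling]
    threads_to_process_pollsters = 4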
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
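The burst of near-identical lines above is the manager registering every pollster loaded as a stevedore extension and submitting each to a shared ThreadPoolExecutor (only the extension objects' addresses differ). A minimal, runnable sketch of that pattern, assuming stevedore is installed; ceilometer.poll.compute is the real entry-point namespace, and the worker count mirrors the [1] in the log:

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load pollster plugins the way ceilometer does: as setuptools entry
    # points under a stevedore namespace (not instantiated here).
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")

    # One worker, matching "Processing pollsters ... with [1] threads".
    executor = ThreadPoolExecutor(max_workers=1)
    names = [executor.submit(lambda e=e: e.name).result() for e in mgr]
    print(sorted(names))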
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
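Every pollster above runs its local_instances discovery and is then skipped, because this compute node currently hosts no instances: discovery asks the hypervisor which domains run here and gets an empty list. A sketch of what that check amounts to, using libvirt-python directly (ceilometer goes through its nova/libvirt inspectors, not this exact call):

    import libvirt  # pip install libvirt-python

    # Connect to the local system hypervisor, as a compute agent would.
    conn = libvirt.open("qemu:///system")
    try:
        domains = conn.listAllDomains()
        if not domains:
            # Matches the "no  resources found this cycle" DEBUG lines above.
            print("no resources found this cycle")
    finally:
        conn.close()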
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:43:04.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
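The _set_new_cache_sizes line shows the monitor re-splitting its memory budget (about 1.02 GB here) between the rocksdb KV cache and the incremental/full osdmap caches; the budget derives from the monitor's memory target. Standard commands to inspect or adjust it, wrapped for consistency with the other sketches:

    import subprocess

    # Read the current monitor memory target (bytes).
    subprocess.run(["ceph", "config", "get", "mon", "mon_memory_target"])

    # Example of raising it; the mon re-splits its cache allocations on the fly.
    # subprocess.run(["ceph", "config", "set", "mon", "mon_memory_target", "2147483648"])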
Oct 02 19:43:05 compute-0 sudo[417752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:05 compute-0 sudo[417752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:05 compute-0 sudo[417752]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:05 compute-0 sudo[417777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:43:05 compute-0 sudo[417777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:05 compute-0 sudo[417777]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:05 compute-0 sudo[417802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:05 compute-0 sudo[417802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:05 compute-0 sudo[417802]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:05 compute-0 sudo[417827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:43:05 compute-0 sudo[417827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:06 compute-0 ceph-mon[191910]: pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:06 compute-0 sudo[417827]: pam_unix(sudo:session): session closed for user root
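The sudo quartet above is the cephadm orchestrator's standard SSH pattern: /bin/true as a connectivity and sudo check, which python3 to locate an interpreter, then the per-cluster copy of the cephadm binary running gather-facts to collect host inventory. The same call can be reproduced by hand (path copied verbatim from the log):

    import subprocess

    subprocess.run([
        "sudo", "/bin/python3",
        "/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--timeout", "895", "gather-facts",
    ])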
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:06 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 31925410-aaea-4283-af03-f4de947aded9 does not exist
Oct 02 19:43:06 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8edb6bd3-4538-4479-b993-c9f0d4ed5235 does not exist
Oct 02 19:43:06 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 53a63e96-a956-495d-aa66-ad5e40203619 does not exist
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:43:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:43:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
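This run of mon_commands is the mgr/cephadm module's preamble to deploying OSDs on this host: drop the per-host osd_memory_target override, render a minimal ceph.conf, fetch the admin and bootstrap-osd keyrings, persist its OSD-removal queue, and list destroyed OSDs whose IDs could be reused. The three progress WARNINGs in between are the module completing events it no longer tracks and are benign. Hedged CLI equivalents of the dispatched commands:

    import subprocess

    subprocess.run(["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"])
    subprocess.run(["ceph", "config", "generate-minimal-conf"])
    subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"])
    subprocess.run(["ceph", "osd", "tree", "destroyed", "--format", "json"])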
Oct 02 19:43:06 compute-0 sudo[417882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:06 compute-0 sudo[417882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:06 compute-0 sudo[417882]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:06 compute-0 sudo[417913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:43:06 compute-0 sudo[417913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:06 compute-0 sudo[417913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:06 compute-0 podman[417906]: 2025-10-02 19:43:06.649654832 +0000 UTC m=+0.148661984 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 02 19:43:06 compute-0 sudo[417950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:06 compute-0 sudo[417950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:06 compute-0 sudo[417950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:06 compute-0 sudo[417975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:43:06 compute-0 sudo[417975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
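Here cephadm hands off to ceph-volume for OSD creation: lvm batch consumes the three pre-created logical volumes, --no-auto disables batch's automatic fast/slow device sorting (the LVs are taken exactly as given), --yes skips the interactive prompt, --no-systemd suppresses unit creation because cephadm manages the daemons as containers, and --config-json - feeds the conf and keyring on stdin. The equivalent standalone call (LV paths copied from the log):

    import subprocess

    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",
    ])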
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:43:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.463342973 +0000 UTC m=+0.081719377 container create 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.436109918 +0000 UTC m=+0.054486292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:07 compute-0 systemd[1]: Started libpod-conmon-1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe.scope.
Oct 02 19:43:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.600757852 +0000 UTC m=+0.219134306 container init 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.617811052 +0000 UTC m=+0.236187456 container start 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.624942095 +0000 UTC m=+0.243318509 container attach 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:43:07 compute-0 elastic_hypatia[418052]: 167 167
Oct 02 19:43:07 compute-0 systemd[1]: libpod-1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe.scope: Deactivated successfully.
Oct 02 19:43:07 compute-0 conmon[418052]: conmon 1de0f3fa46fdd0d2a0c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe.scope/container/memory.events
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.631489981 +0000 UTC m=+0.249866345 container died 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-03951c3ac12d1c4b6f0552ca8f844a4313944dcdf92bba75c75475e45cff97a4-merged.mount: Deactivated successfully.
Oct 02 19:43:07 compute-0 podman[418038]: 2025-10-02 19:43:07.716204268 +0000 UTC m=+0.334580672 container remove 1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 19:43:07 compute-0 systemd[1]: libpod-conmon-1de0f3fa46fdd0d2a0c6239cb33a49d2729253a90b680d38db16acdc52c3aafe.scope: Deactivated successfully.
Oct 02 19:43:08 compute-0 podman[418075]: 2025-10-02 19:43:08.021104007 +0000 UTC m=+0.106994839 container create f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:43:08 compute-0 podman[418075]: 2025-10-02 19:43:07.984203251 +0000 UTC m=+0.070094143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:08 compute-0 systemd[1]: Started libpod-conmon-f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578.scope.
Oct 02 19:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:08 compute-0 ceph-mon[191910]: pgmap v1133: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:08 compute-0 podman[418075]: 2025-10-02 19:43:08.195591866 +0000 UTC m=+0.281482698 container init f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 19:43:08 compute-0 podman[418075]: 2025-10-02 19:43:08.213925901 +0000 UTC m=+0.299816703 container start f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:43:08 compute-0 podman[418075]: 2025-10-02 19:43:08.21980222 +0000 UTC m=+0.305693022 container attach f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:43:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:09 compute-0 compassionate_lumiere[418091]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:43:09 compute-0 compassionate_lumiere[418091]: --> relative data size: 1.0
Oct 02 19:43:09 compute-0 compassionate_lumiere[418091]: --> All data devices are unavailable
Oct 02 19:43:09 compute-0 systemd[1]: libpod-f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578.scope: Deactivated successfully.
Oct 02 19:43:09 compute-0 podman[418075]: 2025-10-02 19:43:09.527550616 +0000 UTC m=+1.613441438 container died f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:43:09 compute-0 systemd[1]: libpod-f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578.scope: Consumed 1.271s CPU time.
Oct 02 19:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-39b72ec72a3311483b0abd680eb20a4781f4f2ef7ca33550cb4678c391072331-merged.mount: Deactivated successfully.
Oct 02 19:43:09 compute-0 podman[418075]: 2025-10-02 19:43:09.61847083 +0000 UTC m=+1.704361622 container remove f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:43:09 compute-0 systemd[1]: libpod-conmon-f9a5e3dae60bf8817365f4924011cfc72b620b724f64dea2a40eb3ad76066578.scope: Deactivated successfully.
Oct 02 19:43:09 compute-0 sudo[417975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:09 compute-0 sudo[418133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:09 compute-0 sudo[418133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:09 compute-0 sudo[418133]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:09 compute-0 sudo[418158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:43:09 compute-0 sudo[418158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:09 compute-0 sudo[418158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:10 compute-0 sudo[418183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:10 compute-0 sudo[418183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:10 compute-0 sudo[418183]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:10 compute-0 ceph-mon[191910]: pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:10 compute-0 sudo[418208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:43:10 compute-0 sudo[418208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.80635046 +0000 UTC m=+0.079742563 container create bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.774443599 +0000 UTC m=+0.047835782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:10 compute-0 systemd[1]: Started libpod-conmon-bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931.scope.
Oct 02 19:43:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.955089274 +0000 UTC m=+0.228481387 container init bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.965062293 +0000 UTC m=+0.238454406 container start bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.970624224 +0000 UTC m=+0.244016337 container attach bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 02 19:43:10 compute-0 wizardly_blackburn[418286]: 167 167
Oct 02 19:43:10 compute-0 systemd[1]: libpod-bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931.scope: Deactivated successfully.
Oct 02 19:43:10 compute-0 podman[418270]: 2025-10-02 19:43:10.977582711 +0000 UTC m=+0.250974804 container died bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-72be02b4050681f64da2d9e2f56a03a1a508e6cfd8b271731441d7940ddb0334-merged.mount: Deactivated successfully.
Oct 02 19:43:11 compute-0 podman[418270]: 2025-10-02 19:43:11.057222961 +0000 UTC m=+0.330615084 container remove bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:43:11 compute-0 systemd[1]: libpod-conmon-bec3c794ec6f0e2cb8108bed6c231534552ab73ce2e0802473fc866e4e860931.scope: Deactivated successfully.
Oct 02 19:43:11 compute-0 podman[418308]: 2025-10-02 19:43:11.345560873 +0000 UTC m=+0.094103131 container create ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:43:11 compute-0 podman[418308]: 2025-10-02 19:43:11.313351554 +0000 UTC m=+0.061893782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:11 compute-0 systemd[1]: Started libpod-conmon-ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c.scope.
Oct 02 19:43:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09d38597b78eb8e5835af35f065c538d5fe22d603cc90bdb718512a7c59bd2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09d38597b78eb8e5835af35f065c538d5fe22d603cc90bdb718512a7c59bd2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09d38597b78eb8e5835af35f065c538d5fe22d603cc90bdb718512a7c59bd2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09d38597b78eb8e5835af35f065c538d5fe22d603cc90bdb718512a7c59bd2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:11 compute-0 podman[418308]: 2025-10-02 19:43:11.481186344 +0000 UTC m=+0.229728562 container init ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:43:11 compute-0 podman[418308]: 2025-10-02 19:43:11.504512653 +0000 UTC m=+0.253054871 container start ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 02 19:43:11 compute-0 podman[418308]: 2025-10-02 19:43:11.510812053 +0000 UTC m=+0.259354461 container attach ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:12 compute-0 ceph-mon[191910]: pgmap v1135: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]: {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     "0": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "devices": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "/dev/loop3"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             ],
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_name": "ceph_lv0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_size": "21470642176",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "name": "ceph_lv0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "tags": {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_name": "ceph",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.crush_device_class": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.encrypted": "0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_id": "0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.vdo": "0"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             },
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "vg_name": "ceph_vg0"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         }
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     ],
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     "1": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "devices": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "/dev/loop4"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             ],
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_name": "ceph_lv1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_size": "21470642176",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "name": "ceph_lv1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "tags": {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_name": "ceph",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.crush_device_class": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.encrypted": "0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_id": "1",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.vdo": "0"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             },
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "vg_name": "ceph_vg1"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         }
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     ],
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     "2": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "devices": [
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "/dev/loop5"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             ],
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_name": "ceph_lv2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_size": "21470642176",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "name": "ceph_lv2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "tags": {
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.cluster_name": "ceph",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.crush_device_class": "",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.encrypted": "0",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osd_id": "2",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:                 "ceph.vdo": "0"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             },
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "type": "block",
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:             "vg_name": "ceph_vg2"
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:         }
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]:     ]
Oct 02 19:43:12 compute-0 adoring_pasteur[418326]: }
Oct 02 19:43:12 compute-0 systemd[1]: libpod-ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c.scope: Deactivated successfully.
Oct 02 19:43:12 compute-0 podman[418308]: 2025-10-02 19:43:12.413836236 +0000 UTC m=+1.162378524 container died ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b09d38597b78eb8e5835af35f065c538d5fe22d603cc90bdb718512a7c59bd2b-merged.mount: Deactivated successfully.
Oct 02 19:43:12 compute-0 podman[418308]: 2025-10-02 19:43:12.507019391 +0000 UTC m=+1.255561619 container remove ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:43:12 compute-0 systemd[1]: libpod-conmon-ce9613283c0b58c2047820bfeab76927562e0c9d614d66866bf6c67b7621cb8c.scope: Deactivated successfully.
Oct 02 19:43:12 compute-0 sudo[418208]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:12 compute-0 sudo[418348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:12 compute-0 sudo[418348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:12 compute-0 sudo[418348]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:43:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:43:12 compute-0 sudo[418373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:43:12 compute-0 sudo[418373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:12 compute-0 sudo[418373]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:12 compute-0 podman[418397]: 2025-10-02 19:43:12.924528759 +0000 UTC m=+0.089720782 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:43:12 compute-0 sudo[418411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:12 compute-0 sudo[418411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:12 compute-0 sudo[418411]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:12 compute-0 podman[418398]: 2025-10-02 19:43:12.956509692 +0000 UTC m=+0.123405011 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:43:13 compute-0 sudo[418464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:43:13 compute-0 sudo[418464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:13 compute-0 ceph-mon[191910]: pgmap v1136: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.573069922 +0000 UTC m=+0.080207525 container create 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.543272048 +0000 UTC m=+0.050409641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:13 compute-0 systemd[1]: Started libpod-conmon-479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c.scope.
Oct 02 19:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.722581178 +0000 UTC m=+0.229718781 container init 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.741494488 +0000 UTC m=+0.248632081 container start 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.748820526 +0000 UTC m=+0.255958099 container attach 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:43:13 compute-0 vibrant_dhawan[418543]: 167 167
Oct 02 19:43:13 compute-0 systemd[1]: libpod-479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c.scope: Deactivated successfully.
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.756170574 +0000 UTC m=+0.263308167 container died 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2a4531293556b7c028a53c74265c87bb5f16e7a567e444c526ab7a5bcad2c9-merged.mount: Deactivated successfully.
Oct 02 19:43:13 compute-0 podman[418527]: 2025-10-02 19:43:13.84380547 +0000 UTC m=+0.350943063 container remove 479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:43:13 compute-0 systemd[1]: libpod-conmon-479ef7eb39be5d415e6dd8b178d52e34081abb0d916a706f44de8d1055cc6e0c.scope: Deactivated successfully.
Oct 02 19:43:14 compute-0 podman[418565]: 2025-10-02 19:43:14.141644698 +0000 UTC m=+0.096926947 container create 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:43:14 compute-0 podman[418565]: 2025-10-02 19:43:14.109054019 +0000 UTC m=+0.064336358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:43:14 compute-0 systemd[1]: Started libpod-conmon-4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f.scope.
Oct 02 19:43:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77c58235157ad784d88c7156d0d69f4f4d12e25bf1726132ff72615126d64ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77c58235157ad784d88c7156d0d69f4f4d12e25bf1726132ff72615126d64ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77c58235157ad784d88c7156d0d69f4f4d12e25bf1726132ff72615126d64ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77c58235157ad784d88c7156d0d69f4f4d12e25bf1726132ff72615126d64ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:14 compute-0 podman[418565]: 2025-10-02 19:43:14.336986391 +0000 UTC m=+0.292268730 container init 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:43:14 compute-0 podman[418565]: 2025-10-02 19:43:14.346715843 +0000 UTC m=+0.301998132 container start 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:43:14 compute-0 podman[418565]: 2025-10-02 19:43:14.353105386 +0000 UTC m=+0.308387725 container attach 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:43:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:15 compute-0 naughty_fermi[418580]: {
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_id": 1,
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "type": "bluestore"
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     },
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_id": 2,
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "type": "bluestore"
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     },
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_id": 0,
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:         "type": "bluestore"
Oct 02 19:43:15 compute-0 naughty_fermi[418580]:     }
Oct 02 19:43:15 compute-0 naughty_fermi[418580]: }
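Note: the JSON block printed by the short-lived naughty_fermi container is an OSD inventory in the shape ceph-volume emits with --format json: one entry per osd_uuid, each carrying the ceph_fsid, the backing device-mapper path, the osd_id, and the objectstore type. A minimal sketch of consuming such a report; the parse_osd_report name and the hard-coded sample are illustrative, not part of the log:

    import json

    def parse_osd_report(text):
        """Map osd_id -> backing device for the bluestore OSDs in a
        ceph-volume style JSON report (as logged above)."""
        report = json.loads(text)
        return {e["osd_id"]: e["device"]
                for e in report.values() if e.get("type") == "bluestore"}

    sample = (
        '{"82844b2c-c78f-4ec2-a159-b058e47d1cbd":'
        ' {"ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",'
        ' "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1,'
        ' "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",'
        ' "type": "bluestore"}}'
    )
    print(parse_osd_report(sample))  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}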
Oct 02 19:43:15 compute-0 systemd[1]: libpod-4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f.scope: Deactivated successfully.
Oct 02 19:43:15 compute-0 systemd[1]: libpod-4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f.scope: Consumed 1.222s CPU time.
Oct 02 19:43:15 compute-0 podman[418613]: 2025-10-02 19:43:15.649484015 +0000 UTC m=+0.052609741 container died 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:15 compute-0 ceph-mon[191910]: pgmap v1137: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c77c58235157ad784d88c7156d0d69f4f4d12e25bf1726132ff72615126d64ca-merged.mount: Deactivated successfully.
Oct 02 19:43:15 compute-0 podman[418613]: 2025-10-02 19:43:15.738731084 +0000 UTC m=+0.141856800 container remove 4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:43:15 compute-0 systemd[1]: libpod-conmon-4e6eaf9f2cedfbd417727ede797813a6bb2eae9ec8ba5145969562e403fa272f.scope: Deactivated successfully.
Oct 02 19:43:15 compute-0 sudo[418464]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:43:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:43:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 47c39f8f-5225-4159-a1b3-62d22be504c5 does not exist
Oct 02 19:43:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3511d352-8dc1-40aa-8407-777a43f0d37d does not exist
Oct 02 19:43:15 compute-0 sudo[418628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:43:15 compute-0 sudo[418628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:15 compute-0 sudo[418628]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:16 compute-0 sudo[418653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:43:16 compute-0 sudo[418653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:43:16 compute-0 sudo[418653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:43:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:43:17 compute-0 ceph-mon[191910]: pgmap v1138: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:18 compute-0 podman[418679]: 2025-10-02 19:43:18.694169778 +0000 UTC m=+0.109240959 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Oct 02 19:43:18 compute-0 podman[418678]: 2025-10-02 19:43:18.706361517 +0000 UTC m=+0.122694872 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:43:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 02 19:43:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 02 19:43:18 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 02 19:43:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 02 19:43:19 compute-0 ceph-mon[191910]: pgmap v1139: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:19 compute-0 ceph-mon[191910]: osdmap e122: 3 total, 3 up, 3 in
Oct 02 19:43:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 02 19:43:19 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 02 19:43:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:43:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774791524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:43:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:43:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774791524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
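Note: the df and osd pool get-quota commands audited above arrive from client.openstack at 192.168.122.10 as JSON mon_commands, i.e. over librados rather than the ceph CLI. A sketch of issuing the same request with python-rados, assuming this host can read /etc/ceph/ceph.conf and a client.openstack keyring:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    # Same payload the monitor logs as mon_command({"prefix":"df", ...})
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    if ret == 0:
        print(json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()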
Oct 02 19:43:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Oct 02 19:43:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 02 19:43:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 02 19:43:20 compute-0 ceph-mon[191910]: osdmap e123: 3 total, 3 up, 3 in
Oct 02 19:43:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1774791524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:43:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1774791524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:43:20 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 02 19:43:21 compute-0 ceph-mon[191910]: pgmap v1142: 321 pgs: 321 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Oct 02 19:43:21 compute-0 ceph-mon[191910]: osdmap e124: 3 total, 3 up, 3 in
Oct 02 19:43:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 02 19:43:22 compute-0 podman[418712]: 2025-10-02 19:43:22.641705133 +0000 UTC m=+0.069457105 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:43:22 compute-0 podman[418713]: 2025-10-02 19:43:22.672160415 +0000 UTC m=+0.099618259 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 19:43:22 compute-0 podman[418714]: 2025-10-02 19:43:22.71976022 +0000 UTC m=+0.140530214 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
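Note: each health_status=healthy event above is podman running the healthcheck configured for that container (the 'test' command under 'healthcheck' in config_data, against the script mounted from /var/lib/openstack/healthchecks/<name>). The recorded verdict can also be read back out of band; a sketch using one container name taken from the log:

    import subprocess

    # Read the health state podman stores after each healthcheck run.
    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        text=True).strip()
    print(status)  # "healthy", matching the events logged above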
Oct 02 19:43:23 compute-0 ceph-mon[191910]: pgmap v1144: 321 pgs: 321 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 02 19:43:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 8.4 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Oct 02 19:43:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 02 19:43:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 02 19:43:25 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 02 19:43:26 compute-0 ceph-mon[191910]: pgmap v1145: 321 pgs: 321 active+clean; 8.4 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Oct 02 19:43:26 compute-0 ceph-mon[191910]: osdmap e125: 3 total, 3 up, 3 in
Oct 02 19:43:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.3 MiB/s wr, 16 op/s
Oct 02 19:43:26 compute-0 podman[418774]: 2025-10-02 19:43:26.701049715 +0000 UTC m=+0.117402189 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:43:26 compute-0 podman[418773]: 2025-10-02 19:43:26.716980325 +0000 UTC m=+0.138462918 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.tags=minimal rhel9)
Oct 02 19:43:28 compute-0 ceph-mon[191910]: pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.3 MiB/s wr, 16 op/s
Oct 02 19:43:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Oct 02 19:43:29 compute-0 podman[157186]: time="2025-10-02T19:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:43:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:43:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8540 "" "Go-http-client/1.1"
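Note: the two GET requests served by podman[157186] are libpod REST API calls over the podman socket; the podman_exporter container later in the log runs with CONTAINER_HOST=unix:///run/podman/podman.sock, which fits the polling pattern seen here. A standard-library sketch of the same containers/json query; the UnixHTTPConnection wrapper is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix domain socket (the libpod API endpoint)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])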
Oct 02 19:43:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.105894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210105945, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1186, "num_deletes": 505, "total_data_size": 1263812, "memory_usage": 1287568, "flush_reason": "Manual Compaction"}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210115829, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1001024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22956, "largest_seqno": 24141, "table_properties": {"data_size": 996172, "index_size": 1864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14241, "raw_average_key_size": 18, "raw_value_size": 984086, "raw_average_value_size": 1305, "num_data_blocks": 83, "num_entries": 754, "num_filter_entries": 754, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434130, "oldest_key_time": 1759434130, "file_creation_time": 1759434210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10044 microseconds, and 5412 cpu microseconds.
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.115933) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1001024 bytes OK
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.115971) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.118569) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.118587) EVENT_LOG_v1 {"time_micros": 1759434210118580, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.118615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1257285, prev total WAL file size 1257285, number of live WAL files 2.
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.119735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(977KB)], [53(8999KB)]
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210119790, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10216497, "oldest_snapshot_seqno": -1}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4443 keys, 7083879 bytes, temperature: kUnknown
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210197891, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7083879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7054454, "index_size": 17219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111295, "raw_average_key_size": 25, "raw_value_size": 6974128, "raw_average_value_size": 1569, "num_data_blocks": 718, "num_entries": 4443, "num_filter_entries": 4443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.198217) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7083879 bytes
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.200474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.6 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.8 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(17.3) write-amplify(7.1) OK, records in: 5451, records dropped: 1008 output_compression: NoCompression
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.200499) EVENT_LOG_v1 {"time_micros": 1759434210200488, "job": 28, "event": "compaction_finished", "compaction_time_micros": 78219, "compaction_time_cpu_micros": 39049, "output_level": 6, "num_output_files": 1, "total_output_size": 7083879, "num_input_records": 5451, "num_output_records": 4443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210200897, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434210202944, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.119500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.203129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.203135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.203138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.203140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:43:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:43:30.203143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
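Note: the compaction summary is internally consistent with the EVENT_LOG lines: job 28 read the 1001024-byte L0 flush (table #55) plus the existing L6 file (input_data_size 10216497 in total) and wrote one 7083879-byte L6 table (#56), which yields the logged amplification factors:

    flushed = 1_001_024      # table #55: the L0 input from the memtable flush
    read_total = 10_216_497  # job 28 input_data_size (L0 + L6 files)
    written = 7_083_879      # table #56: the compacted L6 output

    print(round(written / flushed, 1))                 # 7.1  -> write-amplify(7.1)
    print(round((read_total + written) / flushed, 1))  # 17.3 -> read-write-amplify(17.3)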
Oct 02 19:43:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:30.274 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:43:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:30.276 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:43:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:30.277 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Oct 02 19:43:31 compute-0 openstack_network_exporter[372736]: ERROR   19:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:43:31 compute-0 openstack_network_exporter[372736]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:31 compute-0 openstack_network_exporter[372736]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:31 compute-0 openstack_network_exporter[372736]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:43:31 compute-0 openstack_network_exporter[372736]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
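Note: these appctl errors are expected on a compute node: openstack-network-exporter probes ovsdb-server and ovn-northd through their control sockets, and ovn-northd only runs on the control plane, so no ovn-northd control socket exists here (the /var/run/ovn path below is the usual OVN location, stated as an assumption). A trivial check:

    import glob

    # No ovn-northd on this host -> no control socket -> the errors above.
    print(glob.glob("/var/run/ovn/ovn-northd.*.ctl"))  # []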
Oct 02 19:43:32 compute-0 ceph-mon[191910]: pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Oct 02 19:43:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:32.293 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:32.294 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:43:32.295 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:43:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:43:34 compute-0 ceph-mon[191910]: pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Oct 02 19:43:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 772 KiB/s wr, 0 op/s
Oct 02 19:43:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:36 compute-0 ceph-mon[191910]: pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 772 KiB/s wr, 0 op/s
Oct 02 19:43:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 88 B/s rd, 671 KiB/s wr, 0 op/s
Oct 02 19:43:37 compute-0 podman[418814]: 2025-10-02 19:43:37.696232822 +0000 UTC m=+0.121053488 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:38 compute-0 ceph-mon[191910]: pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 88 B/s rd, 671 KiB/s wr, 0 op/s
Oct 02 19:43:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:39 compute-0 ceph-mon[191910]: pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:40 compute-0 nova_compute[355794]: 2025-10-02 19:43:40.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:41 compute-0 nova_compute[355794]: 2025-10-02 19:43:41.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:41 compute-0 nova_compute[355794]: 2025-10-02 19:43:41.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:43:41 compute-0 ceph-mon[191910]: pgmap v1154: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:42 compute-0 nova_compute[355794]: 2025-10-02 19:43:42.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:43 compute-0 nova_compute[355794]: 2025-10-02 19:43:43.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:43 compute-0 ceph-mon[191910]: pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:43 compute-0 podman[418835]: 2025-10-02 19:43:43.71027106 +0000 UTC m=+0.126173447 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930)
Oct 02 19:43:43 compute-0 podman[418834]: 2025-10-02 19:43:43.715240304 +0000 UTC m=+0.135554660 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.598 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:43:44 compute-0 nova_compute[355794]: 2025-10-02 19:43:44.598 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.622 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.623 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.624 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.624 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:43:45 compute-0 nova_compute[355794]: 2025-10-02 19:43:45.625 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:45 compute-0 ceph-mon[191910]: pgmap v1156: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:43:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169089283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.146 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
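Note: this is nova-compute's storage audit shelling out to the ceph CLI, and the stats section of the df JSON is what becomes the free_disk figure reported a few lines below. A sketch of the same probe under the standard ceph df JSON schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # Divides out to roughly the free_disk=59.98828125GB nova reports below.
    print(stats["total_avail_bytes"] / 2**30)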
Oct 02 19:43:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4169089283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.763 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.766 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.767 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.767 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.846 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.846 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:43:46 compute-0 nova_compute[355794]: 2025-10-02 19:43:46.868 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:43:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160851185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:43:47 compute-0 nova_compute[355794]: 2025-10-02 19:43:47.359 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:47 compute-0 nova_compute[355794]: 2025-10-02 19:43:47.374 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:43:47 compute-0 nova_compute[355794]: 2025-10-02 19:43:47.398 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:43:47 compute-0 nova_compute[355794]: 2025-10-02 19:43:47.401 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:43:47 compute-0 nova_compute[355794]: 2025-10-02 19:43:47.402 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:47 compute-0 ceph-mon[191910]: pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/160851185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:43:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:49 compute-0 podman[418923]: 2025-10-02 19:43:49.749485975 +0000 UTC m=+0.166129415 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:43:49 compute-0 podman[418924]: 2025-10-02 19:43:49.751360685 +0000 UTC m=+0.164921642 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, release-0.7.12=, config_id=edpm, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:43:49 compute-0 ceph-mon[191910]: pgmap v1158: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:51 compute-0 nova_compute[355794]: 2025-10-02 19:43:51.400 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:51 compute-0 ceph-mon[191910]: pgmap v1159: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:53 compute-0 podman[418962]: 2025-10-02 19:43:53.676110783 +0000 UTC m=+0.096129945 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 19:43:53 compute-0 podman[418961]: 2025-10-02 19:43:53.687816739 +0000 UTC m=+0.114119621 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:43:53 compute-0 podman[418963]: 2025-10-02 19:43:53.736592906 +0000 UTC m=+0.140703849 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 19:43:53 compute-0 ceph-mon[191910]: pgmap v1160: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:43:55 compute-0 ceph-mon[191910]: pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:57 compute-0 podman[419022]: 2025-10-02 19:43:57.688131975 +0000 UTC m=+0.116173836 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal)
Oct 02 19:43:57 compute-0 podman[419023]: 2025-10-02 19:43:57.719503652 +0000 UTC m=+0.134872001 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:43:57 compute-0 ceph-mon[191910]: pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:43:59 compute-0 podman[157186]: time="2025-10-02T19:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:43:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:43:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8541 "" "Go-http-client/1.1"
Oct 02 19:43:59 compute-0 ceph-mon[191910]: pgmap v1163: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: ERROR   19:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: ERROR   19:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: ERROR   19:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:44:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:44:01 compute-0 ceph-mon[191910]: pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:44:03
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', '.mgr', 'backups']
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:03 compute-0 ceph-mon[191910]: pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:44:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:05 compute-0 ceph-mon[191910]: pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:07 compute-0 ceph-mon[191910]: pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:08 compute-0 podman[419062]: 2025-10-02 19:44:08.707992948 +0000 UTC m=+0.129488956 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:44:09 compute-0 ceph-mon[191910]: pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:11 compute-0 ceph-mon[191910]: pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:44:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:44:13 compute-0 ceph-mon[191910]: pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:14 compute-0 podman[419081]: 2025-10-02 19:44:14.724102391 +0000 UTC m=+0.139299181 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:44:14 compute-0 podman[419082]: 2025-10-02 19:44:14.750969046 +0000 UTC m=+0.164310996 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 02 19:44:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:16 compute-0 ceph-mon[191910]: pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:16 compute-0 sudo[419125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:16 compute-0 sudo[419125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:16 compute-0 sudo[419125]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:16 compute-0 sudo[419150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:44:16 compute-0 sudo[419150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:16 compute-0 sudo[419150]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:16 compute-0 sudo[419175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:16 compute-0 sudo[419175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:16 compute-0 sudo[419175]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:16 compute-0 sudo[419200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:44:16 compute-0 sudo[419200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:17 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:17.060 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:17 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:17.064 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:44:17 compute-0 sudo[419200]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 15fe2148-a808-445e-ba08-ab771ab91cfc does not exist
Oct 02 19:44:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 73a9aab9-5857-4186-a915-1e5000d34598 does not exist
Oct 02 19:44:17 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8eb54af6-6f72-4529-a7c0-9a4f8c4f9f96 does not exist
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:44:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:44:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:44:17 compute-0 sudo[419256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:17 compute-0 sudo[419256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:17 compute-0 sudo[419256]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:17 compute-0 sudo[419281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:44:17 compute-0 sudo[419281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:17 compute-0 sudo[419281]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:17 compute-0 sudo[419306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:17 compute-0 sudo[419306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:17 compute-0 sudo[419306]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:18 compute-0 ceph-mon[191910]: pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:44:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:44:18 compute-0 sudo[419331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:44:18 compute-0 sudo[419331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.702643071 +0000 UTC m=+0.095984492 container create f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.658264323 +0000 UTC m=+0.051605794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:18 compute-0 systemd[1]: Started libpod-conmon-f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886.scope.
Oct 02 19:44:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.877638534 +0000 UTC m=+0.270979995 container init f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.891824697 +0000 UTC m=+0.285166118 container start f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.898724543 +0000 UTC m=+0.292066024 container attach f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:44:18 compute-0 loving_beaver[419410]: 167 167
Oct 02 19:44:18 compute-0 systemd[1]: libpod-f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886.scope: Deactivated successfully.
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.90899471 +0000 UTC m=+0.302336121 container died f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cb96f16f1cd1e56ac794da3da82f9dd3f07b5378d3c6e5ad69c9b47ff27b837-merged.mount: Deactivated successfully.
Oct 02 19:44:18 compute-0 podman[419395]: 2025-10-02 19:44:18.995565047 +0000 UTC m=+0.388906478 container remove f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:44:19 compute-0 systemd[1]: libpod-conmon-f3c76e33b3d7fa9e5cb59508d347c68fcf7e76e29525dc566097a17847281886.scope: Deactivated successfully.
Oct 02 19:44:19 compute-0 podman[419434]: 2025-10-02 19:44:19.312667305 +0000 UTC m=+0.097378319 container create 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:44:19 compute-0 podman[419434]: 2025-10-02 19:44:19.277478736 +0000 UTC m=+0.062189790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:19 compute-0 systemd[1]: Started libpod-conmon-10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e.scope.
Oct 02 19:44:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:19 compute-0 podman[419434]: 2025-10-02 19:44:19.493702791 +0000 UTC m=+0.278413845 container init 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:44:19 compute-0 podman[419434]: 2025-10-02 19:44:19.51514014 +0000 UTC m=+0.299851144 container start 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:44:19 compute-0 podman[419434]: 2025-10-02 19:44:19.520986858 +0000 UTC m=+0.305697872 container attach 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:44:20 compute-0 ceph-mon[191910]: pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:44:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2677740741' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:44:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:44:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2677740741' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:44:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:20 compute-0 podman[419471]: 2025-10-02 19:44:20.719935167 +0000 UTC m=+0.134287965 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:44:20 compute-0 podman[419473]: 2025-10-02 19:44:20.73596597 +0000 UTC m=+0.143294318 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm)
Oct 02 19:44:20 compute-0 kind_taussig[419450]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:44:20 compute-0 kind_taussig[419450]: --> relative data size: 1.0
Oct 02 19:44:20 compute-0 kind_taussig[419450]: --> All data devices are unavailable
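[Editor's note] The three kind_taussig lines above are ceph-volume's drive-group evaluation: 3 LVM data devices were passed and all were reported unavailable. A plausible reading, grounded in the `lvm list` JSON emitted later in this log, is that each candidate LV already carries ceph.* tags (it already backs an existing OSD), so nothing new can be created on it. The sketch below illustrates that interpretation only; it is a guess, not cephadm's actual implementation.

    # Illustrative sketch only -- NOT cephadm's real logic.
    # Hypothesis: an LV is "unavailable" for new OSD creation when it
    # already carries a ceph.osd_id tag (see the `lvm list` JSON below).
    import json

    def unavailable(lv: dict) -> bool:
        """An LV already tagged with an osd_id already backs an OSD."""
        return lv["tags"].get("ceph.osd_id", "") != ""

    # "lvm_list.json" is a hypothetical capture of the container stdout.
    report = json.load(open("lvm_list.json"))
    for osd_id, lvs in report.items():
        for lv in lvs:
            state = "unavailable" if unavailable(lv) else "available"
            print(lv["lv_path"], state)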
Oct 02 19:44:20 compute-0 systemd[1]: libpod-10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e.scope: Deactivated successfully.
Oct 02 19:44:20 compute-0 systemd[1]: libpod-10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e.scope: Consumed 1.235s CPU time.
Oct 02 19:44:20 compute-0 conmon[419450]: conmon 10bac67ebfb4a3b44570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e.scope/container/memory.events
Oct 02 19:44:20 compute-0 podman[419513]: 2025-10-02 19:44:20.91008886 +0000 UTC m=+0.064285176 container died 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0269e1b31b5a2cf10ab5f2029ed4b8be1e40e06bb3da5e61e6cdd3e013dea02c-merged.mount: Deactivated successfully.
Oct 02 19:44:21 compute-0 podman[419513]: 2025-10-02 19:44:21.022614647 +0000 UTC m=+0.176810963 container remove 10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:44:21 compute-0 systemd[1]: libpod-conmon-10bac67ebfb4a3b44570141ccb79f8f89799e2afa9a9572750254911f581106e.scope: Deactivated successfully.
Oct 02 19:44:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2677740741' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:44:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2677740741' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:44:21 compute-0 sudo[419331]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:21 compute-0 sudo[419528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:21 compute-0 sudo[419528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:21 compute-0 sudo[419528]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:21 compute-0 sudo[419553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:44:21 compute-0 sudo[419553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:21 compute-0 sudo[419553]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:21 compute-0 sudo[419578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:21 compute-0 sudo[419578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:21 compute-0 sudo[419578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:21 compute-0 sudo[419603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:44:21 compute-0 sudo[419603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:22 compute-0 ceph-mon[191910]: pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.253314482 +0000 UTC m=+0.091741528 container create 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:44:22 compute-0 systemd[1]: Started libpod-conmon-064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c.scope.
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.218057501 +0000 UTC m=+0.056484586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.369601151 +0000 UTC m=+0.208028176 container init 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.38215547 +0000 UTC m=+0.220582475 container start 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.387585256 +0000 UTC m=+0.226012301 container attach 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:44:22 compute-0 distracted_kowalevski[419679]: 167 167
Oct 02 19:44:22 compute-0 systemd[1]: libpod-064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c.scope: Deactivated successfully.
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.394627016 +0000 UTC m=+0.233054021 container died 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-253b138bc755554e7790fb191b019929a690390ec4079b7dfaa32fe82739d502-merged.mount: Deactivated successfully.
Oct 02 19:44:22 compute-0 podman[419664]: 2025-10-02 19:44:22.450834844 +0000 UTC m=+0.289261849 container remove 064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:44:22 compute-0 systemd[1]: libpod-conmon-064207de4be087ead252cd342e634cb1082a70674ce7836c813f62a09640392c.scope: Deactivated successfully.
Oct 02 19:44:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:22 compute-0 podman[419700]: 2025-10-02 19:44:22.738698753 +0000 UTC m=+0.093064013 container create 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:44:22 compute-0 podman[419700]: 2025-10-02 19:44:22.70821875 +0000 UTC m=+0.062584050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:22 compute-0 systemd[1]: Started libpod-conmon-46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa.scope.
Oct 02 19:44:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a953b401069275285ad23d5a2318f21c713379ed5112ad107bec904336c93a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a953b401069275285ad23d5a2318f21c713379ed5112ad107bec904336c93a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a953b401069275285ad23d5a2318f21c713379ed5112ad107bec904336c93a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a953b401069275285ad23d5a2318f21c713379ed5112ad107bec904336c93a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:22 compute-0 podman[419700]: 2025-10-02 19:44:22.902217806 +0000 UTC m=+0.256583106 container init 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:44:22 compute-0 podman[419700]: 2025-10-02 19:44:22.937081687 +0000 UTC m=+0.291446947 container start 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:44:22 compute-0 podman[419700]: 2025-10-02 19:44:22.945772842 +0000 UTC m=+0.300138162 container attach 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:44:23 compute-0 epic_jang[419717]: {
Oct 02 19:44:23 compute-0 epic_jang[419717]:     "0": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:         {
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "devices": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "/dev/loop3"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             ],
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_name": "ceph_lv0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_size": "21470642176",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "name": "ceph_lv0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "tags": {
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_name": "ceph",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.crush_device_class": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.encrypted": "0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_id": "0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.vdo": "0"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             },
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "vg_name": "ceph_vg0"
Oct 02 19:44:23 compute-0 epic_jang[419717]:         }
Oct 02 19:44:23 compute-0 epic_jang[419717]:     ],
Oct 02 19:44:23 compute-0 epic_jang[419717]:     "1": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:         {
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "devices": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "/dev/loop4"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             ],
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_name": "ceph_lv1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_size": "21470642176",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "name": "ceph_lv1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "tags": {
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_name": "ceph",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.crush_device_class": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.encrypted": "0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_id": "1",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.vdo": "0"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             },
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "vg_name": "ceph_vg1"
Oct 02 19:44:23 compute-0 epic_jang[419717]:         }
Oct 02 19:44:23 compute-0 epic_jang[419717]:     ],
Oct 02 19:44:23 compute-0 epic_jang[419717]:     "2": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:         {
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "devices": [
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "/dev/loop5"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             ],
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_name": "ceph_lv2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_size": "21470642176",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "name": "ceph_lv2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "tags": {
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.cluster_name": "ceph",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.crush_device_class": "",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.encrypted": "0",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osd_id": "2",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:                 "ceph.vdo": "0"
Oct 02 19:44:23 compute-0 epic_jang[419717]:             },
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "type": "block",
Oct 02 19:44:23 compute-0 epic_jang[419717]:             "vg_name": "ceph_vg2"
Oct 02 19:44:23 compute-0 epic_jang[419717]:         }
Oct 02 19:44:23 compute-0 epic_jang[419717]:     ]
Oct 02 19:44:23 compute-0 epic_jang[419717]: }
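[Editor's note] The JSON block above is the output of the `ceph-volume ... lvm list --format json` command dispatched at 19:44:21: a map from OSD ID to the logical volumes backing it, with the ceph.* metadata duplicated between the flat lv_tags string and the parsed tags object. A minimal parsing sketch, assuming only the shape shown here ("lvm_list.json" is a hypothetical capture of this stdout):

    # Map OSD IDs to their backing LVs from the `lvm list` JSON above.
    import json

    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(pv={lv['devices'][0]}, "
                  f"fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")

Against the data above this prints three lines, e.g. "osd.0: /dev/ceph_vg0/ceph_lv0 (pv=/dev/loop3, ...)".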
Oct 02 19:44:23 compute-0 systemd[1]: libpod-46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa.scope: Deactivated successfully.
Oct 02 19:44:23 compute-0 podman[419700]: 2025-10-02 19:44:23.782173346 +0000 UTC m=+1.136538606 container died 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a953b401069275285ad23d5a2318f21c713379ed5112ad107bec904336c93a-merged.mount: Deactivated successfully.
Oct 02 19:44:23 compute-0 podman[419700]: 2025-10-02 19:44:23.878739973 +0000 UTC m=+1.233105213 container remove 46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:44:23 compute-0 systemd[1]: libpod-conmon-46a7229ccef329bc6bd9941cc37424352ccca7e90179fba24a5e974b25b07caa.scope: Deactivated successfully.
Oct 02 19:44:23 compute-0 sudo[419603]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:23 compute-0 podman[419727]: 2025-10-02 19:44:23.943552802 +0000 UTC m=+0.105081277 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:44:23 compute-0 podman[419733]: 2025-10-02 19:44:23.947858158 +0000 UTC m=+0.108512100 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 19:44:23 compute-0 podman[419735]: 2025-10-02 19:44:23.984287381 +0000 UTC m=+0.137737448 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:44:24 compute-0 sudo[419788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:24 compute-0 sudo[419788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:24 compute-0 sudo[419788]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:24 compute-0 ceph-mon[191910]: pgmap v1175: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:24 compute-0 sudo[419821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:44:24 compute-0 sudo[419821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:24 compute-0 sudo[419821]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:24 compute-0 sudo[419846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:24 compute-0 sudo[419846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:24 compute-0 sudo[419846]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:24 compute-0 sudo[419871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:44:24 compute-0 sudo[419871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:24 compute-0 podman[419933]: 2025-10-02 19:44:24.953834939 +0000 UTC m=+0.083980507 container create f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:24.917955671 +0000 UTC m=+0.048101289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:25 compute-0 systemd[1]: Started libpod-conmon-f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665.scope.
Oct 02 19:44:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:25.108960255 +0000 UTC m=+0.239105883 container init f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:44:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:25.126608302 +0000 UTC m=+0.256753870 container start f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:25.133475947 +0000 UTC m=+0.263621555 container attach f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:44:25 compute-0 affectionate_mccarthy[419949]: 167 167
Oct 02 19:44:25 compute-0 systemd[1]: libpod-f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665.scope: Deactivated successfully.
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:25.141107373 +0000 UTC m=+0.271252941 container died f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e33d850c0cb569624ebee7f23c854753edbea6b1910768ca926fd8c5080e208d-merged.mount: Deactivated successfully.
Oct 02 19:44:25 compute-0 podman[419933]: 2025-10-02 19:44:25.222707235 +0000 UTC m=+0.352852773 container remove f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:44:25 compute-0 systemd[1]: libpod-conmon-f5043cdb43e0f1614b24f652e46d2f2b7e545515d24634c37c1817e5a8dff665.scope: Deactivated successfully.
Oct 02 19:44:25 compute-0 podman[419971]: 2025-10-02 19:44:25.503329009 +0000 UTC m=+0.094205903 container create 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:44:25 compute-0 podman[419971]: 2025-10-02 19:44:25.47703662 +0000 UTC m=+0.067913594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:44:25 compute-0 systemd[1]: Started libpod-conmon-02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39.scope.
Oct 02 19:44:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8b6e3a21ae705c1af18c719a6e6a60b2f7d0dd0d03b8b58d548872d6094612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8b6e3a21ae705c1af18c719a6e6a60b2f7d0dd0d03b8b58d548872d6094612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8b6e3a21ae705c1af18c719a6e6a60b2f7d0dd0d03b8b58d548872d6094612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8b6e3a21ae705c1af18c719a6e6a60b2f7d0dd0d03b8b58d548872d6094612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:25 compute-0 podman[419971]: 2025-10-02 19:44:25.66822861 +0000 UTC m=+0.259105534 container init 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:44:25 compute-0 podman[419971]: 2025-10-02 19:44:25.685195448 +0000 UTC m=+0.276072332 container start 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:44:25 compute-0 podman[419971]: 2025-10-02 19:44:25.690065589 +0000 UTC m=+0.280942493 container attach 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:44:26 compute-0 ceph-mon[191910]: pgmap v1176: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:26 compute-0 peaceful_austin[419987]: {
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_id": 1,
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "type": "bluestore"
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     },
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_id": 2,
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "type": "bluestore"
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     },
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_id": 0,
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:         "type": "bluestore"
Oct 02 19:44:26 compute-0 peaceful_austin[419987]:     }
Oct 02 19:44:26 compute-0 peaceful_austin[419987]: }
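[Editor's note] This second JSON block comes from the `ceph-volume ... raw list --format json` command dispatched at 19:44:24. Unlike `lvm list`, it is keyed by osd_uuid rather than osd_id, and reports the device-mapper path plus the bluestore type. A quick cross-check sketch under the same assumptions as above (both file names are hypothetical captures of the two JSON blocks in this log):

    # Cross-check: each raw-list osd_uuid should match an LV's
    # ceph.osd_fsid tag from the lvm-list report.
    import json

    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))

    fsid_to_osd = {lv["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, entry in raw.items():
        # raw list reports osd_id as an int; lvm list keys are strings
        assert fsid_to_osd[osd_uuid] == str(entry["osd_id"])
        print(f"osd.{entry['osd_id']} ({entry['type']}): {entry['device']}")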
Oct 02 19:44:26 compute-0 systemd[1]: libpod-02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39.scope: Deactivated successfully.
Oct 02 19:44:26 compute-0 systemd[1]: libpod-02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39.scope: Consumed 1.218s CPU time.
Oct 02 19:44:26 compute-0 podman[419971]: 2025-10-02 19:44:26.901817414 +0000 UTC m=+1.492694328 container died 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d8b6e3a21ae705c1af18c719a6e6a60b2f7d0dd0d03b8b58d548872d6094612-merged.mount: Deactivated successfully.
Oct 02 19:44:27 compute-0 podman[419971]: 2025-10-02 19:44:27.003282313 +0000 UTC m=+1.594159217 container remove 02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_austin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:44:27 compute-0 systemd[1]: libpod-conmon-02b462810a576c51c9ab2f7a69608b37030f828765624c18cb4b296d36e03d39.scope: Deactivated successfully.
Oct 02 19:44:27 compute-0 sudo[419871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:44:27 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:27.067 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:44:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev afe35517-9d99-4892-b9da-cb1b679196c7 does not exist
Oct 02 19:44:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3533d980-a37d-46e4-8b39-64082937cb36 does not exist
Oct 02 19:44:27 compute-0 sudo[420032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:44:27 compute-0 sudo[420032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:27 compute-0 sudo[420032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:27 compute-0 sudo[420057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:44:27 compute-0 sudo[420057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:44:27 compute-0 sudo[420057]: pam_unix(sudo:session): session closed for user root
Oct 02 19:44:27 compute-0 nova_compute[355794]: 2025-10-02 19:44:27.796 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:27 compute-0 nova_compute[355794]: 2025-10-02 19:44:27.797 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:27 compute-0 nova_compute[355794]: 2025-10-02 19:44:27.865 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.068 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.070 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:28 compute-0 ceph-mon[191910]: pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.088 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.089 2 INFO nova.compute.claims [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.278 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:28 compute-0 podman[420103]: 2025-10-02 19:44:28.687068067 +0000 UTC m=+0.108180350 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:44:28 compute-0 podman[420102]: 2025-10-02 19:44:28.697812867 +0000 UTC m=+0.116916227 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal)
Oct 02 19:44:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:44:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614851256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.793 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
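
Nova's RBD image backend bases its capacity view on the `ceph df --format=json` call logged above. A sketch of the same query run standalone; the JSON key names (`stats.total_bytes`, `stats.total_avail_bytes`) follow recent Ceph releases and should be treated as assumptions:

    import json
    import subprocess

    # Same query nova just ran; needs the client.openstack keyring on this host.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]  # key names assumed, see lead-in
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes available")
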
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.803 2 DEBUG nova.compute.provider_tree [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.835 2 DEBUG nova.scheduler.client.report [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
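
The inventory dict in the line above is what the resource tracker reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch with the logged values:

    # Inventory reported for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0,
    # copied from the log line above (min_unit/max_unit/step_size omitted).
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }

    # Placement's usable capacity per resource class: (total - reserved) * ratio.
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1
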
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.934 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:28 compute-0 nova_compute[355794]: 2025-10-02 19:44:28.935 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.009 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.010 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.061 2 INFO nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:44:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3614851256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.125 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.287 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.291 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.292 2 INFO nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Creating image(s)
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.346 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.397 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.445 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.454 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:29 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.455 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:29 compute-0 podman[157186]: time="2025-10-02T19:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:44:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45033 "" "Go-http-client/1.1"
Oct 02 19:44:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8543 "" "Go-http-client/1.1"
Oct 02 19:44:30 compute-0 nova_compute[355794]: 2025-10-02 19:44:29.999 2 WARNING oslo_policy.policy [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 19:44:30 compute-0 ceph-mon[191910]: pgmap v1178: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:44:30 compute-0 nova_compute[355794]: 2025-10-02 19:44:30.112 2 DEBUG nova.virt.libvirt.imagebackend [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image locations are: [{'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/ce28338d-119e-49e1-ab67-60da8882593a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/ce28338d-119e-49e1-ab67-60da8882593a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 19:44:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 19:44:30 compute-0 nova_compute[355794]: 2025-10-02 19:44:30.819 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Successfully created port: 24e0cf3f-162d-4105-9362-fc5616a6815a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.396 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:31 compute-0 openstack_network_exporter[372736]: ERROR   19:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:44:31 compute-0 openstack_network_exporter[372736]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:31 compute-0 openstack_network_exporter[372736]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:44:31 compute-0 openstack_network_exporter[372736]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:31 compute-0 openstack_network_exporter[372736]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.517 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.part --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.519 2 DEBUG nova.virt.images [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] ce28338d-119e-49e1-ab67-60da8882593a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.521 2 DEBUG nova.privsep.utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.522 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.part /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.771 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.part /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.converted" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.779 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.863 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.865 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.916 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:31 compute-0 nova_compute[355794]: 2025-10-02 19:44:31.927 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
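
The preceding nova_compute lines trace the image-cache path: inspect the fetched .part file, convert qcow2 to raw with host caching disabled, inspect the result, then import the cache file into the vms pool. The same commands replayed with subprocess; paths and names are taken from the log, and this is a sketch of the flow rather than nova's actual code:

    import subprocess

    base = "/var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e"

    def run(cmd):
        # Roughly what oslo_concurrency.processutils.execute does: run, raise on failure.
        subprocess.run(cmd, check=True)

    # Inspect the downloaded image (nova additionally wraps this in prlimit).
    run(["qemu-img", "info", base + ".part", "--force-share", "--output=json"])
    # Convert qcow2 -> raw, bypassing the host page cache (-t none), as logged.
    run(["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
         base + ".part", base + ".converted"])
    # Nova renames .converted to the bare cache name, then imports it as an
    # RBD format-2 image named <instance uuid>_disk in the vms pool.
    run(["rbd", "import", "--pool", "vms", base,
         "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
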
Oct 02 19:44:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 02 19:44:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 02 19:44:32 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 02 19:44:32 compute-0 ceph-mon[191910]: pgmap v1179: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.247 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Successfully updated port: 24e0cf3f-162d-4105-9362-fc5616a6815a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.283 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.284 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.285 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:44:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:32.294 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:32.295 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:32.295 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.472 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:44:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.825 2 DEBUG nova.compute.manager [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Received event network-changed-24e0cf3f-162d-4105-9362-fc5616a6815a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.826 2 DEBUG nova.compute.manager [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Refreshing instance network info cache due to event network-changed-24e0cf3f-162d-4105-9362-fc5616a6815a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:32 compute-0 nova_compute[355794]: 2025-10-02 19:44:32.826 2 DEBUG oslo_concurrency.lockutils [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 02 19:44:33 compute-0 ceph-mon[191910]: osdmap e126: 3 total, 3 up, 3 in
Oct 02 19:44:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 02 19:44:33 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.180 2 DEBUG nova.network.neutron [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
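
The network_info blob above is nova's cached VIF model for port 24e0cf3f. A small sketch pulling out the fields a troubleshooter usually wants; the structure is copied from the log line, trimmed to the relevant keys:

    # VIF model cached for the instance (trimmed to the fields used below).
    network_info = [{
        "id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
        "address": "fa:16:3e:6b:e8:fe",
        "devname": "tap24e0cf3f-16",
        "network": {
            "label": "private",
            "subnets": [{"cidr": "192.168.0.0/24",
                         "ips": [{"address": "192.168.0.37"}]}],
            "meta": {"mtu": 1442},
        },
    }]

    for vif in network_info:
        ip = vif["network"]["subnets"][0]["ips"][0]["address"]
        print(f"{vif['devname']} {vif['address']} {ip} "
              f"mtu={vif['network']['meta']['mtu']}")
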
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.208 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.209 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Instance network_info: |[{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.210 2 DEBUG oslo_concurrency.lockutils [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.211 2 DEBUG nova.network.neutron [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Refreshing network info cache for port 24e0cf3f-162d-4105-9362-fc5616a6815a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.587 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:44:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.742 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] resizing rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 19:44:33 compute-0 nova_compute[355794]: 2025-10-02 19:44:33.961 2 DEBUG nova.objects.instance [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'migration_context' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.037 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.103 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.115 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.117 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.119 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.165 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:34 compute-0 ceph-mon[191910]: pgmap v1181: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 02 19:44:34 compute-0 ceph-mon[191910]: osdmap e127: 3 total, 3 up, 3 in
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.169 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.244 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.245 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.288 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.300 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
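
The ephemeral disk follows the same cache-then-import pattern as the root disk, except the base file is created locally: a 1 GiB raw file, formatted VFAT with the label ephemeral0, then imported as the instance's .eph0 RBD image. Replayed as a sketch (commands copied from the log, `env LC_ALL=C LANG=C` prefix omitted):

    import subprocess

    eph = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"

    # Create a 1 GiB raw file, format it VFAT with the ephemeral0 label, then
    # import it into the vms pool as the instance's .eph0 disk.
    subprocess.run(["qemu-img", "create", "-f", "raw", eph, "1G"], check=True)
    subprocess.run(["mkfs", "-t", "vfat", "-n", "ephemeral0", eph], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", eph,
                    "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
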
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.329 2 DEBUG nova.network.neutron [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated VIF entry in instance network info cache for port 24e0cf3f-162d-4105-9362-fc5616a6815a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.331 2 DEBUG nova.network.neutron [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:34 compute-0 nova_compute[355794]: 2025-10-02 19:44:34.350 2 DEBUG oslo_concurrency.lockutils [req-793f4132-f485-4ae9-9c21-0eaf74d1f419 req-582b9a89-afd7-4e53-b578-b8e225c8b28a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 36 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 25 op/s
Oct 02 19:44:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.417 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.620 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.621 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Ensure instance console log exists: /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.622 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.623 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.623 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.627 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Start _get_guest_xml network_info=[{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'ce28338d-119e-49e1-ab67-60da8882593a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encrypted': False, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.636 2 WARNING nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.643 2 DEBUG nova.virt.libvirt.host [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.644 2 DEBUG nova.virt.libvirt.host [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.649 2 DEBUG nova.virt.libvirt.host [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.649 2 DEBUG nova.virt.libvirt.host [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.650 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.651 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:43:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8f0521f8-dc4e-4ca1-bf77-f443ae74db03',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.651 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.652 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.652 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.653 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.653 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.653 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.654 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.654 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.655 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.655 2 DEBUG nova.virt.hardware [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
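
    [annotation] The hardware.py lines above walk the topology search: with no flavor/image constraints (limits default to 65536 each, preference 0:0:0), the code enumerates sockets*cores*threads factorizations of the vCPU count and sorts them, ending at 1:1:1 for one vCPU. A simplified sketch of that enumeration, not nova's exact implementation:

        # Simplified sketch of the topology search logged above: yield every
        # (sockets, cores, threads) factorization of the vCPU count within
        # the limits; one vCPU admits only 1:1:1, matching the chosen result.
        import itertools

        def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                                max_threads=65536):
            for s, c, t in itertools.product(
                    range(1, min(vcpus, max_sockets) + 1),
                    range(1, min(vcpus, max_cores) + 1),
                    range(1, min(vcpus, max_threads) + 1)):
                if s * c * t == vcpus:
                    yield (s, c, t)

        print(list(possible_topologies(1)))   # [(1, 1, 1)] -> "Got 1 possible topologies"
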
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.659 2 DEBUG nova.privsep.utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:44:35 compute-0 nova_compute[355794]: 2025-10-02 19:44:35.660 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:44:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633908792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:36 compute-0 nova_compute[355794]: 2025-10-02 19:44:36.158 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
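
    [annotation] The subprocess above fetches the monitor map so the driver can build the RBD <host> elements that appear later in the domain XML. A sketch of consuming that JSON, assuming the standard mon dump schema ("mons" entries with an "addr" like "192.168.122.100:6789/0"); IPv6 addresses would need more careful splitting:

        # Sketch: run the same "ceph mon dump" as the driver and extract
        # monitor host/port pairs from its JSON output.
        import json, subprocess

        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        mons = json.loads(out)["mons"]
        hosts = [m["addr"].rsplit("/", 1)[0].rsplit(":", 1) for m in mons]
        print(hosts)   # e.g. [["192.168.122.100", "6789"]], matching the XML below
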
Oct 02 19:44:36 compute-0 nova_compute[355794]: 2025-10-02 19:44:36.161 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:36 compute-0 ceph-mon[191910]: pgmap v1183: 321 pgs: 321 active+clean; 36 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 25 op/s
Oct 02 19:44:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3633908792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 46 MiB data, 171 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 02 19:44:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:44:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922471141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:36 compute-0 nova_compute[355794]: 2025-10-02 19:44:36.704 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:36 compute-0 nova_compute[355794]: 2025-10-02 19:44:36.737 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
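
    [annotation] The "does not exist" line above is rbd_utils probing for the instance's config-drive image before deciding to build one. A sketch of such a probe with the python-rados/python-rbd bindings, where opening a missing image raises rbd.ImageNotFound (pool and image name copied from the log):

        # Sketch: check whether an RBD image exists, the condition behind
        # "rbd image ..._disk.config does not exist".
        import rados, rbd

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
        cluster.connect()
        ioctx = cluster.open_ioctx("vms")
        try:
            img = rbd.Image(ioctx, "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config")
            img.close()
            print("image exists")
        except rbd.ImageNotFound:
            print("image does not exist")   # the case logged here
        finally:
            ioctx.close()
            cluster.shutdown()
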
Oct 02 19:44:36 compute-0 nova_compute[355794]: 2025-10-02 19:44:36.746 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/922471141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:44:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456073601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.265 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.269 2 DEBUG nova.virt.libvirt.vif [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-ima937ci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:44:29Z,user_data=None,user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.270 2 DEBUG nova.network.os_vif_util [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.273 2 DEBUG nova.network.os_vif_util [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:e8:fe,bridge_name='br-int',has_traffic_filtering=True,id=24e0cf3f-162d-4105-9362-fc5616a6815a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e0cf3f-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.277 2 DEBUG nova.objects.instance [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.305 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <uuid>d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77</uuid>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <name>instance-00000001</name>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <memory>524288</memory>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <metadata>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:name>test_0</nova:name>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 19:44:35</nova:creationTime>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:flavor name="m1.small">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:memory>512</nova:memory>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:user uuid="811fb7ac717e4ba9b9874e5454ee08f4">admin</nova:user>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:project uuid="1c35486f37b94d43a7bf2f2fa09c70b9">admin</nova:project>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="ce28338d-119e-49e1-ab67-60da8882593a"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <nova:port uuid="24e0cf3f-162d-4105-9362-fc5616a6815a">
Oct 02 19:44:37 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="192.168.0.37" ipVersion="4"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </metadata>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <system>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="serial">d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="uuid">d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </system>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <os>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </os>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <features>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <apic/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </features>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </clock>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </source>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.eph0">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </source>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </source>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:44:37 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:6b:e8:fe"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <target dev="tap24e0cf3f-16"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/console.log" append="off"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </serial>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <video>
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </video>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 19:44:37 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 19:44:37 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 19:44:37 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:44:37 compute-0 nova_compute[355794]: </domain>
Oct 02 19:44:37 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
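
    [annotation] The domain XML dumped above wires all three disks (root, eph0, config drive) to RBD. A quick standard-library sketch for pulling those mappings back out of a saved copy of the XML, with element and attribute names exactly as they appear in the dump:

        # Sketch: list the RBD-backed disks from the domain XML above.
        import xml.etree.ElementTree as ET

        domain_xml = open("instance-00000001.xml").read()  # e.g. saved via "virsh dumpxml"
        dom = ET.fromstring(domain_xml)
        for disk in dom.findall("./devices/disk"):
            src = disk.find("source")
            tgt = disk.find("target")
            if src is not None and src.get("protocol") == "rbd":
                print(tgt.get("dev"), "->", src.get("name"))
        # vda -> vms/d4e04444-..._disk
        # vdb -> vms/d4e04444-..._disk.eph0
        # sda -> vms/d4e04444-..._disk.config
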
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.308 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Preparing to wait for external event network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.308 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.309 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.309 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.310 2 DEBUG nova.virt.libvirt.vif [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-ima937ci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:44:29Z,user_data=None,user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.310 2 DEBUG nova.network.os_vif_util [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.311 2 DEBUG nova.network.os_vif_util [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:e8:fe,bridge_name='br-int',has_traffic_filtering=True,id=24e0cf3f-162d-4105-9362-fc5616a6815a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e0cf3f-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.311 2 DEBUG os_vif [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:e8:fe,bridge_name='br-int',has_traffic_filtering=True,id=24e0cf3f-162d-4105-9362-fc5616a6815a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e0cf3f-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.359 2 DEBUG ovsdbapp.backend.ovs_idl [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.359 2 DEBUG ovsdbapp.backend.ovs_idl [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.359 2 DEBUG ovsdbapp.backend.ovs_idl [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:44:37 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.384 2 INFO oslo.privsep.daemon [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpufvp8gxv/privsep.sock']
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.102 2 INFO oslo.privsep.daemon [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.969 1024 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.975 1024 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.979 1024 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:37.979 1024 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1024
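
    [annotation] The helper command above names the privsep context vif_plug_ovs.privsep.vif_plug, and the daemon then reports exactly CAP_DAC_OVERRIDE|CAP_NET_ADMIN. A hedged reconstruction of what such a context declaration looks like with oslo.privsep (the entrypoint function here is a hypothetical example, not the library's):

        # Sketch: an oslo.privsep context matching the capabilities the
        # daemon logged; decorated functions execute in the privileged
        # daemon (pid 1024 above), not in the calling nova-compute process.
        from oslo_privsep import capabilities as c
        from oslo_privsep import priv_context

        vif_plug = priv_context.PrivContext(
            "vif_plug_ovs",
            cfg_section="vif_plug_ovs_privileged",
            pypath=__name__ + ".vif_plug",    # must be importable in real use
            capabilities=[c.CAP_NET_ADMIN, c.CAP_DAC_OVERRIDE],
        )

        @vif_plug.entrypoint
        def set_device_state(device: str, up: bool) -> None:
            ...   # hypothetical privileged operation for illustration
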
Oct 02 19:44:38 compute-0 ceph-mon[191910]: pgmap v1184: 321 pgs: 321 active+clean; 46 MiB data, 171 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 02 19:44:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1456073601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24e0cf3f-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24e0cf3f-16, col_values=(('external_ids', {'iface-id': '24e0cf3f-162d-4105-9362-fc5616a6815a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:e8:fe', 'vm-uuid': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:38 compute-0 NetworkManager[44968]: <info>  [1759434278.4106] manager: (tap24e0cf3f-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.418 2 INFO os_vif [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:e8:fe,bridge_name='br-int',has_traffic_filtering=True,id=24e0cf3f-162d-4105-9362-fc5616a6815a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e0cf3f-16')
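
    [annotation] The AddPortCommand/DbSetCommand transaction above is what "Successfully plugged vif" summarizes: create the port on br-int and stamp the Interface row's external_ids so OVN can claim it. The same effect can be reproduced by hand with ovs-vsctl; a sketch with the values copied from the log:

        # Sketch: ovs-vsctl equivalent of the ovsdbapp transaction above.
        import subprocess

        port = "tap24e0cf3f-16"
        subprocess.check_call([
            "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
            "set", "Interface", port,
            "external_ids:iface-id=24e0cf3f-162d-4105-9362-fc5616a6815a",
            "external_ids:iface-status=active",
            "external_ids:attached-mac=fa:16:3e:6b:e8:fe",
            "external_ids:vm-uuid=d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77",
        ])
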
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.492 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.493 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.493 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.493 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No VIF found with MAC fa:16:3e:6b:e8:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.494 2 INFO nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Using config drive
Oct 02 19:44:38 compute-0 nova_compute[355794]: 2025-10-02 19:44:38.529 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 47 MiB data, 171 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 19:44:39 compute-0 podman[420570]: 2025-10-02 19:44:39.708484633 +0000 UTC m=+0.129029944 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.064 2 INFO nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Creating config drive at /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.071 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9zfna1p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 02 19:44:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 02 19:44:40 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 02 19:44:40 compute-0 ceph-mon[191910]: pgmap v1185: 321 pgs: 321 active+clean; 47 MiB data, 171 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 02 19:44:40 compute-0 ceph-mon[191910]: osdmap e128: 3 total, 3 up, 3 in
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.234 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9zfna1p" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.314 2 DEBUG nova.storage.rbd_utils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.327 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.634 2 DEBUG oslo_concurrency.processutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.636 2 INFO nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Deleting local config drive /var/lib/nova/instances/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.config because it was imported into RBD.
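
    [annotation] Lines 19:44:40.064-40.636 form one round trip: build the config-drive ISO locally with mkisofs, import it into the vms pool as ..._disk.config, then delete the local copy. A condensed sketch of that sequence with the arguments taken from the logged commands (the staging directory is a per-boot tempdir, shown here as a placeholder):

        # Sketch: the config-drive build/import/cleanup cycle logged above.
        import os, subprocess

        uuid = "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"
        iso = f"/var/lib/nova/instances/{uuid}/disk.config"

        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
            "-allow-multidot", "-l", "-publisher",
            "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
            "-quiet", "-J", "-r", "-V", "config-2",
            "/tmp/metadata-staging"])            # placeholder for the tempdir
        subprocess.check_call([
            "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
            "--image-format=2", "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf"])
        os.unlink(iso)   # "Deleting local config drive ... imported into RBD"
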
Oct 02 19:44:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 64 op/s
Oct 02 19:44:40 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:44:40 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:44:40 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 02 19:44:40 compute-0 kernel: tap24e0cf3f-16: entered promiscuous mode
Oct 02 19:44:40 compute-0 NetworkManager[44968]: <info>  [1759434280.8602] manager: (tap24e0cf3f-16): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct 02 19:44:40 compute-0 ovn_controller[88435]: 2025-10-02T19:44:40Z|00027|binding|INFO|Claiming lport 24e0cf3f-162d-4105-9362-fc5616a6815a for this chassis.
Oct 02 19:44:40 compute-0 ovn_controller[88435]: 2025-10-02T19:44:40Z|00028|binding|INFO|24e0cf3f-162d-4105-9362-fc5616a6815a: Claiming fa:16:3e:6b:e8:fe 192.168.0.37
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:40 compute-0 nova_compute[355794]: 2025-10-02 19:44:40.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:40.898 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:e8:fe 192.168.0.37'], port_security=['fa:16:3e:6b:e8:fe 192.168.0.37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.37/24', 'neutron:device_id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=24e0cf3f-162d-4105-9362-fc5616a6815a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:40.901 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 24e0cf3f-162d-4105-9362-fc5616a6815a in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 bound to our chassis
Oct 02 19:44:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:40.908 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:44:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:40.912 285790 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpix0dvtny/privsep.sock']
Oct 02 19:44:40 compute-0 systemd-udevd[420661]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:44:40 compute-0 NetworkManager[44968]: <info>  [1759434280.9520] device (tap24e0cf3f-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:44:40 compute-0 NetworkManager[44968]: <info>  [1759434280.9529] device (tap24e0cf3f-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:44:40 compute-0 systemd-machined[137646]: New machine qemu-1-instance-00000001.
Oct 02 19:44:41 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 02 19:44:41 compute-0 nova_compute[355794]: 2025-10-02 19:44:41.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:41 compute-0 ovn_controller[88435]: 2025-10-02T19:44:41Z|00029|binding|INFO|Setting lport 24e0cf3f-162d-4105-9362-fc5616a6815a ovn-installed in OVS
Oct 02 19:44:41 compute-0 ovn_controller[88435]: 2025-10-02T19:44:41Z|00030|binding|INFO|Setting lport 24e0cf3f-162d-4105-9362-fc5616a6815a up in Southbound
Oct 02 19:44:41 compute-0 nova_compute[355794]: 2025-10-02 19:44:41.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:41 compute-0 nova_compute[355794]: 2025-10-02 19:44:41.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:41 compute-0 nova_compute[355794]: 2025-10-02 19:44:41.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.775 285790 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.776 285790 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpix0dvtny/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.627 420728 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.636 420728 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.641 420728 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.641 420728 INFO oslo.privsep.daemon [-] privsep daemon running as pid 420728
Oct 02 19:44:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:41.782 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[693c52ac-cda8-4a4a-b0a9-665e525a5a6f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.065 2 DEBUG nova.compute.manager [req-9a91d90a-1369-43f2-9ff8-0c93bc4f57ef req-51d4b547-c6e6-48db-8846-a45dc15839b4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Received event network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.065 2 DEBUG oslo_concurrency.lockutils [req-9a91d90a-1369-43f2-9ff8-0c93bc4f57ef req-51d4b547-c6e6-48db-8846-a45dc15839b4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.066 2 DEBUG oslo_concurrency.lockutils [req-9a91d90a-1369-43f2-9ff8-0c93bc4f57ef req-51d4b547-c6e6-48db-8846-a45dc15839b4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.066 2 DEBUG oslo_concurrency.lockutils [req-9a91d90a-1369-43f2-9ff8-0c93bc4f57ef req-51d4b547-c6e6-48db-8846-a45dc15839b4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.066 2 DEBUG nova.compute.manager [req-9a91d90a-1369-43f2-9ff8-0c93bc4f57ef req-51d4b547-c6e6-48db-8846-a45dc15839b4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Processing event network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:44:42 compute-0 ceph-mon[191910]: pgmap v1187: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 64 op/s
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.614 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434282.6137798, d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.615 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] VM Started (Lifecycle Event)
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.617 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.625 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.632 2 INFO nova.virt.libvirt.driver [-] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Instance spawned successfully.
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.632 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:44:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 56 op/s
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.712 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.723 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.746 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.747 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.748 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:42.748 420728 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.749 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:42.748 420728 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:42.748 420728 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.750 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.751 2 DEBUG nova.virt.libvirt.driver [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.757 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.758 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434282.6139064, d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.758 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] VM Paused (Lifecycle Event)
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.841 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.850 2 INFO nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Took 13.56 seconds to spawn the instance on the hypervisor.
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.851 2 DEBUG nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.854 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434282.6235726, d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.855 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] VM Resumed (Lifecycle Event)
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.887 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.896 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.939 2 INFO nova.compute.manager [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Took 14.92 seconds to build instance.
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.962 2 DEBUG oslo_concurrency.lockutils [None req-90c919fb-7dad-44f4-bef6-a1261d1a8f47 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:42 compute-0 nova_compute[355794]: 2025-10-02 19:44:42.964 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:44:43 compute-0 nova_compute[355794]: 2025-10-02 19:44:43.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:43 compute-0 nova_compute[355794]: 2025-10-02 19:44:43.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:44:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.620 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d44836ed-1d1f-4de8-8bdc-15deadd3ba67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.622 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e3c6c60-21 in ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.625 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e3c6c60-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.625 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[77fee17f-9493-425b-a65b-6f37b547caa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.630 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbd1c7a-683e-4ad8-80dd-cc158ad62869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.667 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bbb39c-01cc-4d34-b4b3-6720a9bad78d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.711 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[6e101233-837e-4632-8f89-dcd125c518ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:43.715 285790 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmphsr_33w0/privsep.sock']
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.160 2 DEBUG nova.compute.manager [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Received event network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.161 2 DEBUG oslo_concurrency.lockutils [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.162 2 DEBUG oslo_concurrency.lockutils [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.162 2 DEBUG oslo_concurrency.lockutils [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.163 2 DEBUG nova.compute.manager [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] No waiting events found dispatching network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.164 2 WARNING nova.compute.manager [req-f725539c-b993-4dd6-a03a-1a4456d374b7 req-01fe9bd8-7844-4c21-8d4c-ff9d107119c7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Received unexpected event network-vif-plugged-24e0cf3f-162d-4105-9362-fc5616a6815a for instance with vm_state active and task_state None.
Oct 02 19:44:44 compute-0 ceph-mon[191910]: pgmap v1188: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 56 op/s
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.558 285790 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.560 285790 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphsr_33w0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.406 420769 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.414 420769 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.419 420769 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.419 420769 INFO oslo.privsep.daemon [-] privsep daemon running as pid 420769
Oct 02 19:44:44 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:44.568 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[788d58b0-210f-4a49-bf68-07691cbf315e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:44:44 compute-0 nova_compute[355794]: 2025-10-02 19:44:44.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:44:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 838 KiB/s rd, 744 KiB/s wr, 45 op/s
Oct 02 19:44:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.134 420769 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.134 420769 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.134 420769 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:45 compute-0 nova_compute[355794]: 2025-10-02 19:44:45.260 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:45 compute-0 nova_compute[355794]: 2025-10-02 19:44:45.261 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:45 compute-0 nova_compute[355794]: 2025-10-02 19:44:45.262 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:44:45 compute-0 nova_compute[355794]: 2025-10-02 19:44:45.262 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:45 compute-0 ceph-mon[191910]: pgmap v1189: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 838 KiB/s rd, 744 KiB/s wr, 45 op/s
Oct 02 19:44:45 compute-0 podman[420774]: 2025-10-02 19:44:45.694754001 +0000 UTC m=+0.115274212 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:44:45 compute-0 podman[420775]: 2025-10-02 19:44:45.709970872 +0000 UTC m=+0.133244737 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.739 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[4feb39cc-193e-49a9-976a-062fbedb20c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.759 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[41623e6f-9b1f-4db1-b4c8-db7f06d10c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 NetworkManager[44968]: <info>  [1759434285.7613] manager: (tap6e3c6c60-20): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Oct 02 19:44:45 compute-0 systemd-udevd[420821]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.794 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[3446a369-6a6b-4b02-a649-672b1fe2f188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.797 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[85285cd4-9883-47d8-ab35-251ab22c27ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 NetworkManager[44968]: <info>  [1759434285.8246] device (tap6e3c6c60-20): carrier: link connected
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.828 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf43326-362d-4951-bc83-873029e0cb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.849 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1faeae-a9b0-4e83-ac7b-6ef18861998a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 25030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420840, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.868 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[78d80ec4-cb27-4b9a-8186-d24409954864]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:dfb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545739, 'tstamp': 545739}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420841, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.886 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ec6fc9-768a-4215-97ee-03df24ae688c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 25030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420842, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:45.925 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[817859d6-9ee5-4a8b-b088-0ebb636c4363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.006 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e7be3-caa3-422c-be4f-d8c7ddf668cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.009 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.009 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.010 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:46 compute-0 kernel: tap6e3c6c60-20: entered promiscuous mode
Oct 02 19:44:46 compute-0 nova_compute[355794]: 2025-10-02 19:44:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:46 compute-0 NetworkManager[44968]: <info>  [1759434286.0180] manager: (tap6e3c6c60-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.021 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:46 compute-0 nova_compute[355794]: 2025-10-02 19:44:46.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:46 compute-0 nova_compute[355794]: 2025-10-02 19:44:46.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.030 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e3c6c60-2fbc-4181-942a-00056fc667f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e3c6c60-2fbc-4181-942a-00056fc667f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:44:46 compute-0 ovn_controller[88435]: 2025-10-02T19:44:46Z|00031|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.035 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[53e389eb-eede-4d95-8b0b-3e0cdf2e4d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.038 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: global
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/6e3c6c60-2fbc-4181-942a-00056fc667f2.pid.haproxy
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:44:46 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:44:46.043 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'env', 'PROCESS_TAG=haproxy-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e3c6c60-2fbc-4181-942a-00056fc667f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:44:46 compute-0 nova_compute[355794]: 2025-10-02 19:44:46.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:46 compute-0 podman[420875]: 2025-10-02 19:44:46.612679055 +0000 UTC m=+0.123087893 container create 92ac1b751f0d9280ef85c4595de3067de425fc11975dadeae24f0ce3bbe74462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:44:46 compute-0 podman[420875]: 2025-10-02 19:44:46.546595011 +0000 UTC m=+0.057003909 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:44:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 757 KiB/s rd, 19 KiB/s wr, 53 op/s
Oct 02 19:44:46 compute-0 systemd[1]: Started libpod-conmon-92ac1b751f0d9280ef85c4595de3067de425fc11975dadeae24f0ce3bbe74462.scope.
Oct 02 19:44:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12bb9417afcc6f3bde84fb67810f9e4bd21ac4b62de4f893c089485ab7f87cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:46 compute-0 podman[420875]: 2025-10-02 19:44:46.740002952 +0000 UTC m=+0.250411870 container init 92ac1b751f0d9280ef85c4595de3067de425fc11975dadeae24f0ce3bbe74462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:44:46 compute-0 podman[420875]: 2025-10-02 19:44:46.749096567 +0000 UTC m=+0.259505415 container start 92ac1b751f0d9280ef85c4595de3067de425fc11975dadeae24f0ce3bbe74462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:44:46 compute-0 neutron-haproxy-ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2[420889]: [NOTICE]   (420893) : New worker (420895) forked
Oct 02 19:44:46 compute-0 neutron-haproxy-ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2[420889]: [NOTICE]   (420893) : Loading success.
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.080 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.104 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.105 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.106 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.107 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.108 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.109 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.603 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.604 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.604 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.605 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:44:47 compute-0 nova_compute[355794]: 2025-10-02 19:44:47.606 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:47 compute-0 ceph-mon[191910]: pgmap v1190: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 757 KiB/s rd, 19 KiB/s wr, 53 op/s
Oct 02 19:44:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:44:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057098200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.137 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
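The `ceph df --format=json` call that oslo.concurrency wraps above is an ordinary subprocess. A minimal sketch of issuing the same command and reading cluster capacity (the top-level "stats" key and its field names follow the usual `ceph df` JSON layout, so treat them as an assumption):

    import json
    import subprocess

    # Same command line as in the log, executed directly.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]  # assumed top-level key
    print(stats["total_bytes"], stats["total_avail_bytes"])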
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.250 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.251 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.251 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 866 KiB/s rd, 17 KiB/s wr, 53 op/s
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.700 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.703 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4083MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
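The pci_devices list in the resource view above is also plain JSON. A small sketch grouping the devices by vendor (the literal is a trimmed excerpt of the log line; 8086 is Intel, 1af4 is virtio):

    import json
    from collections import Counter

    # Trimmed excerpt of the pci_devices list from the resource view above.
    pci_devices = json.loads('''[
      {"address": "0000:00:01.2", "vendor_id": "8086", "product_id": "7020"},
      {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
    ]''')

    # Count devices per PCI vendor id.
    print(Counter(dev["vendor_id"] for dev in pci_devices))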
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.703 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.703 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4057098200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.830 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.830 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.831 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:44:48 compute-0 nova_compute[355794]: 2025-10-02 19:44:48.873 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:44:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026197031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.350 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.358 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.395 2 ERROR nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [req-21f55f17-d6a3-45b3-8b9b-52034fa8db7f] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 9d5f6e5d-658d-4616-b5da-8b0a4093afb0.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-21f55f17-d6a3-45b3-8b9b-52034fa8db7f"}]}
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.415 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.433 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.434 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.454 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.496 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
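The 409 above is placement's optimistic-concurrency check: the PUT carried a stale resource_provider_generation, so the client re-reads the provider (and, as in the lines above, its inventories, aggregates, and traits) and retries. A minimal sketch of that read-modify-retry loop against the placement REST API; the endpoint, token, microversion, and retry budget are assumptions, not values from this log:

    import requests

    PLACEMENT = "http://placement.example.com"  # assumed endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",   # assumed auth
               "OpenStack-API-Version": "placement 1.26"}

    def set_inventory(rp_uuid, inventories, attempts=3):
        for _ in range(attempts):
            # Read the provider's current generation.
            rp = requests.get(f"{PLACEMENT}/resource_providers/{rp_uuid}",
                              headers=HEADERS).json()
            body = {"resource_provider_generation": rp["generation"],
                    "inventories": inventories}
            resp = requests.put(
                f"{PLACEMENT}/resource_providers/{rp_uuid}/inventories",
                json=body, headers=HEADERS)
            if resp.status_code != 409:
                resp.raise_for_status()
                return resp.json()
            # placement.concurrent_update: someone bumped the generation
            # between our GET and PUT; loop to refresh and retry.
        raise RuntimeError("generation conflict persisted")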
Oct 02 19:44:49 compute-0 nova_compute[355794]: 2025-10-02 19:44:49.548 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:49 compute-0 ceph-mon[191910]: pgmap v1191: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 866 KiB/s rd, 17 KiB/s wr, 53 op/s
Oct 02 19:44:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1026197031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:44:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238447073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.046 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.055 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:44:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.141 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updated inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.141 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.142 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.175 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:44:50 compute-0 nova_compute[355794]: 2025-10-02 19:44:50.175 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 69 op/s
Oct 02 19:44:50 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/238447073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:44:51 compute-0 podman[420973]: 2025-10-02 19:44:51.691474641 +0000 UTC m=+0.117424871 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:44:51 compute-0 podman[420974]: 2025-10-02 19:44:51.715830728 +0000 UTC m=+0.129558218 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
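The health_status=healthy events above come from podman's scheduled healthcheck runs; the same check can be invoked by hand. A minimal sketch (the container name is taken from the log; the rest is generic podman CLI usage):

    import subprocess

    # `podman healthcheck run` exits 0 when the container's configured
    # healthcheck command succeeds, non-zero otherwise.
    result = subprocess.run(["podman", "healthcheck", "run", "kepler"])
    print("healthy" if result.returncode == 0 else "unhealthy")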
Oct 02 19:44:51 compute-0 ceph-mon[191910]: pgmap v1192: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 69 op/s
Oct 02 19:44:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Oct 02 19:44:53 compute-0 nova_compute[355794]: 2025-10-02 19:44:53.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:53 compute-0 nova_compute[355794]: 2025-10-02 19:44:53.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:53 compute-0 ceph-mon[191910]: pgmap v1193: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Oct 02 19:44:54 compute-0 podman[421009]: 2025-10-02 19:44:54.664877572 +0000 UTC m=+0.083689550 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:44:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 426 B/s wr, 58 op/s
Oct 02 19:44:54 compute-0 podman[421010]: 2025-10-02 19:44:54.683312749 +0000 UTC m=+0.102461486 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:44:54 compute-0 podman[421011]: 2025-10-02 19:44:54.709192998 +0000 UTC m=+0.129591469 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:44:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:44:55 compute-0 ceph-mon[191910]: pgmap v1194: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 426 B/s wr, 58 op/s
Oct 02 19:44:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 49 op/s
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0367] manager: (patch-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Oct 02 19:44:57 compute-0 ovn_controller[88435]: 2025-10-02T19:44:57Z|00032|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0421] device (patch-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0556] manager: (patch-br-int-to-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0562] device (patch-br-int-to-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0579] manager: (patch-br-int-to-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0590] manager: (patch-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0598] device (patch-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 19:44:57 compute-0 NetworkManager[44968]: <info>  [1759434297.0604] device (patch-br-int-to-provnet-259fe2ab-99ce-449a-ac51-8ea47835d151)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:57 compute-0 ovn_controller[88435]: 2025-10-02T19:44:57Z|00033|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.655 2 DEBUG nova.compute.manager [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Received event network-changed-24e0cf3f-162d-4105-9362-fc5616a6815a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.656 2 DEBUG nova.compute.manager [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Refreshing instance network info cache due to event network-changed-24e0cf3f-162d-4105-9362-fc5616a6815a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.656 2 DEBUG oslo_concurrency.lockutils [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.657 2 DEBUG oslo_concurrency.lockutils [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:57 compute-0 nova_compute[355794]: 2025-10-02 19:44:57.658 2 DEBUG nova.network.neutron [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Refreshing network info cache for port 24e0cf3f-162d-4105-9362-fc5616a6815a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
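The req-fad4c47c flow above is Neutron notifying Nova of a port change through the server external-events API, which is what triggers the cache refresh. A minimal sketch of the POST that produces such a `network-changed` event; the Nova endpoint and token are assumptions, while the event name, server UUID, and port tag are taken from the log:

    import requests

    NOVA = "http://nova.example.com/v2.1"      # assumed endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}  # assumed auth

    # Same event as in the log: network-changed for the instance's port.
    body = {"events": [{
        "server_uuid": "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77",
        "name": "network-changed",
        "tag": "24e0cf3f-162d-4105-9362-fc5616a6815a",
    }]}
    requests.post(f"{NOVA}/os-server-external-events",
                  json=body, headers=HEADERS).raise_for_status()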
Oct 02 19:44:57 compute-0 ceph-mon[191910]: pgmap v1195: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 49 op/s
Oct 02 19:44:58 compute-0 nova_compute[355794]: 2025-10-02 19:44:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:58 compute-0 nova_compute[355794]: 2025-10-02 19:44:58.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 899 KiB/s rd, 28 op/s
Oct 02 19:44:59 compute-0 nova_compute[355794]: 2025-10-02 19:44:59.013 2 DEBUG nova.network.neutron [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated VIF entry in instance network info cache for port 24e0cf3f-162d-4105-9362-fc5616a6815a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:59 compute-0 nova_compute[355794]: 2025-10-02 19:44:59.015 2 DEBUG nova.network.neutron [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:59 compute-0 nova_compute[355794]: 2025-10-02 19:44:59.144 2 DEBUG oslo_concurrency.lockutils [req-fad4c47c-86da-458b-a44f-b25463a8e4ec req-8c22b14b-d301-40d2-9b5b-606b95255790 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:59 compute-0 podman[421073]: 2025-10-02 19:44:59.706205086 +0000 UTC m=+0.121661404 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:44:59 compute-0 podman[421074]: 2025-10-02 19:44:59.706984637 +0000 UTC m=+0.119792194 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:44:59 compute-0 podman[157186]: time="2025-10-02T19:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:44:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:44:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9032 "" "Go-http-client/1.1"
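The two GET lines above are the libpod REST API being queried over podman's unix socket (journald records only the request line). A minimal sketch of issuing the same request with raw HTTP over a unix socket; the socket path is an assumption:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])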
Oct 02 19:44:59 compute-0 ceph-mon[191910]: pgmap v1196: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 899 KiB/s rd, 28 op/s
Oct 02 19:45:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 807 KiB/s rd, 25 op/s
Oct 02 19:45:01 compute-0 openstack_network_exporter[372736]: ERROR   19:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:45:01 compute-0 openstack_network_exporter[372736]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:01 compute-0 openstack_network_exporter[372736]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:01 compute-0 openstack_network_exporter[372736]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:45:01 compute-0 openstack_network_exporter[372736]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
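The exporter errors above mean it found no ovs/ovn control sockets: appctl-style tooling resolves a daemon's PID from `<name>.pid` in the runtime directory and then dials `<name>.<pid>.ctl`. A small sketch of the same lookup (the runtime directory follows the usual OVS layout and is an assumption here):

    import glob

    # The exporter fails when no "<daemon>.<pid>.ctl" sockets exist here.
    sockets = glob.glob("/run/openvswitch/*.ctl")  # assumed runtime dir
    print(sockets or "no control socket files found for the ovs db server")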
Oct 02 19:45:01 compute-0 ceph-mon[191910]: pgmap v1197: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 807 KiB/s rd, 25 op/s
Oct 02 19:45:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:02 compute-0 sshd-session[421114]: Invalid user admin from 139.19.117.129 port 51428
Oct 02 19:45:02 compute-0 sshd-session[421114]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Oct 02 19:45:03 compute-0 nova_compute[355794]: 2025-10-02 19:45:03.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:03 compute-0 nova_compute[355794]: 2025-10-02 19:45:03.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:45:03
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
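The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of 10 possible changes. Its state can be queried the same way nova queries `ceph df`; a minimal subprocess sketch (the command is standard ceph CLI, but the output keys are an assumption):

    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format=json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # assumed keys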
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:03 compute-0 ceph-mon[191910]: pgmap v1198: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.295 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; each polling cycle may therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.296 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
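The two lines above make the scheduling constraint explicit: with more pollsters than worker threads, the batch is serialized. A tiny self-contained illustration of that effect with a one-worker pool (timings are illustrative; this is not ceilometer code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, 3 pollsters
        list(pool.map(poll, ["disk.read.requests", "cpu", "memory.usage"]))
    # With 3 tasks and 1 worker the batch takes ~0.3 s instead of ~0.1 s.
    print(f"{time.monotonic() - start:.2f}s")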
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.307 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
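
All of the registration lines above point at the same ThreadPoolExecutor object (0x7f3438cb5f40): every pollster that stevedore discovers is bound to one shared executor, each with fresh empty caches (cache [{}], pollster history [{}], discovery cache [{}]). A minimal sketch of that pattern, assuming the 'ceilometer.poll.compute' entry-point namespace and using a hypothetical register_pollster_execution stand-in rather than ceilometer's actual manager code:

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # One shared executor, mirroring the single ThreadPoolExecutor id
    # repeated in every registration line above.
    executor = ThreadPoolExecutor(max_workers=10)
    registrations = []

    def register_pollster_execution(ext, source='pollsters'):
        # Hypothetical stand-in for the manager.py:276 method of the same
        # name: each pollster starts with the empty caches the log shows.
        registrations.append({'pollster': ext, 'source': source,
                              'executor': executor,
                              'cache': {}, 'history': {},
                              'discovery_cache': {}})

    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)
    for ext in mgr:
        register_pollster_execution(ext)
    print('registered %d pollsters' % len(registrations))
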
Oct 02 19:45:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:04.659 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:45:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.430 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Thu, 02 Oct 2025 19:45:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-dc416d86-f8b3-4d0e-87d7-6caa02f71f27 x-openstack-request-id: req-dc416d86-f8b3-4d0e-87d7-6caa02f71f27 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.431 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", "name": "test_0", "status": "ACTIVE", "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "user_id": "811fb7ac717e4ba9b9874e5454ee08f4", "metadata": {}, "hostId": "0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d", "image": {"id": "ce28338d-119e-49e1-ab67-60da8882593a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce28338d-119e-49e1-ab67-60da8882593a"}]}, "flavor": {"id": "8f0521f8-dc4e-4ca1-bf77-f443ae74db03", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8f0521f8-dc4e-4ca1-bf77-f443ae74db03"}]}, "created": "2025-10-02T19:44:24Z", "updated": "2025-10-02T19:44:42Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.37", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6b:e8:fe"}, {"version": 4, "addr": "192.168.122.205", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6b:e8:fe"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:44:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.431 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 used request id req-dc416d86-f8b3-4d0e-87d7-6caa02f71f27 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.434 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
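
The REQ/RESP pair above is keystoneauth1's HTTP logging of an ordinary novaclient servers.get() call: the X-Auth-Token is hashed ({SHA256}...) before being written to the log, and the JSON body is then reduced to the instance-data dict that discover_libvirt_polling reports. A sketch of the same lookup, with placeholder credentials and endpoint (none of these values come from this deployment's real auth config):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder auth; substitute the deployment's actual values.
    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.1', session=sess)

    # Same call as the logged GET /v2.1/servers/<uuid>.
    server = nova.servers.get('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    print(server.name, getattr(server, 'OS-EXT-STS:vm_state'))
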
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:45:05.434763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
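
Note the two thread tags: the polling thread (14) emits the "heartbeat update" line, while a separate status thread (12) logs "Updated heartbeat" slightly later, which is why those lines sometimes land after later-timestamped polling-thread entries further down. A rough reconstruction of that producer/consumer pattern, not ceilometer's actual implementation:

    import queue
    import threading
    from datetime import datetime

    beats = queue.Queue()   # filled by the polling thread ("14")
    status = {}             # drained by the status thread ("12")

    def heartbeat(pollster_name):
        # Polling-thread side: record the moment just before sampling.
        beats.put((pollster_name, datetime.now()))

    def _update_status():
        # Status-thread side: runs independently, so its log lines can
        # appear out of order relative to the polling thread's.
        while True:
            name, ts = beats.get()
            if name is None:
                break
            status[name] = ts
            print('Updated heartbeat for %s (%s)' % (name, ts.isoformat()))

    worker = threading.Thread(target=_update_status, daemon=True)
    worker.start()
    heartbeat('disk.device.read.requests')
    beats.put((None, None))
    worker.join()
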
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.504 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
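
Each disk.device.* cycle above yields one sample per block device, three per instance here (read-request volumes 573 / 0 / 1, consistent with a root disk, an ephemeral disk and a config drive). The counters come from libvirt's blockStats(), whose first field is what surfaces as disk.device.read.requests. A minimal sketch; the device names are assumptions, since ceilometer parses them from the domain XML rather than hard-coding them:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000001')  # name from the log
        # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
        for dev in ('vda', 'vdb', 'sda'):  # assumed device names
            try:
                rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
                print('%s/disk.device.read.requests volume: %d' % (dev, rd_req))
            except libvirt.libvirtError:
                pass  # this domain has no such device
    finally:
        conn.close()
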
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:45:05.507228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.545 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.546 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.546 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:45:05.548234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:45:05.549491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:45:05.550904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.625 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
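
The power.state volume of 1 is libvirt's domain state code (VIR_DOMAIN_RUNNING == 1), consistent with the "OS-EXT-STS:power_state": 1 in the Nova response earlier. A sketch of reading it directly:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000001')
        # dom.info() -> [state, maxMem KiB, memory KiB, nrVirtCpu, cpuTime ns];
        # the state code is the sample value.
        state = dom.info()[0]
        print('power.state volume: %d (running=%s)' %
              (state, state == libvirt.VIR_DOMAIN_RUNNING))
    finally:
        conn.close()
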
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.626 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.627 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.627 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.628 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:45:05.626322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:45:05.628479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.650 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 / tap24e0cf3f-16 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.650 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
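
The "No delta meter predecessor" message explains the volume of 0: a *.delta meter is the difference between two consecutive readings for the same instance/VNIC key, and on the first poll after agent start there is nothing to subtract from, so 0 is reported. A stand-alone sketch of that predecessor cache; the function and key layout are illustrative, not ceilometer's:

    _previous = {}  # (instance, vnic, counter) -> last cumulative reading

    def delta(instance_id, vnic, counter, value):
        key = (instance_id, vnic, counter)
        prev = _previous.get(key)
        _previous[key] = value
        if prev is None:
            # First poll: "No delta meter predecessor for <uuid> / tap...".
            return 0
        return value - prev

    # First reading reports 0, matching the sample above; later polls
    # report the increment since the previous reading.
    print(delta('d4e04444', 'tap24e0cf3f-16', 'network.incoming.bytes', 4096))  # 0
    print(delta('d4e04444', 'tap24e0cf3f-16', 'network.incoming.bytes', 6144))  # 2048
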
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.653 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.653 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:45:05.651898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:45:05.653102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.653 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
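
This ERROR is expected behavior rather than a failure: LibvirtInspector exposes cumulative counters only, so the legacy *.rate pollsters raise PollsterPermanentError and the manager blacklists the resource to avoid re-polling it every interval (the same happens for network.outgoing.bytes.rate below). A sketch of that blacklisting contract, using a local stand-in for ceilometer.polling.plugin_base.PollsterPermanentError:

    class PollsterPermanentError(Exception):
        # Local stand-in for ceilometer.polling.plugin_base's class.
        def __init__(self, resources):
            super().__init__(resources)
            self.fail = resources

    blacklist = {}  # pollster name -> resources never to poll again

    def poll(name, resources, get_samples):
        todo = [r for r in resources if r not in blacklist.get(name, set())]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.setdefault(name, set()).update(err.fail)
            print('Prevent pollster %s from polling %s anymore!' % (name, err.fail))
            return []

    def rate_samples(resources):
        # LibvirtInspector does not provide data for rate pollsters.
        if resources:
            raise PollsterPermanentError(resources)
        return []

    poll('network.outgoing.bytes.rate', ['test_0'], rate_samples)  # blacklists test_0
    poll('network.outgoing.bytes.rate', ['test_0'], rate_samples)  # skipped silently
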
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:45:05.658224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.659 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.660 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:45:05.659798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:45:05.660948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.662 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.662 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.664 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.665 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:45:05.662131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:45:05.663331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.665 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:45:05.664442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.666 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.667 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.667 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.668 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.668 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:45:05.666761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.668 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.668 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:45:05.668060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
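[Editor's note] Two workers cooperate on the heartbeats visible here: pid 14 announces one per meter ("Pollster heartbeat update", manager.py:636) and pid 12 later records its timestamp ("Updated heartbeat for ...", manager.py:502). A hypothetical sketch of that two-step flow; the plain dict stands in for whatever IPC the real agent uses between workers, and all names are illustrative:

    import datetime

    heartbeats = {}

    def heartbeat(pollster_name):
        # pid 14's side: note the moment this meter was polled
        heartbeats[pollster_name] = datetime.datetime.utcnow()

    def update_status(pollster_name):
        # pid 12's side: persist/report the last-seen timestamp
        ts = heartbeats.get(pollster_name)
        if ts is not None:
            print(f"Updated heartbeat for {pollster_name} ({ts.isoformat()})")

    heartbeat("disk.device.allocation")
    update_status("disk.device.allocation")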
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
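[Editor's note] The ERROR above is the permanent-failure path: when the libvirt inspector can never provide a metric for a resource (here, outgoing-bytes rates for instance test_0), the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError carrying the failing resources, and the manager stops offering them to that pollster on that source. A sketch of the mechanism, assuming an illustrative blacklist structure that is not ceilometer's actual internals:

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    blacklist = {}  # (source, pollster) -> set of resource ids

    def poll(source, pollster, resources, get_samples):
        banned = blacklist.setdefault((source, pollster), set())
        usable = [r for r in resources if r not in banned]
        try:
            return get_samples(usable)
        except PollsterPermanentError as e:
            banned.update(e.resources)  # "Prevent pollster ... anymore!"
            return []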
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:45:05.669939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
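[Editor's note] Unlike the permanent error above, the memory.usage case is transient: the hypervisor simply returned no value ("volume: Unavailable"), so the sample is skipped with a WARNING and polling finishes normally. A minimal sketch of that guard, with illustrative names:

    class NoVolumeException(Exception):
        pass

    def stats_to_sample(instance_id, meter, volume):
        if volume is None:  # rendered as "volume: Unavailable" in the debug log
            raise NoVolumeException()
        return {"resource_id": instance_id, "meter": meter, "volume": volume}

    try:
        stats_to_sample("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", "memory.usage", None)
    except NoVolumeException:
        print("WARNING: memory.usage statistic is not available")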
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:45:05.670969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.673 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:45:05.672022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.673 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:45:05.673296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.674 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:45:05.674803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.675 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.675 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.676 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.677 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:45:05.676661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:45:05.677825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.678 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 22240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.679 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:45:05.679152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1320159866 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:45:05.680266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.680 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.681 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 2347893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.682 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:45:05.683 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:05 compute-0 ceph-mon[191910]: pgmap v1199: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:07 compute-0 ceph-mon[191910]: pgmap v1200: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
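[Editor's note] The mon and mgr relay a pgmap summary every couple of seconds; when scraping journals rather than querying Ceph directly, the line format is regular enough to parse. An illustrative parser for these summary lines:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    line = ("pgmap v1199: 321 pgs: 321 active+clean; "
            "49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    print(m.groupdict())
    # {'version': '1199', 'pgs': '321', 'data': '49 MiB', 'used': '181 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}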
Oct 02 19:45:08 compute-0 nova_compute[355794]: 2025-10-02 19:45:08.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:08 compute-0 nova_compute[355794]: 2025-10-02 19:45:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
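[Editor's note] The recurring "[POLLIN] on fd 24" lines come from the OVS Python poller (ovs/poller.py __log_wakeup) noting that the OVSDB connection's socket became readable. The underlying mechanism is an ordinary poll(2) loop; a self-contained sketch with an illustrative pipe standing in for the OVSDB socket:

    import os
    import select

    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)

    os.write(w, b"update")                 # e.g. an OVSDB notification arriving
    for fd, events in poller.poll(1000):   # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # mirrors __log_wakeup in the journal
            os.read(fd, 4096)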
Oct 02 19:45:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:09 compute-0 ceph-mon[191910]: pgmap v1201: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:10 compute-0 podman[421118]: 2025-10-02 19:45:10.688949808 +0000 UTC m=+0.112883308 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
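[Editor's note] The health_status=healthy events above are podman periodically executing the container's configured healthcheck command ('test': '/openstack/healthcheck' mounted from /var/lib/openstack/healthchecks). The same check can be triggered on demand; exit code 0 means healthy. A sketch using the container name from the log:

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")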
Oct 02 19:45:11 compute-0 ceph-mon[191910]: pgmap v1202: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:12 compute-0 sshd-session[421114]: Connection closed by invalid user admin 139.19.117.129 port 51428 [preauth]
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002673989263853617 of space, bias 1.0, pg target 0.08021967791560852 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:45:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
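[Editor's note] The pg_autoscaler arithmetic above is reproducible: pg target = usage_ratio * bias * (target PGs per OSD * OSD count), then quantized to a power of two and clamped by pool minimums and change thresholds. The factor 300 below assumes the default mon_target_pg_per_osd=100 and a 3-OSD cluster, which this excerpt does not show directly; treat it as an inference from the logged values:

    pools = {  # name -> (fraction of space used, bias), copied from the log
        "vms":                (0.0002673989263853617, 1.0),
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    TARGET_PGS = 100 * 3  # mon_target_pg_per_osd * number of OSDs (assumed)

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * TARGET_PGS}")
    # vms reproduces the logged 0.08021967791560852 (up to float rounding),
    # which is then quantized/clamped to 32; .mgr gives 0.0021557... -> 1.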
Oct 02 19:45:13 compute-0 nova_compute[355794]: 2025-10-02 19:45:13.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 nova_compute[355794]: 2025-10-02 19:45:13.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ceph-mon[191910]: pgmap v1203: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:15 compute-0 ceph-mon[191910]: pgmap v1204: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 1 op/s
Oct 02 19:45:16 compute-0 podman[421136]: 2025-10-02 19:45:16.685008699 +0000 UTC m=+0.093789923 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:45:16 compute-0 podman[421137]: 2025-10-02 19:45:16.689106129 +0000 UTC m=+0.092791135 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:45:16 compute-0 ovn_controller[88435]: 2025-10-02T19:45:16Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:e8:fe 192.168.0.37
Oct 02 19:45:16 compute-0 ovn_controller[88435]: 2025-10-02T19:45:16Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:e8:fe 192.168.0.37
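[Editor's note] The two pinctrl lines show OVN answering DHCP for a VM locally (no dnsmasq): the guest's DISCOVER is met with an OFFER and its REQUEST with an ACK, both carrying the same MAC-to-IP binding. Schematically, with the binding taken from the log:

    BINDING = ("fa:16:3e:6b:e8:fe", "192.168.0.37")

    def dhcp_reply(msg, binding=BINDING):
        # Toy responder covering only the two steps visible above.
        mac, ip = binding
        if msg == "DISCOVER":
            return f"DHCPOFFER {mac} {ip}"
        if msg == "REQUEST":
            return f"DHCPACK {mac} {ip}"
        return None

    for m in ("DISCOVER", "REQUEST"):
        print(dhcp_reply(m))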
Oct 02 19:45:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 19:45:17 compute-0 ceph-mon[191910]: pgmap v1205: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 1 op/s
Oct 02 19:45:18 compute-0 nova_compute[355794]: 2025-10-02 19:45:18.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:18 compute-0 nova_compute[355794]: 2025-10-02 19:45:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 14 op/s
Oct 02 19:45:19 compute-0 ceph-mon[191910]: pgmap v1206: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 14 op/s
Oct 02 19:45:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:45:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792316997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:45:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:45:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792316997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:45:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 76 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 02 19:45:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1792316997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:45:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1792316997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
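[Editor's note] The audited commands are exactly what a librados client sends as JSON mon commands; here client.openstack (likely Cinder/Nova gathering capacity) issued "df" and "osd pool get-quota". The same calls through the python-rados binding, assuming python3-rados is installed and the conffile/keyring paths are available (both illustrative):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret if ret else json.loads(out))

    cluster.shutdown()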
Oct 02 19:45:22 compute-0 ceph-mon[191910]: pgmap v1207: 321 pgs: 321 active+clean; 76 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct 02 19:45:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 77 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Oct 02 19:45:22 compute-0 podman[421183]: 2025-10-02 19:45:22.723615409 +0000 UTC m=+0.127752009 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, version=9.4, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0)
Oct 02 19:45:22 compute-0 podman[421182]: 2025-10-02 19:45:22.755624943 +0000 UTC m=+0.166447403 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:45:23 compute-0 nova_compute[355794]: 2025-10-02 19:45:23.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:23 compute-0 nova_compute[355794]: 2025-10-02 19:45:23.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:24 compute-0 ceph-mon[191910]: pgmap v1208: 321 pgs: 321 active+clean; 77 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Oct 02 19:45:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 02 19:45:24 compute-0 podman[421222]: 2025-10-02 19:45:24.83443447 +0000 UTC m=+0.109545388 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 19:45:24 compute-0 podman[421223]: 2025-10-02 19:45:24.867934364 +0000 UTC m=+0.126845454 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 19:45:25 compute-0 podman[421256]: 2025-10-02 19:45:25.034558181 +0000 UTC m=+0.171024497 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:45:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:26 compute-0 ceph-mon[191910]: pgmap v1209: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 02 19:45:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 02 19:45:27 compute-0 ovn_controller[88435]: 2025-10-02T19:45:27Z|00034|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 19:45:27 compute-0 sudo[421280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:27 compute-0 sudo[421280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:27 compute-0 sudo[421280]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:27 compute-0 sudo[421305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:45:27 compute-0 sudo[421305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:27 compute-0 sudo[421305]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:27 compute-0 sudo[421330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:27 compute-0 sudo[421330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:27 compute-0 sudo[421330]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:27 compute-0 sudo[421355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:45:27 compute-0 sudo[421355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:28 compute-0 ceph-mon[191910]: pgmap v1210: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct 02 19:45:28 compute-0 nova_compute[355794]: 2025-10-02 19:45:28.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:28 compute-0 nova_compute[355794]: 2025-10-02 19:45:28.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:45:28 compute-0 podman[421446]: 2025-10-02 19:45:28.813137616 +0000 UTC m=+0.139911257 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:45:28 compute-0 podman[421446]: 2025-10-02 19:45:28.958507839 +0000 UTC m=+0.285281460 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:45:29 compute-0 podman[157186]: time="2025-10-02T19:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:45:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:45:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9039 "" "Go-http-client/1.1"
Oct 02 19:45:29 compute-0 podman[421565]: 2025-10-02 19:45:29.973127693 +0000 UTC m=+0.098339435 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:45:30 compute-0 podman[421564]: 2025-10-02 19:45:30.002599698 +0000 UTC m=+0.134181832 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Oct 02 19:45:30 compute-0 ceph-mon[191910]: pgmap v1211: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:45:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:30 compute-0 sudo[421355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:45:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:45:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:30 compute-0 sudo[421639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:30 compute-0 sudo[421639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:30 compute-0 sudo[421639]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:30 compute-0 sudo[421664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:45:30 compute-0 sudo[421664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:30 compute-0 sudo[421664]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:30 compute-0 sudo[421689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:30 compute-0 sudo[421689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:30 compute-0 sudo[421689]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 19:45:30 compute-0 sudo[421714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:45:30 compute-0 sudo[421714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:31 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:31 compute-0 openstack_network_exporter[372736]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:31 compute-0 openstack_network_exporter[372736]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:31 compute-0 openstack_network_exporter[372736]: ERROR   19:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:45:31 compute-0 openstack_network_exporter[372736]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:45:31 compute-0 openstack_network_exporter[372736]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:45:31 compute-0 sudo[421714]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0b1831bd-d51f-4213-9644-98fc5f1567d9 does not exist
Oct 02 19:45:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 12a26550-43a8-421d-9a40-a57d430a8983 does not exist
Oct 02 19:45:31 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 952be215-a66e-4211-9d98-7267d82c1602 does not exist
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:45:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:45:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:45:31 compute-0 sudo[421769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:31 compute-0 sudo[421769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:31 compute-0 sudo[421769]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:31 compute-0 sudo[421794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:45:31 compute-0 sudo[421794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:31 compute-0 sudo[421794]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:32 compute-0 sudo[421819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:32 compute-0 sudo[421819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:32 compute-0 sudo[421819]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:32 compute-0 ceph-mon[191910]: pgmap v1212: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:45:32 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:45:32 compute-0 sudo[421844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:45:32 compute-0 sudo[421844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:32.296 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:32.297 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:32.297 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 25 KiB/s wr, 5 op/s
Oct 02 19:45:32 compute-0 podman[421906]: 2025-10-02 19:45:32.855065446 +0000 UTC m=+0.086268009 container create 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:45:32 compute-0 podman[421906]: 2025-10-02 19:45:32.81888882 +0000 UTC m=+0.050091433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:32 compute-0 systemd[1]: Started libpod-conmon-3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2.scope.
Oct 02 19:45:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:33 compute-0 podman[421906]: 2025-10-02 19:45:33.015528916 +0000 UTC m=+0.246731499 container init 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:45:33 compute-0 podman[421906]: 2025-10-02 19:45:33.036552654 +0000 UTC m=+0.267755197 container start 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:45:33 compute-0 podman[421906]: 2025-10-02 19:45:33.042530565 +0000 UTC m=+0.273733148 container attach 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:45:33 compute-0 competent_chatelet[421922]: 167 167
Oct 02 19:45:33 compute-0 systemd[1]: libpod-3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2.scope: Deactivated successfully.
Oct 02 19:45:33 compute-0 podman[421906]: 2025-10-02 19:45:33.047209391 +0000 UTC m=+0.278411924 container died 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-190327fba7c8405b51d5ec9e3f44b4648396134e7942bc0ef45235b1d86866c6-merged.mount: Deactivated successfully.
Oct 02 19:45:33 compute-0 podman[421906]: 2025-10-02 19:45:33.109977746 +0000 UTC m=+0.341180289 container remove 3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:45:33 compute-0 systemd[1]: libpod-conmon-3cb653d5462de42c48cfb99fcb48b3d3f66eaa842bfb3ae30b444b391c572fb2.scope: Deactivated successfully.
Oct 02 19:45:33 compute-0 podman[421947]: 2025-10-02 19:45:33.357735522 +0000 UTC m=+0.086146556 container create b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:45:33 compute-0 podman[421947]: 2025-10-02 19:45:33.330042265 +0000 UTC m=+0.058453369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:33 compute-0 systemd[1]: Started libpod-conmon-b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f.scope.
Oct 02 19:45:33 compute-0 nova_compute[355794]: 2025-10-02 19:45:33.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:33 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:45:33 compute-0 nova_compute[355794]: 2025-10-02 19:45:33.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:33 compute-0 podman[421947]: 2025-10-02 19:45:33.518009748 +0000 UTC m=+0.246420862 container init b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:45:33 compute-0 podman[421947]: 2025-10-02 19:45:33.567890365 +0000 UTC m=+0.296301419 container start b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:45:33 compute-0 podman[421947]: 2025-10-02 19:45:33.575531741 +0000 UTC m=+0.303942765 container attach b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:45:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:45:34 compute-0 ceph-mon[191910]: pgmap v1213: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 25 KiB/s wr, 5 op/s
Oct 02 19:45:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 19:45:34 compute-0 strange_poincare[421963]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:45:34 compute-0 strange_poincare[421963]: --> relative data size: 1.0
Oct 02 19:45:34 compute-0 strange_poincare[421963]: --> All data devices are unavailable
Oct 02 19:45:34 compute-0 systemd[1]: libpod-b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f.scope: Deactivated successfully.
Oct 02 19:45:34 compute-0 systemd[1]: libpod-b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f.scope: Consumed 1.299s CPU time.
Oct 02 19:45:35 compute-0 podman[421993]: 2025-10-02 19:45:35.049070462 +0000 UTC m=+0.073225748 container died b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-244abec3dc3477099d8dfa6db3199cefde85efcb52735c76597446980fc67910-merged.mount: Deactivated successfully.
Oct 02 19:45:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:35 compute-0 podman[421993]: 2025-10-02 19:45:35.179656476 +0000 UTC m=+0.203811722 container remove b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_poincare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:45:35 compute-0 systemd[1]: libpod-conmon-b2021be22cafe32c0438cabf42e04e8a187393be8c3febeec6fd0fe583f4f21f.scope: Deactivated successfully.
Oct 02 19:45:35 compute-0 sudo[421844]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:35 compute-0 sudo[422009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:35 compute-0 sudo[422009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:35 compute-0 sudo[422009]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:35 compute-0 sudo[422034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:45:35 compute-0 sudo[422034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:35 compute-0 sudo[422034]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:35 compute-0 sudo[422059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:35 compute-0 sudo[422059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:35 compute-0 sudo[422059]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:35 compute-0 sudo[422084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:45:35 compute-0 sudo[422084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:36 compute-0 ceph-mon[191910]: pgmap v1214: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.39321378 +0000 UTC m=+0.077951085 container create 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.362755148 +0000 UTC m=+0.047492483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:36 compute-0 systemd[1]: Started libpod-conmon-09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df.scope.
Oct 02 19:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.534031961 +0000 UTC m=+0.218769286 container init 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.555751337 +0000 UTC m=+0.240488602 container start 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.560794333 +0000 UTC m=+0.245531648 container attach 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:45:36 compute-0 admiring_heyrovsky[422163]: 167 167
Oct 02 19:45:36 compute-0 systemd[1]: libpod-09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df.scope: Deactivated successfully.
Oct 02 19:45:36 compute-0 conmon[422163]: conmon 09ed139dab709f7c03ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df.scope/container/memory.events
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.570013472 +0000 UTC m=+0.254750737 container died 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 19:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2d44aaaf04333fbf8a05e058c63ec84105c165e4ceb45f229e44ed34f7f37a2-merged.mount: Deactivated successfully.
Oct 02 19:45:36 compute-0 podman[422148]: 2025-10-02 19:45:36.651793848 +0000 UTC m=+0.336531123 container remove 09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:45:36 compute-0 systemd[1]: libpod-conmon-09ed139dab709f7c03ffef90c2771945ce3984ad4c9ebeac78a199f7220d95df.scope: Deactivated successfully.
Oct 02 19:45:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:36 compute-0 podman[422186]: 2025-10-02 19:45:36.89713063 +0000 UTC m=+0.056577078 container create 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:45:36 compute-0 systemd[1]: Started libpod-conmon-4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7.scope.
Oct 02 19:45:36 compute-0 podman[422186]: 2025-10-02 19:45:36.876985336 +0000 UTC m=+0.036431584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea192918c0e648e90df7e2750b231932be01b39ad675350f5fc1e5a68b1047a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea192918c0e648e90df7e2750b231932be01b39ad675350f5fc1e5a68b1047a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea192918c0e648e90df7e2750b231932be01b39ad675350f5fc1e5a68b1047a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea192918c0e648e90df7e2750b231932be01b39ad675350f5fc1e5a68b1047a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:37 compute-0 podman[422186]: 2025-10-02 19:45:37.008988829 +0000 UTC m=+0.168435067 container init 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:45:37 compute-0 podman[422186]: 2025-10-02 19:45:37.020440318 +0000 UTC m=+0.179886536 container start 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:45:37 compute-0 podman[422186]: 2025-10-02 19:45:37.024789055 +0000 UTC m=+0.184235293 container attach 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:45:37 compute-0 determined_robinson[422203]: {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     "0": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "devices": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "/dev/loop3"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             ],
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_name": "ceph_lv0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_size": "21470642176",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "name": "ceph_lv0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "tags": {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_name": "ceph",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.crush_device_class": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.encrypted": "0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_id": "0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.vdo": "0"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             },
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "vg_name": "ceph_vg0"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         }
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     ],
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     "1": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "devices": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "/dev/loop4"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             ],
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_name": "ceph_lv1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_size": "21470642176",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "name": "ceph_lv1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "tags": {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_name": "ceph",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.crush_device_class": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.encrypted": "0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_id": "1",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.vdo": "0"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             },
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "vg_name": "ceph_vg1"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         }
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     ],
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     "2": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "devices": [
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "/dev/loop5"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             ],
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_name": "ceph_lv2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_size": "21470642176",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "name": "ceph_lv2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "tags": {
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.cluster_name": "ceph",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.crush_device_class": "",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.encrypted": "0",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osd_id": "2",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:                 "ceph.vdo": "0"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             },
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "type": "block",
Oct 02 19:45:37 compute-0 determined_robinson[422203]:             "vg_name": "ceph_vg2"
Oct 02 19:45:37 compute-0 determined_robinson[422203]:         }
Oct 02 19:45:37 compute-0 determined_robinson[422203]:     ]
Oct 02 19:45:37 compute-0 determined_robinson[422203]: }
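The JSON emitted by determined_robinson above is the host's `ceph-volume lvm list --format json` inventory: top-level keys are OSD ids, each mapping to the logical volume that backs the OSD, with its `devices`, `lv_path`, and `ceph.*` tags. A minimal sketch of pulling the OSD-to-device mapping out of a saved copy of that output (the file name lvm_list.json is an assumption, not from the log):

    import json

    # Structure copied from the log: {"0": [{"lv_path": ..., "devices": [...],
    # "tags": {"ceph.osd_fsid": ...}}], "1": [...], "2": [...]}
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")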
Oct 02 19:45:37 compute-0 systemd[1]: libpod-4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7.scope: Deactivated successfully.
Oct 02 19:45:37 compute-0 podman[422186]: 2025-10-02 19:45:37.956080461 +0000 UTC m=+1.115526749 container died 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea192918c0e648e90df7e2750b231932be01b39ad675350f5fc1e5a68b1047a6-merged.mount: Deactivated successfully.
Oct 02 19:45:38 compute-0 podman[422186]: 2025-10-02 19:45:38.091206558 +0000 UTC m=+1.250652806 container remove 4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_robinson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:45:38 compute-0 systemd[1]: libpod-conmon-4f980aa60a27a7adcb4f58a00a8b7d475a1bfd504b3ec80c1ceabb3ec52036b7.scope: Deactivated successfully.
Oct 02 19:45:38 compute-0 sudo[422084]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:38 compute-0 ceph-mon[191910]: pgmap v1215: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:38 compute-0 sudo[422225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:38 compute-0 sudo[422225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:38 compute-0 sudo[422225]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:38 compute-0 nova_compute[355794]: 2025-10-02 19:45:38.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:38 compute-0 sudo[422250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:45:38 compute-0 sudo[422250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:38 compute-0 sudo[422250]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:38 compute-0 nova_compute[355794]: 2025-10-02 19:45:38.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:38 compute-0 sudo[422275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:38 compute-0 sudo[422275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:38 compute-0 sudo[422275]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:38 compute-0 sudo[422300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:45:38 compute-0 sudo[422300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.336717234 +0000 UTC m=+0.077259176 container create 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:45:39 compute-0 ceph-mon[191910]: pgmap v1216: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.314805613 +0000 UTC m=+0.055347555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:39 compute-0 systemd[1]: Started libpod-conmon-42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff.scope.
Oct 02 19:45:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.488125171 +0000 UTC m=+0.228667143 container init 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.500795793 +0000 UTC m=+0.241337745 container start 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.508038008 +0000 UTC m=+0.248579980 container attach 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:45:39 compute-0 sweet_heisenberg[422380]: 167 167
Oct 02 19:45:39 compute-0 systemd[1]: libpod-42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff.scope: Deactivated successfully.
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.512674433 +0000 UTC m=+0.253216395 container died 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac174f8ff81239c6286185f26f194dbe12184460319be21dc8d79febef7669e7-merged.mount: Deactivated successfully.
Oct 02 19:45:39 compute-0 podman[422364]: 2025-10-02 19:45:39.591222653 +0000 UTC m=+0.331764605 container remove 42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:45:39 compute-0 systemd[1]: libpod-conmon-42c5aef9a3e82627a5e120d7954fd40311bd07f016716e4ab7a98c8e8dfc99ff.scope: Deactivated successfully.
Oct 02 19:45:39 compute-0 podman[422403]: 2025-10-02 19:45:39.836613816 +0000 UTC m=+0.079493276 container create f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:45:39 compute-0 podman[422403]: 2025-10-02 19:45:39.801574911 +0000 UTC m=+0.044454381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:45:39 compute-0 systemd[1]: Started libpod-conmon-f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c.scope.
Oct 02 19:45:39 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d549aa182b265d1669d212ac49191b07a438ff770a5ae277f0a1f739af6cb77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d549aa182b265d1669d212ac49191b07a438ff770a5ae277f0a1f739af6cb77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d549aa182b265d1669d212ac49191b07a438ff770a5ae277f0a1f739af6cb77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d549aa182b265d1669d212ac49191b07a438ff770a5ae277f0a1f739af6cb77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:40 compute-0 podman[422403]: 2025-10-02 19:45:40.028797864 +0000 UTC m=+0.271677334 container init f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 19:45:40 compute-0 podman[422403]: 2025-10-02 19:45:40.047019355 +0000 UTC m=+0.289898815 container start f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:45:40 compute-0 podman[422403]: 2025-10-02 19:45:40.054132387 +0000 UTC m=+0.297011847 container attach f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:45:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:41 compute-0 festive_cohen[422418]: {
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_id": 1,
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "type": "bluestore"
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     },
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_id": 2,
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "type": "bluestore"
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     },
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_id": 0,
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:45:41 compute-0 festive_cohen[422418]:         "type": "bluestore"
Oct 02 19:45:41 compute-0 festive_cohen[422418]:     }
Oct 02 19:45:41 compute-0 festive_cohen[422418]: }
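The festive_cohen output above is the matching `ceph-volume raw list --format json` view, requested by the cephadm call sudo'd at 19:45:38: here the top-level keys are OSD UUIDs and `device` is the device-mapper path to the same logical volumes. A hedged sketch of joining the two listings on osd_fsid/osd_uuid (both file names are assumptions):

    import json

    # lvm list is keyed by osd_id; raw list is keyed by osd_uuid, which equals
    # the ceph.osd_fsid tag in the lvm view (compare the two dumps above).
    lvm = json.load(open("lvm_list.json"))
    raw = json.load(open("raw_list.json"))

    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}

    for osd_uuid, entry in raw.items():
        lv = by_fsid.get(osd_uuid)
        print(f"osd.{entry['osd_id']} ({entry['type']}): raw {entry['device']}, "
              f"lv {lv['lv_path'] if lv else '<missing>'}")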
Oct 02 19:45:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:41.343 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:41 compute-0 nova_compute[355794]: 2025-10-02 19:45:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:41.346 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:45:41 compute-0 systemd[1]: libpod-f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c.scope: Deactivated successfully.
Oct 02 19:45:41 compute-0 systemd[1]: libpod-f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c.scope: Consumed 1.309s CPU time.
Oct 02 19:45:41 compute-0 podman[422403]: 2025-10-02 19:45:41.362778807 +0000 UTC m=+1.605658267 container died f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d549aa182b265d1669d212ac49191b07a438ff770a5ae277f0a1f739af6cb77-merged.mount: Deactivated successfully.
Oct 02 19:45:41 compute-0 podman[422403]: 2025-10-02 19:45:41.476613539 +0000 UTC m=+1.719492999 container remove f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cohen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 19:45:41 compute-0 systemd[1]: libpod-conmon-f9a49287b3eefa94556d6b951c6a40fb1a670d9f22cc3ba192445872b423d26c.scope: Deactivated successfully.
Oct 02 19:45:41 compute-0 sudo[422300]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:45:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:45:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 786a7745-ea6e-4019-b5bb-d7ef10b509c6 does not exist
Oct 02 19:45:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7a650515-0e46-4fdb-9628-00564a166861 does not exist
Oct 02 19:45:41 compute-0 podman[422452]: 2025-10-02 19:45:41.562858197 +0000 UTC m=+0.148126519 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 19:45:41 compute-0 sudo[422481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:45:41 compute-0 sudo[422481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:41 compute-0 sudo[422481]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:41 compute-0 ceph-mon[191910]: pgmap v1217: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:45:41 compute-0 sudo[422506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:45:41 compute-0 sudo[422506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:45:41 compute-0 sudo[422506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:45:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:43 compute-0 nova_compute[355794]: 2025-10-02 19:45:43.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:43 compute-0 nova_compute[355794]: 2025-10-02 19:45:43.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:43 compute-0 ceph-mon[191910]: pgmap v1218: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:45 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:45.351 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:45 compute-0 ceph-mon[191910]: pgmap v1219: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.176 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.177 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.177 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.606 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.607 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.607 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:45:46 compute-0 nova_compute[355794]: 2025-10-02 19:45:46.607 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.827117) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346827252, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1399, "num_deletes": 251, "total_data_size": 2151946, "memory_usage": 2185840, "flush_reason": "Manual Compaction"}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346840924, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2108963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24142, "largest_seqno": 25540, "table_properties": {"data_size": 2102351, "index_size": 3748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13856, "raw_average_key_size": 19, "raw_value_size": 2089045, "raw_average_value_size": 3014, "num_data_blocks": 168, "num_entries": 693, "num_filter_entries": 693, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434210, "oldest_key_time": 1759434210, "file_creation_time": 1759434346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13874 microseconds, and 8211 cpu microseconds.
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.841009) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2108963 bytes OK
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.841036) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.843348) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.843366) EVENT_LOG_v1 {"time_micros": 1759434346843360, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.843453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2145730, prev total WAL file size 2145730, number of live WAL files 2.
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.844877) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2059KB)], [56(6917KB)]
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346844934, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9192842, "oldest_snapshot_seqno": -1}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4618 keys, 7456629 bytes, temperature: kUnknown
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346901667, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7456629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7425640, "index_size": 18335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115616, "raw_average_key_size": 25, "raw_value_size": 7341791, "raw_average_value_size": 1589, "num_data_blocks": 760, "num_entries": 4618, "num_filter_entries": 4618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.902895) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7456629 bytes
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.906106) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 131.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 5136, records dropped: 518 output_compression: NoCompression
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.906144) EVENT_LOG_v1 {"time_micros": 1759434346906127, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56803, "compaction_time_cpu_micros": 39030, "output_level": 6, "num_output_files": 1, "total_output_size": 7456629, "num_input_records": 5136, "num_output_records": 4618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346907455, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434346910595, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.844535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.910911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.910919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.910923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.910928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:45:46 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:45:46.910932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
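The compaction summary for job 30 above can be checked by hand: the job read the freshly flushed L0 table #58 (2108963 bytes) together with L6 table #56 (input_data_size 9192842 bytes in total) and wrote a single 7456629-byte L6 table in 56803 µs, so write-amplify ≈ 7456629 / 2108963 ≈ 3.5 and read-write-amplify ≈ (9192842 + 7456629) / 2108963 ≈ 7.9, matching the logged figures. A small sketch reproducing RocksDB's own summary line from the EVENT_LOG fields (all constants copied from the log):

    # Values from the job-30 compaction_started / compaction_finished events.
    input_l0 = 2108963       # table #58, the freshly flushed L0 file (bytes)
    input_total = 9192842    # input_data_size: L0 table #58 + L6 table #56
    output = 7456629         # total_output_size
    micros = 56803           # compaction_time_micros

    write_amp = output / input_l0                 # ~3.5
    rw_amp = (input_total + output) / input_l0    # ~7.9
    rd_mb_s = input_total / micros                # ~161.8 (bytes/us == MB/s)
    wr_mb_s = output / micros                     # ~131.3

    print(f"write-amplify={write_amp:.1f} rw-amplify={rw_amp:.1f} "
          f"rd={rd_mb_s:.1f} MB/s wr={wr_mb_s:.1f} MB/s")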
Oct 02 19:45:47 compute-0 podman[422532]: 2025-10-02 19:45:47.705495106 +0000 UTC m=+0.119456115 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:45:47 compute-0 podman[422531]: 2025-10-02 19:45:47.719489864 +0000 UTC m=+0.135589170 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:45:47 compute-0 ceph-mon[191910]: pgmap v1220: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.052 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.068 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.069 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
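The network_info blob that _heal_instance_info_cache writes above carries the port's fixed and floating addresses nested three levels deep. A sketch of walking that structure, assuming the logged list has been saved as network_info.json (file name hypothetical; key layout copied from the log line):

    import json

    vifs = json.load(open("network_info.json"))

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                line = f"port {vif['id']}: fixed {ip['address']}"
                if floats:
                    line += f", floating {','.join(floats)}"
                print(line)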
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.070 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.071 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.071 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.072 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.073 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.074 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.466 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.589 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.589 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.609 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.609 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.609 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.610 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.610 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.643 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:45:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.748 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.749 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.757 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.757 2 INFO nova.compute.claims [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:45:48 compute-0 nova_compute[355794]: 2025-10-02 19:45:48.859 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:45:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115277681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.147 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.224 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.224 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.224 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:45:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:45:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535497393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.460 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.473 2 DEBUG nova.compute.provider_tree [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.494 2 DEBUG nova.scheduler.client.report [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.526 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.528 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.713 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.713 2 DEBUG nova.network.neutron [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.733 2 INFO nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.786 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.787 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3977MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.788 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:49 compute-0 nova_compute[355794]: 2025-10-02 19:45:49.788 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:49 compute-0 ceph-mon[191910]: pgmap v1221: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:45:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3115277681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1535497393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.053 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.135 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.135 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.136 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.136 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:45:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.183 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.185 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.186 2 INFO nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Creating image(s)
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.245 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.301 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.361 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.372 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.446 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.480 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.482 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.483 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.484 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.530 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.540 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:45:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:45:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167818413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:50 compute-0 nova_compute[355794]: 2025-10-02 19:45:50.942 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.020 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.127 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] resizing rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.212 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.234 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.272 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.273 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.380 2 DEBUG nova.network.neutron [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Successfully updated port: c759e48d-48de-4316-a1e4-9c04eb965fd0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.394 2 DEBUG nova.objects.instance [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.404 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.405 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.405 2 DEBUG nova.network.neutron [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.460 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.506 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.515 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.561 2 DEBUG nova.compute.manager [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-changed-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.561 2 DEBUG nova.compute.manager [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Refreshing instance network info cache due to event network-changed-c759e48d-48de-4316-a1e4-9c04eb965fd0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.562 2 DEBUG oslo_concurrency.lockutils [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.614 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.615 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.616 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.616 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.660 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.669 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:51 compute-0 nova_compute[355794]: 2025-10-02 19:45:51.721 2 DEBUG nova.network.neutron [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:45:51 compute-0 ceph-mon[191910]: pgmap v1222: 321 pgs: 321 active+clean; 78 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:45:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4167818413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.277 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.518 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.519 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Ensure instance console log exists: /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.520 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.521 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:52 compute-0 nova_compute[355794]: 2025-10-02 19:45:52.522 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 90 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 637 KiB/s wr, 2 op/s
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.363 2 DEBUG nova.network.neutron [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.657 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.657 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Instance network_info: |[{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.658 2 DEBUG oslo_concurrency.lockutils [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.658 2 DEBUG nova.network.neutron [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Refreshing network info cache for port c759e48d-48de-4316-a1e4-9c04eb965fd0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.662 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Start _get_guest_xml network_info=[{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'ce28338d-119e-49e1-ab67-60da8882593a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encrypted': False, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.671 2 WARNING nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.683 2 DEBUG nova.virt.libvirt.host [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.684 2 DEBUG nova.virt.libvirt.host [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.688 2 DEBUG nova.virt.libvirt.host [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.689 2 DEBUG nova.virt.libvirt.host [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.689 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.689 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:43:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8f0521f8-dc4e-4ca1-bf77-f443ae74db03',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.690 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.690 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.690 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.690 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.691 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.691 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.691 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.691 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.691 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.692 2 DEBUG nova.virt.hardware [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:45:53 compute-0 nova_compute[355794]: 2025-10-02 19:45:53.694 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:53 compute-0 podman[422939]: 2025-10-02 19:45:53.720921533 +0000 UTC m=+0.139146437 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, release-0.7.12=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git)
Oct 02 19:45:53 compute-0 podman[422938]: 2025-10-02 19:45:53.728835377 +0000 UTC m=+0.148802188 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:45:53 compute-0 ceph-mon[191910]: pgmap v1223: 321 pgs: 321 active+clean; 90 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 637 KiB/s wr, 2 op/s
Oct 02 19:45:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:45:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685940081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:54 compute-0 nova_compute[355794]: 2025-10-02 19:45:54.205 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:54 compute-0 nova_compute[355794]: 2025-10-02 19:45:54.207 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:45:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1159126338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:54 compute-0 nova_compute[355794]: 2025-10-02 19:45:54.706 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Oct 02 19:45:54 compute-0 nova_compute[355794]: 2025-10-02 19:45:54.772 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:54 compute-0 nova_compute[355794]: 2025-10-02 19:45:54.786 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3685940081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1159126338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:45:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:45:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624596404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.288 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.291 2 DEBUG nova.virt.libvirt.vif [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',id=2,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-03sz3rjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:50Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:45:55 compute-0 nova_compute[355794]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=4cdbea11-17d2-4466-a5f5-9a3d25e25d8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.292 2 DEBUG nova.network.os_vif_util [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.293 2 DEBUG nova.network.os_vif_util [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.296 2 DEBUG nova.objects.instance [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.321 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <uuid>4cdbea11-17d2-4466-a5f5-9a3d25e25d8a</uuid>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <name>instance-00000002</name>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <memory>524288</memory>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <metadata>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:name>vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5</nova:name>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 19:45:53</nova:creationTime>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:flavor name="m1.small">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:memory>512</nova:memory>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:user uuid="811fb7ac717e4ba9b9874e5454ee08f4">admin</nova:user>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:project uuid="1c35486f37b94d43a7bf2f2fa09c70b9">admin</nova:project>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="ce28338d-119e-49e1-ab67-60da8882593a"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <nova:port uuid="c759e48d-48de-4316-a1e4-9c04eb965fd0">
Oct 02 19:45:55 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="192.168.0.227" ipVersion="4"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </metadata>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <system>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="serial">4cdbea11-17d2-4466-a5f5-9a3d25e25d8a</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="uuid">4cdbea11-17d2-4466-a5f5-9a3d25e25d8a</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </system>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <os>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </os>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <features>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <apic/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </features>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </clock>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </source>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.eph0">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </source>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </source>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:45:55 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:10:ab:29"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <target dev="tapc759e48d-48"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/console.log" append="off"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </serial>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <video>
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </video>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 19:45:55 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 19:45:55 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 19:45:55 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:45:55 compute-0 nova_compute[355794]: </domain>
Oct 02 19:45:55 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
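
The XML above is the guest definition nova hands to libvirt. A minimal sketch of defining and starting a domain from such XML with the libvirt python bindings, assuming a local qemu system socket and the XML saved to a file (hypothetical path); nova drives this through its libvirt driver rather than calling defineXML directly like this:

    import libvirt

    def define_and_start(xml_path: str) -> None:
        with open(xml_path) as f:
            xml = f.read()
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # boot the guest
        finally:
            conn.close()

    define_and_start("/tmp/instance-00000002.xml")  # hypothetical path
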
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.322 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Preparing to wait for external event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.323 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.323 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.323 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
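
The three lockutils records above show the acquire/release bracket around _create_or_get_event. The pattern is oslo.concurrency's named in-process lock; a minimal sketch of the same construct:

    from oslo_concurrency import lockutils

    # Serializes callers on the same per-instance event key, as in the log.
    @lockutils.synchronized("4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events")
    def _create_or_get_event():
        pass  # body elided; only the locking pattern is illustrated
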
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.325 2 DEBUG nova.virt.libvirt.vif [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',id=2,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-03sz3rjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:50Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4o
Oct 02 19:45:55 compute-0 nova_compute[355794]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=4cdbea11-17d2-4466-a5f5-9a3d25e25d8a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.325 2 DEBUG nova.network.os_vif_util [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.326 2 DEBUG nova.network.os_vif_util [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.327 2 DEBUG os_vif [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc759e48d-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc759e48d-48, col_values=(('external_ids', {'iface-id': 'c759e48d-48de-4316-a1e4-9c04eb965fd0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:ab:29', 'vm-uuid': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:55 compute-0 NetworkManager[44968]: <info>  [1759434355.3407] manager: (tapc759e48d-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.358 2 INFO os_vif [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48')
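
The two ovsdbapp transactions above (AddBridgeCommand was a no-op, "Transaction caused no change"; AddPortCommand plus DbSetCommand did the work) are equivalent to a single ovs-vsctl invocation. A sketch via subprocess, with all names taken from the logged transaction:

    import subprocess

    def plug_tap(bridge: str, tap: str, iface_id: str, mac: str, vm_uuid: str) -> None:
        # Add the port idempotently and set the external_ids OVN matches on.
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, tap,
             "--", "set", "Interface", tap,
             f'external_ids:iface-id="{iface_id}"',
             'external_ids:iface-status="active"',
             f'external_ids:attached-mac="{mac}"',
             f'external_ids:vm-uuid="{vm_uuid}"'],
            check=True,
        )

    plug_tap("br-int", "tapc759e48d-48",
             "c759e48d-48de-4316-a1e4-9c04eb965fd0",
             "fa:16:3e:10:ab:29",
             "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a")
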
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.432 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.432 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.433 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.433 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No VIF found with MAC fa:16:3e:10:ab:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.433 2 INFO nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Using config drive
Oct 02 19:45:55 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:45:55.291 2 DEBUG nova.virt.libvirt.vif [None req-6f2cfdf4-8f6a-43 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:45:55 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:45:55.325 2 DEBUG nova.virt.libvirt.vif [None req-6f2cfdf4-8f6a-43 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
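
The two rsyslogd warnings above mean the oversized nova VIF debug records exceeded rsyslog's configured 8096-byte limit and were truncated in any rsyslog-fed outputs (the journal itself kept them in full). One way to raise the limit, assuming rsyslog 8.x, is the MaxMessageSize global, which must appear early in the configuration:

    # /etc/rsyslog.conf -- set before any input module is loaded
    $MaxMessageSize 64k
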
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.486 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.556 2 DEBUG nova.network.neutron [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated VIF entry in instance network info cache for port c759e48d-48de-4316-a1e4-9c04eb965fd0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.557 2 DEBUG nova.network.neutron [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
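
The network_info structure cached above nests fixed and floating addresses under network.subnets[].ips[]. A minimal sketch of pulling both out of such a record:

    def addresses(network_info):
        # network_info is the list structure shown in the cache update above.
        out = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    out.append(("fixed", ip["address"]))
                    for fip in ip.get("floating_ips", []):
                        out.append(("floating", fip["address"]))
        return out

    # For the record above this yields:
    # [("fixed", "192.168.0.227"), ("floating", "192.168.122.174")]
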
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.575 2 DEBUG oslo_concurrency.lockutils [req-32c0dfe8-638c-471c-b1b9-51a75860fe39 req-5f7c1267-c9f9-4d12-ba48-d6f15cea3b5c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:55 compute-0 podman[423080]: 2025-10-02 19:45:55.715842875 +0000 UTC m=+0.119320641 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:45:55 compute-0 podman[423079]: 2025-10-02 19:45:55.728349302 +0000 UTC m=+0.134549712 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:45:55 compute-0 podman[423081]: 2025-10-02 19:45:55.769658907 +0000 UTC m=+0.172978409 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.773 2 INFO nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Creating config drive at /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.780 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xlqxj9d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:55 compute-0 ceph-mon[191910]: pgmap v1224: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Oct 02 19:45:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/624596404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.919 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xlqxj9d" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.963 2 DEBUG nova.storage.rbd_utils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:45:55 compute-0 nova_compute[355794]: 2025-10-02 19:45:55.975 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.340 2 DEBUG oslo_concurrency.processutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.342 2 INFO nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Deleting local config drive /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.config because it was imported into RBD.
Oct 02 19:45:56 compute-0 kernel: tapc759e48d-48: entered promiscuous mode
Oct 02 19:45:56 compute-0 ovn_controller[88435]: 2025-10-02T19:45:56Z|00035|binding|INFO|Claiming lport c759e48d-48de-4316-a1e4-9c04eb965fd0 for this chassis.
Oct 02 19:45:56 compute-0 ovn_controller[88435]: 2025-10-02T19:45:56Z|00036|binding|INFO|c759e48d-48de-4316-a1e4-9c04eb965fd0: Claiming fa:16:3e:10:ab:29 192.168.0.227
Oct 02 19:45:56 compute-0 NetworkManager[44968]: <info>  [1759434356.4614] manager: (tapc759e48d-48): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.475 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:ab:29 192.168.0.227'], port_security=['fa:16:3e:10:ab:29 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-cyepyxtlaijo-cznrkcgobntv-port-7t6l3urie5h2', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-cyepyxtlaijo-cznrkcgobntv-port-7t6l3urie5h2', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=c759e48d-48de-4316-a1e4-9c04eb965fd0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.478 285790 INFO neutron.agent.ovn.metadata.agent [-] Port c759e48d-48de-4316-a1e4-9c04eb965fd0 in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 bound to our chassis
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.481 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 ovn_controller[88435]: 2025-10-02T19:45:56Z|00037|binding|INFO|Setting lport c759e48d-48de-4316-a1e4-9c04eb965fd0 ovn-installed in OVS
Oct 02 19:45:56 compute-0 ovn_controller[88435]: 2025-10-02T19:45:56Z|00038|binding|INFO|Setting lport c759e48d-48de-4316-a1e4-9c04eb965fd0 up in Southbound
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.513 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0d63a8-0a71-41bd-937a-6cc3baf9ecb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 systemd-machined[137646]: New machine qemu-2-instance-00000002.
Oct 02 19:45:56 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 02 19:45:56 compute-0 systemd-udevd[423195]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.562 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf2bded-1476-4d53-9ee7-872b1135af9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.566 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[c84fccc9-f443-4cc8-aca0-b15f7ab91404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 NetworkManager[44968]: <info>  [1759434356.5779] device (tapc759e48d-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:45:56 compute-0 NetworkManager[44968]: <info>  [1759434356.5797] device (tapc759e48d-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.621 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[618fb83f-d0da-4b1c-89c3-f9b5d86f0973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.653 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cc358a-ceaa-4949-a3b5-bde1e32986d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 25030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423204, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.686 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3de58160-5a71-415f-a415-66dae93c027d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423206, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423206, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.691 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.697 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.698 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.698 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:56 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:45:56.699 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.778 2 DEBUG nova.compute.manager [req-1a303376-9f4c-47e2-a5e4-9b7f6192c79d req-b863bba3-92a8-4a91-bf05-ac29fbc3b319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.779 2 DEBUG oslo_concurrency.lockutils [req-1a303376-9f4c-47e2-a5e4-9b7f6192c79d req-b863bba3-92a8-4a91-bf05-ac29fbc3b319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.781 2 DEBUG oslo_concurrency.lockutils [req-1a303376-9f4c-47e2-a5e4-9b7f6192c79d req-b863bba3-92a8-4a91-bf05-ac29fbc3b319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.782 2 DEBUG oslo_concurrency.lockutils [req-1a303376-9f4c-47e2-a5e4-9b7f6192c79d req-b863bba3-92a8-4a91-bf05-ac29fbc3b319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:56 compute-0 nova_compute[355794]: 2025-10-02 19:45:56.782 2 DEBUG nova.compute.manager [req-1a303376-9f4c-47e2-a5e4-9b7f6192c79d req-b863bba3-92a8-4a91-bf05-ac29fbc3b319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Processing event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:45:57 compute-0 nova_compute[355794]: 2025-10-02 19:45:57.268 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:57 compute-0 ceph-mon[191910]: pgmap v1225: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.122 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.125 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434358.1248043, 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.126 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] VM Started (Lifecycle Event)
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.130 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.140 2 INFO nova.virt.libvirt.driver [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Instance spawned successfully.
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.142 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.149 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.163 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.176 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.177 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.178 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.179 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.181 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.184 2 DEBUG nova.virt.libvirt.driver [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.199 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.201 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434358.1250274, 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.202 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] VM Paused (Lifecycle Event)
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.242 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.248 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434358.1335688, 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.249 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] VM Resumed (Lifecycle Event)
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.272 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.277 2 INFO nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Took 8.09 seconds to spawn the instance on the hypervisor.
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.278 2 DEBUG nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.281 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.312 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.346 2 INFO nova.compute.manager [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Took 9.64 seconds to build instance.
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.361 2 DEBUG oslo_concurrency.lockutils [None req-6f2cfdf4-8f6a-43e5-be5f-3320c622fb78 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.908 2 DEBUG nova.compute.manager [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.909 2 DEBUG oslo_concurrency.lockutils [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.909 2 DEBUG oslo_concurrency.lockutils [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.910 2 DEBUG oslo_concurrency.lockutils [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.910 2 DEBUG nova.compute.manager [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] No waiting events found dispatching network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:45:58 compute-0 nova_compute[355794]: 2025-10-02 19:45:58.910 2 WARNING nova.compute.manager [req-cdf10378-51d6-4eb8-afd1-9c842bff7f90 req-e87f27a0-4b68-478f-9f67-16d923ecb12e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received unexpected event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 for instance with vm_state active and task_state None.
Oct 02 19:45:59 compute-0 podman[157186]: time="2025-10-02T19:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:45:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:45:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9042 "" "Go-http-client/1.1"
Oct 02 19:45:59 compute-0 ceph-mon[191910]: pgmap v1226: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct 02 19:46:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:00 compute-0 nova_compute[355794]: 2025-10-02 19:46:00.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:00 compute-0 podman[423269]: 2025-10-02 19:46:00.684087897 +0000 UTC m=+0.111665065 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:46:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Oct 02 19:46:00 compute-0 podman[423268]: 2025-10-02 19:46:00.730626433 +0000 UTC m=+0.153513814 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=)
Oct 02 19:46:01 compute-0 openstack_network_exporter[372736]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:01 compute-0 openstack_network_exporter[372736]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:01 compute-0 openstack_network_exporter[372736]: ERROR   19:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:46:01 compute-0 openstack_network_exporter[372736]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:46:01 compute-0 openstack_network_exporter[372736]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:46:01 compute-0 anacron[143512]: Job `cron.weekly' started
Oct 02 19:46:01 compute-0 anacron[143512]: Job `cron.weekly' terminated
Oct 02 19:46:01 compute-0 ceph-mon[191910]: pgmap v1227: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Oct 02 19:46:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 658 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct 02 19:46:03 compute-0 nova_compute[355794]: 2025-10-02 19:46:03.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:46:03
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:03 compute-0 ceph-mon[191910]: pgmap v1228: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 658 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:46:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 774 KiB/s wr, 95 op/s
Oct 02 19:46:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:05 compute-0 nova_compute[355794]: 2025-10-02 19:46:05.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:05 compute-0 ceph-mon[191910]: pgmap v1229: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 774 KiB/s wr, 95 op/s
Oct 02 19:46:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 30 KiB/s wr, 63 op/s
Oct 02 19:46:07 compute-0 ceph-mon[191910]: pgmap v1230: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 30 KiB/s wr, 63 op/s
Oct 02 19:46:08 compute-0 nova_compute[355794]: 2025-10-02 19:46:08.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Oct 02 19:46:10 compute-0 ceph-mon[191910]: pgmap v1231: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Oct 02 19:46:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:10 compute-0 nova_compute[355794]: 2025-10-02 19:46:10.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Oct 02 19:46:12 compute-0 ceph-mon[191910]: pgmap v1232: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Oct 02 19:46:12 compute-0 podman[423313]: 2025-10-02 19:46:12.711257366 +0000 UTC m=+0.135463467 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 47 op/s
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008202565401771179 of space, bias 1.0, pg target 0.24607696205313537 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:46:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:46:13 compute-0 nova_compute[355794]: 2025-10-02 19:46:13.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:14 compute-0 ceph-mon[191910]: pgmap v1233: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 47 op/s
Oct 02 19:46:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 887 KiB/s rd, 28 op/s
Oct 02 19:46:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:15 compute-0 nova_compute[355794]: 2025-10-02 19:46:15.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:16 compute-0 ceph-mon[191910]: pgmap v1234: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 887 KiB/s rd, 28 op/s
Oct 02 19:46:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 op/s
Oct 02 19:46:18 compute-0 ceph-mon[191910]: pgmap v1235: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 op/s
Oct 02 19:46:18 compute-0 nova_compute[355794]: 2025-10-02 19:46:18.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:18 compute-0 podman[423332]: 2025-10-02 19:46:18.708524061 +0000 UTC m=+0.127409089 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:46:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:46:18 compute-0 podman[423333]: 2025-10-02 19:46:18.750599207 +0000 UTC m=+0.169526127 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Oct 02 19:46:20 compute-0 ceph-mon[191910]: pgmap v1236: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:46:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:46:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1162983528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:46:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:46:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1162983528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:46:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:20 compute-0 nova_compute[355794]: 2025-10-02 19:46:20.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:46:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1162983528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:46:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1162983528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:46:22 compute-0 ceph-mon[191910]: pgmap v1237: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:46:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:23 compute-0 nova_compute[355794]: 2025-10-02 19:46:23.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:24 compute-0 ceph-mon[191910]: pgmap v1238: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:24 compute-0 podman[423378]: 2025-10-02 19:46:24.686453703 +0000 UTC m=+0.103287298 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=edpm, distribution-scope=public)
Oct 02 19:46:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:24 compute-0 podman[423377]: 2025-10-02 19:46:24.741037597 +0000 UTC m=+0.157559774 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:46:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:25 compute-0 nova_compute[355794]: 2025-10-02 19:46:25.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:26 compute-0 ceph-mon[191910]: pgmap v1239: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:26 compute-0 ovn_controller[88435]: 2025-10-02T19:46:26Z|00039|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Oct 02 19:46:26 compute-0 podman[423414]: 2025-10-02 19:46:26.689437254 +0000 UTC m=+0.107963915 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:46:26 compute-0 podman[423415]: 2025-10-02 19:46:26.724588372 +0000 UTC m=+0.133995897 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:46:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:26 compute-0 podman[423416]: 2025-10-02 19:46:26.758776534 +0000 UTC m=+0.156587036 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:46:28 compute-0 ceph-mon[191910]: pgmap v1240: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:28 compute-0 nova_compute[355794]: 2025-10-02 19:46:28.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:29 compute-0 podman[157186]: time="2025-10-02T19:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:46:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:46:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9041 "" "Go-http-client/1.1"
Oct 02 19:46:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:30 compute-0 ceph-mon[191910]: pgmap v1241: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:30 compute-0 nova_compute[355794]: 2025-10-02 19:46:30.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:31 compute-0 openstack_network_exporter[372736]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:31 compute-0 openstack_network_exporter[372736]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:31 compute-0 openstack_network_exporter[372736]: ERROR   19:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:46:31 compute-0 openstack_network_exporter[372736]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:46:31 compute-0 openstack_network_exporter[372736]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:46:31 compute-0 podman[423472]: 2025-10-02 19:46:31.725453624 +0000 UTC m=+0.145272142 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:46:31 compute-0 podman[423473]: 2025-10-02 19:46:31.727491409 +0000 UTC m=+0.127317907 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:46:32 compute-0 ceph-mon[191910]: pgmap v1242: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 02 19:46:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:46:32.296 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:46:32.297 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:46:32.298 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.3 KiB/s wr, 4 op/s
Oct 02 19:46:33 compute-0 nova_compute[355794]: 2025-10-02 19:46:33.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:46:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:46:33 compute-0 ovn_controller[88435]: 2025-10-02T19:46:33Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:ab:29 192.168.0.227
Oct 02 19:46:33 compute-0 ovn_controller[88435]: 2025-10-02T19:46:33Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:ab:29 192.168.0.227
Oct 02 19:46:34 compute-0 ceph-mon[191910]: pgmap v1243: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.3 KiB/s wr, 4 op/s
Oct 02 19:46:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 115 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 630 KiB/s wr, 20 op/s
Oct 02 19:46:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:35 compute-0 nova_compute[355794]: 2025-10-02 19:46:35.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:36 compute-0 ceph-mon[191910]: pgmap v1244: 321 pgs: 321 active+clean; 115 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 630 KiB/s wr, 20 op/s
Oct 02 19:46:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 122 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 1014 KiB/s wr, 33 op/s
Oct 02 19:46:37 compute-0 nova_compute[355794]: 2025-10-02 19:46:37.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:37 compute-0 nova_compute[355794]: 2025-10-02 19:46:37.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:46:37 compute-0 nova_compute[355794]: 2025-10-02 19:46:37.603 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:46:38 compute-0 ceph-mon[191910]: pgmap v1245: 321 pgs: 321 active+clean; 122 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 1014 KiB/s wr, 33 op/s
Oct 02 19:46:38 compute-0 nova_compute[355794]: 2025-10-02 19:46:38.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 131 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 1.4 MiB/s wr, 45 op/s
Oct 02 19:46:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:40 compute-0 ceph-mon[191910]: pgmap v1246: 321 pgs: 321 active+clean; 131 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 1.4 MiB/s wr, 45 op/s
Oct 02 19:46:40 compute-0 nova_compute[355794]: 2025-10-02 19:46:40.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:46:41 compute-0 sudo[423512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:42 compute-0 sudo[423512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:42 compute-0 sudo[423512]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:42 compute-0 sudo[423537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:46:42 compute-0 sudo[423537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:42 compute-0 sudo[423537]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:42 compute-0 ceph-mon[191910]: pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:46:42 compute-0 sudo[423562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:42 compute-0 sudo[423562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:42 compute-0 sudo[423562]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:42 compute-0 sudo[423587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:46:42 compute-0 sudo[423587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:46:43 compute-0 sudo[423587]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:43 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 64b8b0af-c3e1-4492-b485-26646c5b31c8 does not exist
Oct 02 19:46:43 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 33add72e-6a53-41f3-ba9b-b5516ab100e5 does not exist
Oct 02 19:46:43 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f7005453-03b0-4d13-8daf-7664c03ed559 does not exist
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:46:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:46:43 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:46:43 compute-0 sudo[423642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:43 compute-0 sudo[423642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:43 compute-0 sudo[423642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:43 compute-0 sudo[423671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:46:43 compute-0 sudo[423671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:43 compute-0 podman[423666]: 2025-10-02 19:46:43.489099922 +0000 UTC m=+0.117576249 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:46:43 compute-0 sudo[423671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:43 compute-0 nova_compute[355794]: 2025-10-02 19:46:43.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:43 compute-0 nova_compute[355794]: 2025-10-02 19:46:43.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:43 compute-0 nova_compute[355794]: 2025-10-02 19:46:43.605 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:46:43 compute-0 sudo[423712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:43 compute-0 sudo[423712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:43 compute-0 sudo[423712]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:43 compute-0 sudo[423737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:46:43 compute-0 sudo[423737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:44 compute-0 ceph-mon[191910]: pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.380200605 +0000 UTC m=+0.075965212 container create 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.34730202 +0000 UTC m=+0.043066727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:44 compute-0 systemd[1]: Started libpod-conmon-0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43.scope.
Oct 02 19:46:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.542654137 +0000 UTC m=+0.238418814 container init 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.561720614 +0000 UTC m=+0.257485251 container start 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.569230333 +0000 UTC m=+0.264994980 container attach 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:46:44 compute-0 naughty_mirzakhani[423818]: 167 167
Oct 02 19:46:44 compute-0 systemd[1]: libpod-0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43.scope: Deactivated successfully.
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.576578689 +0000 UTC m=+0.272343336 container died 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 19:46:44 compute-0 nova_compute[355794]: 2025-10-02 19:46:44.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df065537a796e9b54c1e63be14f835e2a962297fd780677b7ac1957ec891eb3-merged.mount: Deactivated successfully.
Oct 02 19:46:44 compute-0 podman[423801]: 2025-10-02 19:46:44.666298435 +0000 UTC m=+0.362063042 container remove 0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:46:44 compute-0 systemd[1]: libpod-conmon-0cbcacca87b142e10592414ab50969fc3c72aa5622eb74377fbd84352a8c6f43.scope: Deactivated successfully.
Oct 02 19:46:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:46:44 compute-0 podman[423844]: 2025-10-02 19:46:44.952662612 +0000 UTC m=+0.096575500 container create d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:46:45 compute-0 podman[423844]: 2025-10-02 19:46:44.924230336 +0000 UTC m=+0.068143224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:45 compute-0 systemd[1]: Started libpod-conmon-d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965.scope.
Oct 02 19:46:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:45 compute-0 podman[423844]: 2025-10-02 19:46:45.114466286 +0000 UTC m=+0.258379184 container init d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:46:45 compute-0 podman[423844]: 2025-10-02 19:46:45.138841465 +0000 UTC m=+0.282754363 container start d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:46:45 compute-0 podman[423844]: 2025-10-02 19:46:45.145890292 +0000 UTC m=+0.289803230 container attach d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 19:46:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:45 compute-0 nova_compute[355794]: 2025-10-02 19:46:45.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:45 compute-0 nova_compute[355794]: 2025-10-02 19:46:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:46 compute-0 ceph-mon[191910]: pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:46:46 compute-0 adoring_mccarthy[423860]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:46:46 compute-0 adoring_mccarthy[423860]: --> relative data size: 1.0
Oct 02 19:46:46 compute-0 adoring_mccarthy[423860]: --> All data devices are unavailable
Oct 02 19:46:46 compute-0 systemd[1]: libpod-d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965.scope: Deactivated successfully.
Oct 02 19:46:46 compute-0 systemd[1]: libpod-d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965.scope: Consumed 1.223s CPU time.
Oct 02 19:46:46 compute-0 podman[423889]: 2025-10-02 19:46:46.529032354 +0000 UTC m=+0.046768215 container died d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-06209123da550751727ee1adf697dd64da4982aca4d6e29cf24b4a0e17fe2717-merged.mount: Deactivated successfully.
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:46:46 compute-0 podman[423889]: 2025-10-02 19:46:46.632457215 +0000 UTC m=+0.150193046 container remove d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:46:46 compute-0 systemd[1]: libpod-conmon-d9fb5a7793d82a1296a34c3efc4cdddc6ea42ca0a6b7fe451de94ac783b45965.scope: Deactivated successfully.
Oct 02 19:46:46 compute-0 sudo[423737]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 889 KiB/s wr, 37 op/s
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.776 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.776 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.777 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:46:46 compute-0 nova_compute[355794]: 2025-10-02 19:46:46.777 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:46:46 compute-0 sudo[423903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:46 compute-0 sudo[423903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:46 compute-0 sudo[423903]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:46 compute-0 sudo[423928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:46:46 compute-0 sudo[423928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:46 compute-0 sudo[423928]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:47 compute-0 sudo[423953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:47 compute-0 sudo[423953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:47 compute-0 sudo[423953]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:47 compute-0 sudo[423978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:46:47 compute-0 sudo[423978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:47 compute-0 podman[424041]: 2025-10-02 19:46:47.895311848 +0000 UTC m=+0.093538699 container create 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:46:47 compute-0 podman[424041]: 2025-10-02 19:46:47.855281503 +0000 UTC m=+0.053508414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:47 compute-0 systemd[1]: Started libpod-conmon-022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d.scope.
Oct 02 19:46:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.063 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:48 compute-0 podman[424041]: 2025-10-02 19:46:48.077098664 +0000 UTC m=+0.275325525 container init 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.089 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.091 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.092 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.093 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:48 compute-0 podman[424041]: 2025-10-02 19:46:48.09651724 +0000 UTC m=+0.294744101 container start 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:46:48 compute-0 podman[424041]: 2025-10-02 19:46:48.105641833 +0000 UTC m=+0.303868734 container attach 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:46:48 compute-0 kind_kowalevski[424057]: 167 167
Oct 02 19:46:48 compute-0 systemd[1]: libpod-022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d.scope: Deactivated successfully.
Oct 02 19:46:48 compute-0 podman[424041]: 2025-10-02 19:46:48.111349475 +0000 UTC m=+0.309576316 container died 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee4d9caf334e12450fa947d43a5658e8c369b0469cc8b85eeefff3dbc659975-merged.mount: Deactivated successfully.
Oct 02 19:46:48 compute-0 podman[424041]: 2025-10-02 19:46:48.190769416 +0000 UTC m=+0.388996267 container remove 022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:46:48 compute-0 systemd[1]: libpod-conmon-022cafa49d49f69c96ddc2155b17fa9f608edb49135325abb04cc26bc443de1d.scope: Deactivated successfully.
Oct 02 19:46:48 compute-0 ceph-mon[191910]: pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 889 KiB/s wr, 37 op/s
Oct 02 19:46:48 compute-0 podman[424079]: 2025-10-02 19:46:48.430871263 +0000 UTC m=+0.069028817 container create 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:46:48 compute-0 systemd[1]: Started libpod-conmon-0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9.scope.
Oct 02 19:46:48 compute-0 podman[424079]: 2025-10-02 19:46:48.402715624 +0000 UTC m=+0.040873228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e8679190a0b49bfebf0ac7aa61f1d4b0d89396e5791dbfc6b137301f38c197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e8679190a0b49bfebf0ac7aa61f1d4b0d89396e5791dbfc6b137301f38c197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e8679190a0b49bfebf0ac7aa61f1d4b0d89396e5791dbfc6b137301f38c197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e8679190a0b49bfebf0ac7aa61f1d4b0d89396e5791dbfc6b137301f38c197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:48 compute-0 podman[424079]: 2025-10-02 19:46:48.545660307 +0000 UTC m=+0.183817881 container init 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 02 19:46:48 compute-0 podman[424079]: 2025-10-02 19:46:48.558729654 +0000 UTC m=+0.196887198 container start 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 02 19:46:48 compute-0 podman[424079]: 2025-10-02 19:46:48.562962297 +0000 UTC m=+0.201119871 container attach 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.598 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.598 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.599 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.599 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:46:48 compute-0 nova_compute[355794]: 2025-10-02 19:46:48.600 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 505 KiB/s wr, 24 op/s
Oct 02 19:46:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:46:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960313511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.085 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.237 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.237 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.238 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.243 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.244 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.244 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:46:49 compute-0 podman[424121]: 2025-10-02 19:46:49.266008843 +0000 UTC m=+0.107899640 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:46:49 compute-0 podman[424123]: 2025-10-02 19:46:49.270853162 +0000 UTC m=+0.106364419 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:46:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1960313511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:46:49 compute-0 practical_vaughan[424094]: {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     "0": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "devices": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "/dev/loop3"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             ],
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_name": "ceph_lv0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_size": "21470642176",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "name": "ceph_lv0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "tags": {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_name": "ceph",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.crush_device_class": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.encrypted": "0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_id": "0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.vdo": "0"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             },
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "vg_name": "ceph_vg0"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         }
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     ],
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     "1": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "devices": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "/dev/loop4"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             ],
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_name": "ceph_lv1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_size": "21470642176",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "name": "ceph_lv1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "tags": {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_name": "ceph",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.crush_device_class": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.encrypted": "0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_id": "1",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.vdo": "0"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             },
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "vg_name": "ceph_vg1"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         }
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     ],
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     "2": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "devices": [
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "/dev/loop5"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             ],
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_name": "ceph_lv2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_size": "21470642176",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "name": "ceph_lv2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "tags": {
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.cluster_name": "ceph",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.crush_device_class": "",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.encrypted": "0",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osd_id": "2",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:                 "ceph.vdo": "0"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             },
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "type": "block",
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:             "vg_name": "ceph_vg2"
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:         }
Oct 02 19:46:49 compute-0 practical_vaughan[424094]:     ]
Oct 02 19:46:49 compute-0 practical_vaughan[424094]: }
Oct 02 19:46:49 compute-0 systemd[1]: libpod-0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9.scope: Deactivated successfully.
Oct 02 19:46:49 compute-0 podman[424079]: 2025-10-02 19:46:49.341204313 +0000 UTC m=+0.979361917 container died 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-40e8679190a0b49bfebf0ac7aa61f1d4b0d89396e5791dbfc6b137301f38c197-merged.mount: Deactivated successfully.
Oct 02 19:46:49 compute-0 podman[424079]: 2025-10-02 19:46:49.409041327 +0000 UTC m=+1.047198871 container remove 0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 19:46:49 compute-0 systemd[1]: libpod-conmon-0f5e59119ca08fd0a5ec79e9afbeb8ddabef39db41e5ae0f9ef956c9e377d1e9.scope: Deactivated successfully.
Oct 02 19:46:49 compute-0 sudo[423978]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:49 compute-0 sudo[424177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:49 compute-0 sudo[424177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:49 compute-0 sudo[424177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:49 compute-0 sudo[424203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:46:49 compute-0 sudo[424203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:49 compute-0 sudo[424203]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.687 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.690 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3682MB free_disk=59.92204284667969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.691 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.692 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:49 compute-0 sudo[424228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:49 compute-0 sudo[424228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:49 compute-0 sudo[424228]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:49 compute-0 sudo[424253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:46:49 compute-0 sudo[424253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.966 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.966 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.966 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:46:49 compute-0 nova_compute[355794]: 2025-10-02 19:46:49.967 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:46:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.176 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:50 compute-0 ceph-mon[191910]: pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 505 KiB/s wr, 24 op/s
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.442101018 +0000 UTC m=+0.067206938 container create 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:46:50 compute-0 systemd[1]: Started libpod-conmon-8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9.scope.
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.409226924 +0000 UTC m=+0.034332864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.576305327 +0000 UTC m=+0.201411267 container init 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.594272735 +0000 UTC m=+0.219378655 container start 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.599640427 +0000 UTC m=+0.224746367 container attach 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:46:50 compute-0 compassionate_cartwright[424354]: 167 167
Oct 02 19:46:50 compute-0 systemd[1]: libpod-8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9.scope: Deactivated successfully.
Oct 02 19:46:50 compute-0 conmon[424354]: conmon 8ea5a8ea988ff6bdd1e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9.scope/container/memory.events
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.612128299 +0000 UTC m=+0.237234219 container died 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:46:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:46:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625373752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a1f2c38f92873d3e690ede40cf21980dacb05811c2c2ff60bc75a2c21fc92c-merged.mount: Deactivated successfully.
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.689 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.703 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:46:50 compute-0 podman[424338]: 2025-10-02 19:46:50.705360279 +0000 UTC m=+0.330466249 container remove 8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:46:50 compute-0 systemd[1]: libpod-conmon-8ea5a8ea988ff6bdd1e0d658fed08a27e641ac0bc4caf5cbe8c043ffe21a6ca9.scope: Deactivated successfully.
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.727 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:46:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 35 KiB/s wr, 12 op/s
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.750 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.751 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:50 compute-0 nova_compute[355794]: 2025-10-02 19:46:50.752 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:50 compute-0 podman[424378]: 2025-10-02 19:46:50.963656087 +0000 UTC m=+0.084071556 container create 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:46:51 compute-0 podman[424378]: 2025-10-02 19:46:50.930895076 +0000 UTC m=+0.051310555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:46:51 compute-0 systemd[1]: Started libpod-conmon-9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79.scope.
Oct 02 19:46:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a76f565efbf9ff4651889371e0e4f4120404163674d712bcec0c635fad19dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a76f565efbf9ff4651889371e0e4f4120404163674d712bcec0c635fad19dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a76f565efbf9ff4651889371e0e4f4120404163674d712bcec0c635fad19dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a76f565efbf9ff4651889371e0e4f4120404163674d712bcec0c635fad19dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:46:51 compute-0 podman[424378]: 2025-10-02 19:46:51.145533234 +0000 UTC m=+0.265948713 container init 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:46:51 compute-0 podman[424378]: 2025-10-02 19:46:51.173114747 +0000 UTC m=+0.293530216 container start 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:46:51 compute-0 podman[424378]: 2025-10-02 19:46:51.179530328 +0000 UTC m=+0.299945767 container attach 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:46:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3625373752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:46:51 compute-0 nova_compute[355794]: 2025-10-02 19:46:51.763 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]: {
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_id": 1,
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "type": "bluestore"
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     },
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_id": 2,
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "type": "bluestore"
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     },
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_id": 0,
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:         "type": "bluestore"
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]:     }
Oct 02 19:46:52 compute-0 thirsty_bohr[424394]: }
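The JSON block above is the OSD inventory the short-lived ceph container printed on stdout: a ceph-volume style listing keyed by OSD UUID, showing three BlueStore OSDs on LVM devices, all in cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9. A minimal sketch of consuming such output, with one entry abridged inline for self-containment:

    import json

    # One abridged entry from the inventory printed above; in practice the
    # payload would be captured from the container's stdout.
    raw = '''{
        "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
            "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
            "type": "bluestore"
        }
    }'''

    inventory = json.loads(raw)
    # Map osd_id -> backing device, e.g. {1: '/dev/mapper/ceph_vg1-ceph_lv1'}
    by_id = {osd["osd_id"]: osd["device"] for osd in inventory.values()}
    print(by_id)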
Oct 02 19:46:52 compute-0 systemd[1]: libpod-9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79.scope: Deactivated successfully.
Oct 02 19:46:52 compute-0 systemd[1]: libpod-9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79.scope: Consumed 1.129s CPU time.
Oct 02 19:46:52 compute-0 ceph-mon[191910]: pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 35 KiB/s wr, 12 op/s
Oct 02 19:46:52 compute-0 podman[424427]: 2025-10-02 19:46:52.387670064 +0000 UTC m=+0.043432196 container died 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-29a76f565efbf9ff4651889371e0e4f4120404163674d712bcec0c635fad19dd-merged.mount: Deactivated successfully.
Oct 02 19:46:52 compute-0 podman[424427]: 2025-10-02 19:46:52.480521623 +0000 UTC m=+0.136283725 container remove 9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:46:52 compute-0 systemd[1]: libpod-conmon-9f15b78178012f34e5e9b03925e05ca3a4a532bcbfb2ae4515d984004fe99f79.scope: Deactivated successfully.
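The preceding podman and systemd lines trace one complete lifecycle for that helper container: image pull, create, the conmon and libcrun scopes starting, init/start/attach, the JSON output, then died, the overlay unmount, remove, and finally the conmon scope deactivating. A hedged sketch for watching such lifecycles as they happen, assuming the podman CLI is available on the host; the container name is simply the one from this log:

    import json
    import subprocess

    # Stream podman lifecycle events as JSON lines and print the phases seen
    # above (create, init, start, attach, died, remove).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "container=thirsty_bohr"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        event = json.loads(line)
        print(event.get("Status"), event.get("ID", "")[:12], event.get("Name"))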
Oct 02 19:46:52 compute-0 sudo[424253]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:46:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:46:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4957089c-f1c4-4fec-9b17-941cecd23f98 does not exist
Oct 02 19:46:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 672a54b3-ca1d-4f18-9265-131e0b5bd3d5 does not exist
Oct 02 19:46:52 compute-0 sudo[424442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:46:52 compute-0 sudo[424442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:52 compute-0 sudo[424442]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:46:52 compute-0 sudo[424467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:46:52 compute-0 sudo[424467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:46:52 compute-0 sudo[424467]: pam_unix(sudo:session): session closed for user root
Oct 02 19:46:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:46:53 compute-0 ceph-mon[191910]: pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:46:53 compute-0 nova_compute[355794]: 2025-10-02 19:46:53.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 19:46:54 compute-0 podman[424493]: 2025-10-02 19:46:54.937273723 +0000 UTC m=+0.145849189 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:46:54 compute-0 podman[424492]: 2025-10-02 19:46:54.972228293 +0000 UTC m=+0.183718706 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Oct 02 19:46:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:46:55 compute-0 nova_compute[355794]: 2025-10-02 19:46:55.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 nova_compute[355794]: 2025-10-02 19:46:55.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:55 compute-0 nova_compute[355794]: 2025-10-02 19:46:55.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:46:55 compute-0 ceph-mon[191910]: pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 19:46:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 19:46:57 compute-0 podman[424531]: 2025-10-02 19:46:57.655310981 +0000 UTC m=+0.092951992 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:46:57 compute-0 podman[424532]: 2025-10-02 19:46:57.682256188 +0000 UTC m=+0.115943954 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true)
Oct 02 19:46:57 compute-0 podman[424533]: 2025-10-02 19:46:57.74325347 +0000 UTC m=+0.158937137 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:46:57 compute-0 ceph-mon[191910]: pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 19:46:58 compute-0 nova_compute[355794]: 2025-10-02 19:46:58.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:46:59 compute-0 podman[157186]: time="2025-10-02T19:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:46:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:46:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9034 "" "Go-http-client/1.1"
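These podman[157186] lines are the libpod REST API service access-logging a client (Go-http-client/1.1, presumably the monitoring tooling) listing all containers and sampling their stats. A minimal sketch of the same containers/json call over the API socket; the socket path /run/podman/podman.sock is an assumption for a rootful service, not something shown in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over a unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Id"][:12], c.get("Names"), c.get("State"))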
Oct 02 19:46:59 compute-0 ceph-mon[191910]: pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:00 compute-0 nova_compute[355794]: 2025-10-02 19:47:00.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:01 compute-0 openstack_network_exporter[372736]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:01 compute-0 openstack_network_exporter[372736]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:01 compute-0 openstack_network_exporter[372736]: ERROR   19:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:47:01 compute-0 openstack_network_exporter[372736]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:47:01 compute-0 openstack_network_exporter[372736]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
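The openstack_network_exporter errors above recur throughout this log: the exporter finds no control sockets for ovn-northd or ovsdb-server (plausible on a compute node, where ovn-northd typically runs on the controllers), and the dpif-netdev appctl calls fail because no userspace (PMD) datapath exists here. A hedged check for the sockets appctl looks for; the glob patterns are assumed typical OVS/OVN run directories, not paths taken from this log:

    import glob

    # appctl talks to a daemon through its *.ctl control socket; when none
    # match, you get exactly the "no control socket files found" error above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")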
Oct 02 19:47:01 compute-0 ceph-mon[191910]: pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:02 compute-0 podman[424592]: 2025-10-02 19:47:02.658890345 +0000 UTC m=+0.084793416 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:47:02 compute-0 podman[424591]: 2025-10-02 19:47:02.684571148 +0000 UTC m=+0.110711845 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible)
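Both health_status lines report healthy. The node_exporter config_data shows host networking with ports 9100:9100 and a web config file for TLS. A quick probe of the metrics endpoint, assuming plain HTTP is still served on 9100 (if node_exporter.yaml enforces TLS, this would need https and the deployed certificates instead):

    import urllib.request

    # Port 9100 comes from the 'ports': ['9100:9100'] entry above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)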
Oct 02 19:47:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:47:03
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'images', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:47:03 compute-0 nova_compute[355794]: 2025-10-02 19:47:03.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:03 compute-0 ceph-mon[191910]: pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.295 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.296 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.314 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.317 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:47:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.857 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 02 Oct 2025 19:47:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f536063c-aac7-477c-a8b8-a11b28ce38e9 x-openstack-request-id: req-f536063c-aac7-477c-a8b8-a11b28ce38e9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.857 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a", "name": "vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5", "status": "ACTIVE", "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "user_id": "811fb7ac717e4ba9b9874e5454ee08f4", "metadata": {"metering.server_group": "d2d7e2b0-01e0-44b1-b2c7-fe502b333743"}, "hostId": "0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d", "image": {"id": "ce28338d-119e-49e1-ab67-60da8882593a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce28338d-119e-49e1-ab67-60da8882593a"}]}, "flavor": {"id": "8f0521f8-dc4e-4ca1-bf77-f443ae74db03", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8f0521f8-dc4e-4ca1-bf77-f443ae74db03"}]}, "created": "2025-10-02T19:45:47Z", "updated": "2025-10-02T19:45:58Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.227", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:ab:29"}, {"version": 4, "addr": "192.168.122.174", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:ab:29"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:45:58.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.857 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a used request id req-f536063c-aac7-477c-a8b8-a11b28ce38e9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.860 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'name': 'vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
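For this second instance, ceilometer's discovery queried the Nova API and logged the full RESP BODY; the instance data it caches afterwards carries the metering.server_group key from the server's metadata, which is what lets samples be attributed to a server group. A small sketch of extracting the same fields, with the dict abridged from the RESP BODY above:

    # Abridged from the RESP BODY logged above.
    server = {
        "id": "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a",
        "name": "vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5",
        "metadata": {"metering.server_group": "d2d7e2b0-01e0-44b1-b2c7-fe502b333743"},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000002",
        "OS-EXT-STS:vm_state": "active",
    }

    print(server["id"],
          server["OS-EXT-SRV-ATTR:instance_name"],
          server["metadata"].get("metering.server_group"))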
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:47:04.861303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.945 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.946 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:04.947 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.024 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.026 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.026 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:47:05.030057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.066 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.067 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.068 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.107 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.108 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.109 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.114 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.115 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:47:05.114346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.117 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.118 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.120 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 41689088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.121 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.122 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.128 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:47:05.127301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.129 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.129 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.131 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 5980607081 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.131 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 36317650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.132 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.135 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.137 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:47:05.137485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.181 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.226 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.229 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.231 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:47:05.231057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.232 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.233 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.234 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.235 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.236 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.239 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.240 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.240 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:47:05.240799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.248 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 2200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.255 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a / tapc759e48d-48 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.255 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.255 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.256 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:47:05.256628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:47:05.257991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5>]
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.258 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:47:05.259211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.260 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:47:05.260458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:47:05.261800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.263 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.263 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.263 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:47:05.263101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.264 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:47:05.264490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.265 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.266 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:47:05.265805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.266 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.266 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.266 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.267 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.267 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:47:05.268188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.268 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.270 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:47:05.269797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.270 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.270 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.271 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.271 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.271 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5>]
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:47:05.272358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.273 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/memory.usage volume: 49.66015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:47:05.273424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.275 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:47:05.274695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.275 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:47:05.276321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.277 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.278 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.278 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.278 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.278 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:47:05.277789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.279 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.279 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.280 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:47:05.280329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.282 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.282 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:47:05.281996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.282 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.283 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.284 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 34840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.284 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/cpu volume: 35050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:47:05.283635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
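The cpu samples above are cumulative guest CPU time in nanoseconds (34840000000 ns is roughly 34.8 s for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77), one sample per instance, not a rate. Deriving a utilization figure takes two successive samples of the same instance; a hypothetical helper for that arithmetic, not ceilometer code:

    # Hypothetical derivation of CPU utilization from two cumulative cpu
    # samples of one instance; vcpus and the sample interval are inputs.
    def cpu_util_percent(ns_prev, ns_curr, interval_s, vcpus):
        return 100.0 * (ns_curr - ns_prev) / (interval_s * 1e9 * vcpus)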
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:47:05.285246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.286 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.286 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 1750462100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.286 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 323566119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.286 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 193343486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:47:05.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:05 compute-0 nova_compute[355794]: 2025-10-02 19:47:05.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
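The recurring nova_compute "[POLLIN] on fd 24" DEBUG lines come from the python-ovs poll loop (ovs/poller.py) underneath ovsdbapp: the OVSDB connection socket became readable and the IDL wakes to consume the update. A minimal sketch of that wait pattern, assuming the python-ovs package and an already-connected socket descriptor fd:

    import select
    import ovs.poller  # python-ovs, the library emitting the log line above

    def wait_readable(fd):
        poller = ovs.poller.Poller()
        poller.fd_wait(fd, select.POLLIN)  # registered interest -> "[POLLIN] on fd N"
        poller.block()                     # sleep until the fd (or a timer) fires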
Oct 02 19:47:05 compute-0 ceph-mon[191910]: pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:07 compute-0 ceph-mon[191910]: pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:08 compute-0 nova_compute[355794]: 2025-10-02 19:47:08.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:09 compute-0 ceph-mon[191910]: pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:10 compute-0 nova_compute[355794]: 2025-10-02 19:47:10.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:11 compute-0 ceph-mon[191910]: pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011041890505958194 of space, bias 1.0, pg target 0.33125671517874583 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:47:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
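Each pg_autoscaler "pg target" above is usage_fraction × cluster-wide PG target × bias, quantized subject to the pool's current and minimum pg_num (hence "quantized to 32 (current 32)" for most pools, and 16 for the CephFS metadata pool). The logged targets are consistent with a cluster-wide target of 300 PGs; 300 would follow from, say, mon_target_pg_per_osd=100 with 3 OSDs, which is an inference since the OSD count is not in these lines:

    # Cross-check of the pg_autoscaler arithmetic logged above; the
    # cluster-wide target of 300 PGs is an inference, not logged directly.
    CLUSTER_PG_TARGET = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),  # logged target 0.0021557249951162337
        ("vms",                0.0011041890505958194, 1.0),  # logged target 0.33125671517874583
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # logged target 0.0006104707950771635
    ]
    for name, used, bias in pools:
        print(name, used * CLUSTER_PG_TARGET * bias)  # reproduces the logged targets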
Oct 02 19:47:13 compute-0 nova_compute[355794]: 2025-10-02 19:47:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:13 compute-0 podman[424634]: 2025-10-02 19:47:13.704079062 +0000 UTC m=+0.123177767 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 19:47:13 compute-0 ceph-mon[191910]: pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:15 compute-0 nova_compute[355794]: 2025-10-02 19:47:15.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:15 compute-0 ceph-mon[191910]: pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:17 compute-0 ceph-mon[191910]: pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:18 compute-0 sshd-session[424653]: Connection closed by 220.154.129.88 port 34572
Oct 02 19:47:18 compute-0 nova_compute[355794]: 2025-10-02 19:47:18.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:19 compute-0 podman[424655]: 2025-10-02 19:47:19.700652165 +0000 UTC m=+0.123347471 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:47:19 compute-0 podman[424656]: 2025-10-02 19:47:19.720054411 +0000 UTC m=+0.134586190 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute)
Oct 02 19:47:19 compute-0 ceph-mon[191910]: pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:47:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:47:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324398794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:47:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:47:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324398794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:47:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:20 compute-0 nova_compute[355794]: 2025-10-02 19:47:20.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Oct 02 19:47:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1324398794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:47:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1324398794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
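The two audited monitor commands above ({"prefix":"df"} and {"prefix":"osd pool get-quota","pool":"volumes"}, from entity client.openstack) are the JSON payloads Ceph client libraries send for capacity and quota queries, consistent with an OpenStack storage service polling pool usage. A sketch with the python-rados binding, which sends exactly this JSON; the conffile path and an available client.openstack keyring are assumptions:

    import json
    import rados  # python-rados

    # mon_command() takes the same JSON the monitor logs in handle_command.
    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        capacity = json.loads(out)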
Oct 02 19:47:22 compute-0 ceph-mon[191910]: pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Oct 02 19:47:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Oct 02 19:47:23 compute-0 nova_compute[355794]: 2025-10-02 19:47:23.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:24 compute-0 ceph-mon[191910]: pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Oct 02 19:47:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Oct 02 19:47:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:25 compute-0 nova_compute[355794]: 2025-10-02 19:47:25.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:25 compute-0 podman[424697]: 2025-10-02 19:47:25.727207562 +0000 UTC m=+0.118350318 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0)
Oct 02 19:47:25 compute-0 podman[424696]: 2025-10-02 19:47:25.732416231 +0000 UTC m=+0.133837760 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 19:47:26 compute-0 ceph-mon[191910]: pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Oct 02 19:47:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:28 compute-0 ceph-mon[191910]: pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:28 compute-0 nova_compute[355794]: 2025-10-02 19:47:28.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:28 compute-0 podman[424736]: 2025-10-02 19:47:28.734772589 +0000 UTC m=+0.146723933 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:47:28 compute-0 podman[424737]: 2025-10-02 19:47:28.760050831 +0000 UTC m=+0.159011369 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:47:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:28 compute-0 podman[424738]: 2025-10-02 19:47:28.771628649 +0000 UTC m=+0.176708420 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 19:47:29 compute-0 podman[157186]: time="2025-10-02T19:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:47:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:47:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9029 "" "Go-http-client/1.1"
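The podman[157186] lines above are the podman system service answering libpod REST calls over /run/podman/podman.sock, the socket the podman_exporter container mounts per its config_data. A stdlib-only equivalent of the logged GET, with the socket path taken from that config and the shortened query string as an illustration:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host: header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())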
Oct 02 19:47:30 compute-0 ceph-mon[191910]: pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:30 compute-0 nova_compute[355794]: 2025-10-02 19:47:30.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:31 compute-0 openstack_network_exporter[372736]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:31 compute-0 openstack_network_exporter[372736]: ERROR   19:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:47:31 compute-0 openstack_network_exporter[372736]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:31 compute-0 openstack_network_exporter[372736]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:47:31 compute-0 openstack_network_exporter[372736]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:47:32 compute-0 ceph-mon[191910]: pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:47:32.298 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:47:32.298 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:47:32.299 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:33 compute-0 nova_compute[355794]: 2025-10-02 19:47:33.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:47:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:47:33 compute-0 podman[424796]: 2025-10-02 19:47:33.712731873 +0000 UTC m=+0.126595438 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9)
Oct 02 19:47:33 compute-0 podman[424797]: 2025-10-02 19:47:33.729237492 +0000 UTC m=+0.133632365 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:47:34 compute-0 ceph-mon[191910]: pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:35 compute-0 nova_compute[355794]: 2025-10-02 19:47:35.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:36 compute-0 ceph-mon[191910]: pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:38 compute-0 ceph-mon[191910]: pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 02 19:47:38 compute-0 nova_compute[355794]: 2025-10-02 19:47:38.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:40 compute-0 ceph-mon[191910]: pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:40 compute-0 nova_compute[355794]: 2025-10-02 19:47:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:42 compute-0 ceph-mon[191910]: pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:43 compute-0 nova_compute[355794]: 2025-10-02 19:47:43.595 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:43 compute-0 nova_compute[355794]: 2025-10-02 19:47:43.596 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:47:43 compute-0 nova_compute[355794]: 2025-10-02 19:47:43.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:44 compute-0 ceph-mon[191910]: pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:44 compute-0 podman[424837]: 2025-10-02 19:47:44.676823 +0000 UTC m=+0.110769076 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 19:47:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:45 compute-0 nova_compute[355794]: 2025-10-02 19:47:45.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:45 compute-0 nova_compute[355794]: 2025-10-02 19:47:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:45 compute-0 nova_compute[355794]: 2025-10-02 19:47:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:46 compute-0 ceph-mon[191910]: pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:46 compute-0 nova_compute[355794]: 2025-10-02 19:47:46.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:47 compute-0 nova_compute[355794]: 2025-10-02 19:47:47.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:47 compute-0 nova_compute[355794]: 2025-10-02 19:47:47.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:47:48 compute-0 nova_compute[355794]: 2025-10-02 19:47:48.060 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:47:48 compute-0 nova_compute[355794]: 2025-10-02 19:47:48.062 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:47:48 compute-0 nova_compute[355794]: 2025-10-02 19:47:48.063 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:47:48 compute-0 ceph-mon[191910]: pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:48 compute-0 nova_compute[355794]: 2025-10-02 19:47:48.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:50 compute-0 ceph-mon[191910]: pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.319 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.449 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.450 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.452 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.453 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.454 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.529 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.531 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.531 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.532 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:47:50 compute-0 nova_compute[355794]: 2025-10-02 19:47:50.533 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:50 compute-0 podman[424857]: 2025-10-02 19:47:50.686061357 +0000 UTC m=+0.098818749 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:47:50 compute-0 podman[424858]: 2025-10-02 19:47:50.692423606 +0000 UTC m=+0.094839693 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:47:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:47:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616856743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.023 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
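[annotation] The resource tracker audit above shells out to "ceph df --format=json" to size the RBD-backed disk pool. A minimal sketch of the same call, assuming the ceph CLI is on PATH and the same client id/conf; the JSON field names follow the common ceph df layout and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed top-level key
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))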
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.107 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.107 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.108 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.115 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.116 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.116 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:47:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3616856743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.551 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.552 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3755MB free_disk=59.92201232910156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.553 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.553 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.655 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.656 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.657 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.657 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:47:51 compute-0 nova_compute[355794]: 2025-10-02 19:47:51.727 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:47:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884535306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:47:52 compute-0 nova_compute[355794]: 2025-10-02 19:47:52.170 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:52 compute-0 nova_compute[355794]: 2025-10-02 19:47:52.181 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:47:52 compute-0 nova_compute[355794]: 2025-10-02 19:47:52.206 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:47:52 compute-0 nova_compute[355794]: 2025-10-02 19:47:52.210 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:47:52 compute-0 nova_compute[355794]: 2025-10-02 19:47:52.211 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:52 compute-0 ceph-mon[191910]: pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:47:52 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/884535306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:47:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:53 compute-0 sudo[424941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:53 compute-0 sudo[424941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:53 compute-0 sudo[424941]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:53 compute-0 sudo[424966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:47:53 compute-0 sudo[424966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:53 compute-0 sudo[424966]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:53 compute-0 sudo[424991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:53 compute-0 sudo[424991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:53 compute-0 sudo[424991]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:53 compute-0 sudo[425016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:47:53 compute-0 sudo[425016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:53 compute-0 nova_compute[355794]: 2025-10-02 19:47:53.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:54 compute-0 sudo[425016]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:47:54 compute-0 nova_compute[355794]: 2025-10-02 19:47:54.206 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:47:54 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e6d8093d-a5fe-4ca2-9ee9-34aef565cbc7 does not exist
Oct 02 19:47:54 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 748f00f8-7caa-4bba-ad94-21424f94d2d9 does not exist
Oct 02 19:47:54 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev eb8ecb6d-dacf-4f2d-ace8-eeb23f429386 does not exist
Oct 02 19:47:54 compute-0 ceph-mon[191910]: pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:47:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:47:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:47:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:47:54 compute-0 sudo[425071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:54 compute-0 sudo[425071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:54 compute-0 sudo[425071]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:54 compute-0 sudo[425096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:47:54 compute-0 sudo[425096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:54 compute-0 sudo[425096]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:54 compute-0 sudo[425121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:54 compute-0 sudo[425121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:54 compute-0 sudo[425121]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:54 compute-0 sudo[425146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:47:54 compute-0 sudo[425146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:47:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:47:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:47:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:47:55 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:47:55 compute-0 podman[425208]: 2025-10-02 19:47:55.382638359 +0000 UTC m=+0.081852878 container create 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:47:55 compute-0 podman[425208]: 2025-10-02 19:47:55.34808729 +0000 UTC m=+0.047301859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:47:55 compute-0 nova_compute[355794]: 2025-10-02 19:47:55.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:55 compute-0 systemd[1]: Started libpod-conmon-309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21.scope.
Oct 02 19:47:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:47:55 compute-0 podman[425208]: 2025-10-02 19:47:55.526260878 +0000 UTC m=+0.225475437 container init 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:47:55 compute-0 podman[425208]: 2025-10-02 19:47:55.538041281 +0000 UTC m=+0.237255810 container start 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:47:55 compute-0 podman[425208]: 2025-10-02 19:47:55.54326275 +0000 UTC m=+0.242477289 container attach 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:47:55 compute-0 festive_jones[425224]: 167 167
Oct 02 19:47:55 compute-0 systemd[1]: libpod-309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21.scope: Deactivated successfully.
Oct 02 19:47:55 compute-0 conmon[425224]: conmon 309585a8a66fd47a93bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21.scope/container/memory.events
Oct 02 19:47:55 compute-0 nova_compute[355794]: 2025-10-02 19:47:55.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:55 compute-0 podman[425229]: 2025-10-02 19:47:55.61732996 +0000 UTC m=+0.043489158 container died 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-484677b096397187532a5cb10548af6d27abea132e76aee2fd3376e1d097069f-merged.mount: Deactivated successfully.
Oct 02 19:47:55 compute-0 podman[425229]: 2025-10-02 19:47:55.694694967 +0000 UTC m=+0.120854105 container remove 309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:47:55 compute-0 systemd[1]: libpod-conmon-309585a8a66fd47a93bd5bdcd4f73f260b1352167bb07ca12865aa47ae387f21.scope: Deactivated successfully.
Oct 02 19:47:56 compute-0 podman[425251]: 2025-10-02 19:47:56.063201236 +0000 UTC m=+0.106621196 container create c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:47:56 compute-0 podman[425251]: 2025-10-02 19:47:56.032820999 +0000 UTC m=+0.076240969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:47:56 compute-0 systemd[1]: Started libpod-conmon-c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513.scope.
Oct 02 19:47:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:56 compute-0 ceph-mon[191910]: pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:56 compute-0 podman[425251]: 2025-10-02 19:47:56.264961741 +0000 UTC m=+0.308381691 container init c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:47:56 compute-0 podman[425251]: 2025-10-02 19:47:56.289633277 +0000 UTC m=+0.333053207 container start c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:47:56 compute-0 podman[425266]: 2025-10-02 19:47:56.291011013 +0000 UTC m=+0.136447958 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, config_id=edpm, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:47:56 compute-0 podman[425251]: 2025-10-02 19:47:56.294739932 +0000 UTC m=+0.338159882 container attach c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:47:56 compute-0 podman[425265]: 2025-10-02 19:47:56.302451658 +0000 UTC m=+0.153318797 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:47:56 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:47:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:57 compute-0 peaceful_bhabha[425285]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:47:57 compute-0 peaceful_bhabha[425285]: --> relative data size: 1.0
Oct 02 19:47:57 compute-0 peaceful_bhabha[425285]: --> All data devices are unavailable
Oct 02 19:47:57 compute-0 systemd[1]: libpod-c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513.scope: Deactivated successfully.
Oct 02 19:47:57 compute-0 podman[425251]: 2025-10-02 19:47:57.608154699 +0000 UTC m=+1.651574649 container died c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:47:57 compute-0 systemd[1]: libpod-c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513.scope: Consumed 1.240s CPU time.
Oct 02 19:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea778921a8520f267fcea7c4615cac2dc7e07f3f13862f326820b33ab817a121-merged.mount: Deactivated successfully.
Oct 02 19:47:57 compute-0 podman[425251]: 2025-10-02 19:47:57.69129798 +0000 UTC m=+1.734717920 container remove c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:47:57 compute-0 systemd[1]: libpod-conmon-c9381e966a0339aea3c110ff4db11d35cfbb5ebcbaa6d095d999bc0a49d7f513.scope: Deactivated successfully.
Oct 02 19:47:57 compute-0 sudo[425146]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:57 compute-0 sudo[425345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:57 compute-0 sudo[425345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:57 compute-0 sudo[425345]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:58 compute-0 sudo[425370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:47:58 compute-0 sudo[425370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:58 compute-0 sudo[425370]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:58 compute-0 sudo[425395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:47:58 compute-0 sudo[425395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:47:58 compute-0 sudo[425395]: pam_unix(sudo:session): session closed for user root
Oct 02 19:47:58 compute-0 ceph-mon[191910]: pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:58 compute-0 sudo[425420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:47:58 compute-0 sudo[425420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
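The sudo COMMAND above shows how cephadm gathers its OSD inventory: it re-executes its own copied binary under /var/lib/ceph/<fsid>/ and runs ceph-volume inside a one-shot container (the short-lived optimistic_wilbur and nostalgic_hugle containers that follow). A minimal Python sketch of the same call, using only the paths and arguments already present in this log; the error handling and the assumption that the JSON document is the only stdout are mine, not the orchestrator's code:

    #!/usr/bin/env python3
    # Sketch only: re-issue the ceph-volume inventory call that cephadm runs in
    # the sudo COMMAND above. FSID, image digest, and the copied cephadm path
    # are taken verbatim from the log; invoking it by hand like this (and
    # treating stdout as pure JSON) is an assumption.
    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))  # pretty-print the inventory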
Oct 02 19:47:58 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:47:58 compute-0 nova_compute[355794]: 2025-10-02 19:47:58.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:47:58 compute-0 podman[425487]: 2025-10-02 19:47:58.967917028 +0000 UTC m=+0.093943099 container create c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:58.929931438 +0000 UTC m=+0.055957519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:47:59 compute-0 systemd[1]: Started libpod-conmon-c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b.scope.
Oct 02 19:47:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:59.15075299 +0000 UTC m=+0.276779091 container init c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:59.164633839 +0000 UTC m=+0.290659900 container start c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:47:59 compute-0 optimistic_wilbur[425520]: 167 167
Oct 02 19:47:59 compute-0 systemd[1]: libpod-c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b.scope: Deactivated successfully.
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:59.174985795 +0000 UTC m=+0.301011906 container attach c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:59.176192207 +0000 UTC m=+0.302218318 container died c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9410a638f2234fd69f589e305aca547f584f3bdfb5d4d3eca7077ab47b489de1-merged.mount: Deactivated successfully.
Oct 02 19:47:59 compute-0 podman[425501]: 2025-10-02 19:47:59.218784829 +0000 UTC m=+0.150323428 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:47:59 compute-0 podman[425487]: 2025-10-02 19:47:59.246043004 +0000 UTC m=+0.372069065 container remove c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 19:47:59 compute-0 podman[425509]: 2025-10-02 19:47:59.257509369 +0000 UTC m=+0.169794636 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:47:59 compute-0 podman[425504]: 2025-10-02 19:47:59.259215165 +0000 UTC m=+0.182084134 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 19:47:59 compute-0 systemd[1]: libpod-conmon-c9890d3e687f1f6b2538038fd4c955a2a6c5712ce4e170dbdb34dcd952294b3b.scope: Deactivated successfully.
Oct 02 19:47:59 compute-0 podman[425586]: 2025-10-02 19:47:59.497816659 +0000 UTC m=+0.084939969 container create d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:47:59 compute-0 podman[425586]: 2025-10-02 19:47:59.464193345 +0000 UTC m=+0.051316655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:47:59 compute-0 systemd[1]: Started libpod-conmon-d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe.scope.
Oct 02 19:47:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97dc7023af5676d45f0c821666110a4ad853fb85ba44a059f02fb866fb3d746/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97dc7023af5676d45f0c821666110a4ad853fb85ba44a059f02fb866fb3d746/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97dc7023af5676d45f0c821666110a4ad853fb85ba44a059f02fb866fb3d746/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97dc7023af5676d45f0c821666110a4ad853fb85ba44a059f02fb866fb3d746/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:47:59 compute-0 podman[425586]: 2025-10-02 19:47:59.726266534 +0000 UTC m=+0.313389814 container init d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:47:59 compute-0 podman[425586]: 2025-10-02 19:47:59.737575545 +0000 UTC m=+0.324698825 container start d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 19:47:59 compute-0 podman[157186]: time="2025-10-02T19:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:47:59 compute-0 podman[425586]: 2025-10-02 19:47:59.75429782 +0000 UTC m=+0.341421170 container attach d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:47:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47836 "" "Go-http-client/1.1"
Oct 02 19:47:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9451 "" "Go-http-client/1.1"
Oct 02 19:48:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:00 compute-0 ceph-mon[191910]: pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:00 compute-0 nova_compute[355794]: 2025-10-02 19:48:00.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]: {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     "0": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "devices": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "/dev/loop3"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             ],
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_name": "ceph_lv0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_size": "21470642176",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "name": "ceph_lv0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "tags": {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_name": "ceph",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.crush_device_class": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.encrypted": "0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_id": "0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.vdo": "0"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             },
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "vg_name": "ceph_vg0"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         }
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     ],
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     "1": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "devices": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "/dev/loop4"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             ],
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_name": "ceph_lv1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_size": "21470642176",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "name": "ceph_lv1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "tags": {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_name": "ceph",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.crush_device_class": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.encrypted": "0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_id": "1",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.vdo": "0"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             },
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "vg_name": "ceph_vg1"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         }
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     ],
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     "2": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "devices": [
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "/dev/loop5"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             ],
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_name": "ceph_lv2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_size": "21470642176",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "name": "ceph_lv2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "tags": {
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.cluster_name": "ceph",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.crush_device_class": "",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.encrypted": "0",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osd_id": "2",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:                 "ceph.vdo": "0"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             },
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "type": "block",
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:             "vg_name": "ceph_vg2"
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:         }
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]:     ]
Oct 02 19:48:00 compute-0 nostalgic_hugle[425602]: }
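The block above is the full stdout of that `lvm list --format json` call: one top-level key per OSD id ("0", "1", "2"), each holding the logical volume that backs it. A small sketch, assuming the payload was saved to lvm_list.json (a hypothetical filename), that reduces it to an OSD-to-device summary:

    #!/usr/bin/env python3
    # Sketch only (not part of cephadm): summarize the `ceph-volume lvm list
    # --format json` payload captured above. The filename lvm_list.json is an
    # assumption; the key names match the JSON shown in the log.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)  # top-level keys are OSD ids: "0", "1", "2"

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:  # each entry describes one backing logical volume
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"cluster_fsid={tags['ceph.cluster_fsid']}")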
Oct 02 19:48:00 compute-0 systemd[1]: libpod-d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe.scope: Deactivated successfully.
Oct 02 19:48:00 compute-0 podman[425586]: 2025-10-02 19:48:00.691783429 +0000 UTC m=+1.278906709 container died d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d97dc7023af5676d45f0c821666110a4ad853fb85ba44a059f02fb866fb3d746-merged.mount: Deactivated successfully.
Oct 02 19:48:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:00 compute-0 podman[425586]: 2025-10-02 19:48:00.778416622 +0000 UTC m=+1.365539902 container remove d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:48:00 compute-0 systemd[1]: libpod-conmon-d0ceb51cb41ce6b43ce2e1aeec1b725cdc84c2368f538ae6ec1e341b9335cefe.scope: Deactivated successfully.
Oct 02 19:48:00 compute-0 sudo[425420]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:00 compute-0 sudo[425622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:48:00 compute-0 sudo[425622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:00 compute-0 sudo[425622]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:01 compute-0 sudo[425647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:48:01 compute-0 sudo[425647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:01 compute-0 sudo[425647]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:01 compute-0 sudo[425672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:48:01 compute-0 sudo[425672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:01 compute-0 sudo[425672]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:01 compute-0 openstack_network_exporter[372736]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:01 compute-0 openstack_network_exporter[372736]: ERROR   19:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:48:01 compute-0 openstack_network_exporter[372736]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:01 compute-0 openstack_network_exporter[372736]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:48:01 compute-0 openstack_network_exporter[372736]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:48:01 compute-0 sudo[425697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:48:01 compute-0 sudo[425697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.06891689 +0000 UTC m=+0.091240528 container create 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.03246626 +0000 UTC m=+0.054789978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:48:02 compute-0 systemd[1]: Started libpod-conmon-6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c.scope.
Oct 02 19:48:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.203584971 +0000 UTC m=+0.225908629 container init 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.213731871 +0000 UTC m=+0.236055539 container start 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.220661625 +0000 UTC m=+0.242985263 container attach 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:48:02 compute-0 cranky_spence[425774]: 167 167
Oct 02 19:48:02 compute-0 systemd[1]: libpod-6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c.scope: Deactivated successfully.
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.236495056 +0000 UTC m=+0.258818684 container died 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7edfb2e785cf242a1202b096bfff7013123d8d31921e5dd07ff395ce5acc2456-merged.mount: Deactivated successfully.
Oct 02 19:48:02 compute-0 ceph-mon[191910]: pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:02 compute-0 podman[425761]: 2025-10-02 19:48:02.312901888 +0000 UTC m=+0.335225536 container remove 6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:48:02 compute-0 systemd[1]: libpod-conmon-6280d443ec0d802ca2e9eb426c7b2089db8bf8961cab1c443a8d515f15a3ac6c.scope: Deactivated successfully.
Oct 02 19:48:02 compute-0 podman[425804]: 2025-10-02 19:48:02.545409751 +0000 UTC m=+0.089708277 container create 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:48:02 compute-0 systemd[1]: Started libpod-conmon-9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3.scope.
Oct 02 19:48:02 compute-0 podman[425804]: 2025-10-02 19:48:02.512181927 +0000 UTC m=+0.056480473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:48:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11feec22f1327e640a9bb955ec3cabe260c4e897c3d3eddbd81190f31c2f6d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11feec22f1327e640a9bb955ec3cabe260c4e897c3d3eddbd81190f31c2f6d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11feec22f1327e640a9bb955ec3cabe260c4e897c3d3eddbd81190f31c2f6d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11feec22f1327e640a9bb955ec3cabe260c4e897c3d3eddbd81190f31c2f6d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:48:02 compute-0 podman[425804]: 2025-10-02 19:48:02.703010632 +0000 UTC m=+0.247309198 container init 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:48:02 compute-0 podman[425804]: 2025-10-02 19:48:02.731066628 +0000 UTC m=+0.275365164 container start 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:48:02 compute-0 podman[425804]: 2025-10-02 19:48:02.739911753 +0000 UTC m=+0.284210299 container attach 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:48:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:48:03
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log', 'backups']
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:48:03 compute-0 nova_compute[355794]: 2025-10-02 19:48:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]: {
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_id": 1,
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "type": "bluestore"
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     },
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_id": 2,
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "type": "bluestore"
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     },
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_id": 0,
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:         "type": "bluestore"
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]:     }
Oct 02 19:48:03 compute-0 sweet_chatelet[425821]: }
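This second payload comes from the `ceph-volume ... raw list --format json` call at 19:48:01 and is keyed by osd_uuid rather than OSD id. A short consistency check, again under the assumption that both payloads were saved locally as raw_list.json and lvm_list.json (hypothetical filenames), confirming each raw-listed OSD matches an LVM entry:

    #!/usr/bin/env python3
    # Sketch only: cross-check the `raw list` output above against the earlier
    # `lvm list` output. Both JSON shapes appear verbatim in this log; the
    # filenames are assumptions.
    import json

    with open("raw_list.json") as f:
        raw = json.load(f)  # keyed by osd_uuid
    with open("lvm_list.json") as f:
        lvm = json.load(f)  # keyed by OSD id

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}

    for osd_uuid, info in raw.items():
        status = "ok" if osd_uuid in lvm_fsids else "MISSING from lvm list"
        print(f"osd.{info['osd_id']} ({info['type']}) "
              f"on {info['device']}: {status}")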
Oct 02 19:48:03 compute-0 systemd[1]: libpod-9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3.scope: Deactivated successfully.
Oct 02 19:48:03 compute-0 podman[425804]: 2025-10-02 19:48:03.972467778 +0000 UTC m=+1.516766394 container died 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:48:03 compute-0 systemd[1]: libpod-9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3.scope: Consumed 1.237s CPU time.
Oct 02 19:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11feec22f1327e640a9bb955ec3cabe260c4e897c3d3eddbd81190f31c2f6d5-merged.mount: Deactivated successfully.
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:48:04 compute-0 podman[425804]: 2025-10-02 19:48:04.086898681 +0000 UTC m=+1.631197207 container remove 9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:48:04 compute-0 systemd[1]: libpod-conmon-9c03e7e3f0942531be651ad0bed13ac19265d9365d9257719ee93d654f23cae3.scope: Deactivated successfully.
Oct 02 19:48:04 compute-0 podman[425862]: 2025-10-02 19:48:04.128499538 +0000 UTC m=+0.108665271 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:48:04 compute-0 sudo[425697]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:48:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:48:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:48:04 compute-0 podman[425855]: 2025-10-02 19:48:04.158948627 +0000 UTC m=+0.139092890 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc.)
Oct 02 19:48:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev dc7b3559-4080-4a03-bc6b-7e271f8b02ce does not exist
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5c4a11fa-88d2-440e-a02c-907385f587e1 does not exist
Oct 02 19:48:04 compute-0 sudo[425908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:48:04 compute-0 sudo[425908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:04 compute-0 sudo[425908]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:04 compute-0 ceph-mon[191910]: pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:48:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:48:04 compute-0 sudo[425933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:48:04 compute-0 sudo[425933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:48:04 compute-0 sudo[425933]: pam_unix(sudo:session): session closed for user root
Oct 02 19:48:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:05 compute-0 nova_compute[355794]: 2025-10-02 19:48:05.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:06 compute-0 ceph-mon[191910]: pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:07 compute-0 ceph-mon[191910]: pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:08 compute-0 nova_compute[355794]: 2025-10-02 19:48:08.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:09 compute-0 ceph-mon[191910]: pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:10 compute-0 nova_compute[355794]: 2025-10-02 19:48:10.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:11 compute-0 ceph-mon[191910]: pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011046977762583837 of space, bias 1.0, pg target 0.3314093328775151 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:48:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:48:13 compute-0 nova_compute[355794]: 2025-10-02 19:48:13.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:13 compute-0 ceph-mon[191910]: pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:48:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Cumulative writes: 5957 writes, 26K keys, 5957 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 5957 writes, 5957 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1357 writes, 6131 keys, 1357 commit groups, 1.0 writes per commit group, ingest: 8.80 MB, 0.01 MB/s
                                            Interval WAL: 1357 writes, 1357 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.2      0.31              0.13        15    0.021       0      0       0.0       0.0
                                              L6      1/0    7.11 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    128.7    104.3      0.95              0.51        14    0.068     63K   7813       0.0       0.0
                                             Sum      1/0    7.11 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     97.2    102.8      1.26              0.64        29    0.043     63K   7813       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5    103.5    104.7      0.37              0.20         8    0.046     20K   2554       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    128.7    104.3      0.95              0.51        14    0.068     63K   7813       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     99.4      0.30              0.13        14    0.022       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.030, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.3 seconds
                                            Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 308.00 MB usage: 13.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000168 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(853,12.95 MB,4.2055%) FilterBlock(30,183.92 KB,0.0583153%) IndexBlock(30,338.20 KB,0.107233%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Oct 02 19:48:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:14 compute-0 podman[425958]: 2025-10-02 19:48:14.898861772 +0000 UTC m=+0.148147130 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:48:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:15 compute-0 nova_compute[355794]: 2025-10-02 19:48:15.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:15 compute-0 ceph-mon[191910]: pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:17 compute-0 ceph-mon[191910]: pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:18 compute-0 nova_compute[355794]: 2025-10-02 19:48:18.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:19 compute-0 ceph-mon[191910]: pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:48:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533803670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:48:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:48:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3533803670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:48:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:20 compute-0 nova_compute[355794]: 2025-10-02 19:48:20.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3533803670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:48:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3533803670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:48:21 compute-0 podman[425978]: 2025-10-02 19:48:21.67944352 +0000 UTC m=+0.104657204 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:48:21 compute-0 podman[425979]: 2025-10-02 19:48:21.7110212 +0000 UTC m=+0.121345038 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:48:21 compute-0 ceph-mon[191910]: pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:23 compute-0 nova_compute[355794]: 2025-10-02 19:48:23.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:23 compute-0 ceph-mon[191910]: pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:25 compute-0 nova_compute[355794]: 2025-10-02 19:48:25.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:25 compute-0 ceph-mon[191910]: pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:26 compute-0 podman[426021]: 2025-10-02 19:48:26.707708142 +0000 UTC m=+0.129667169 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Oct 02 19:48:26 compute-0 podman[426020]: 2025-10-02 19:48:26.731747171 +0000 UTC m=+0.155136006 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:48:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:28 compute-0 ceph-mon[191910]: pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:28 compute-0 nova_compute[355794]: 2025-10-02 19:48:28.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:29 compute-0 podman[426056]: 2025-10-02 19:48:29.653558058 +0000 UTC m=+0.090376324 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:48:29 compute-0 podman[426057]: 2025-10-02 19:48:29.665766392 +0000 UTC m=+0.098463699 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:48:29 compute-0 podman[426058]: 2025-10-02 19:48:29.688669982 +0000 UTC m=+0.114965339 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:48:29 compute-0 podman[157186]: time="2025-10-02T19:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:48:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:48:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9040 "" "Go-http-client/1.1"
Oct 02 19:48:30 compute-0 ceph-mon[191910]: pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:30 compute-0 nova_compute[355794]: 2025-10-02 19:48:30.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: ERROR   19:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:48:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:48:32 compute-0 ceph-mon[191910]: pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:48:32.299 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:48:32.299 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:48:32.300 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:48:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:33 compute-0 nova_compute[355794]: 2025-10-02 19:48:33.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:48:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:48:34 compute-0 ceph-mon[191910]: pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:34 compute-0 podman[426119]: 2025-10-02 19:48:34.691917118 +0000 UTC m=+0.109040831 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:48:34 compute-0 podman[426118]: 2025-10-02 19:48:34.699181571 +0000 UTC m=+0.118304027 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:48:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:35 compute-0 nova_compute[355794]: 2025-10-02 19:48:35.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:36 compute-0 ceph-mon[191910]: pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:38 compute-0 ceph-mon[191910]: pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:38 compute-0 nova_compute[355794]: 2025-10-02 19:48:38.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:40 compute-0 ceph-mon[191910]: pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:40 compute-0 nova_compute[355794]: 2025-10-02 19:48:40.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:42 compute-0 ceph-mon[191910]: pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:43 compute-0 nova_compute[355794]: 2025-10-02 19:48:43.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:44 compute-0 ceph-mon[191910]: pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:45 compute-0 nova_compute[355794]: 2025-10-02 19:48:45.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:45 compute-0 nova_compute[355794]: 2025-10-02 19:48:45.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:45 compute-0 nova_compute[355794]: 2025-10-02 19:48:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:45 compute-0 nova_compute[355794]: 2025-10-02 19:48:45.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:48:45 compute-0 podman[426158]: 2025-10-02 19:48:45.726259563 +0000 UTC m=+0.142629614 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 19:48:46 compute-0 ceph-mon[191910]: pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:46 compute-0 nova_compute[355794]: 2025-10-02 19:48:46.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:47 compute-0 nova_compute[355794]: 2025-10-02 19:48:47.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:47 compute-0 nova_compute[355794]: 2025-10-02 19:48:47.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:48:47 compute-0 nova_compute[355794]: 2025-10-02 19:48:47.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:48:48 compute-0 nova_compute[355794]: 2025-10-02 19:48:48.076 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:48:48 compute-0 nova_compute[355794]: 2025-10-02 19:48:48.077 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:48:48 compute-0 nova_compute[355794]: 2025-10-02 19:48:48.079 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:48:48 compute-0 nova_compute[355794]: 2025-10-02 19:48:48.080 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
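Editor's note: "Lazy-loading 'info_cache'" means the Instance object was fetched without that field and obj_load_attr pulls it in on first access. A sketch of the lazy-attribute pattern follows; it illustrates the mechanism only and is not nova.objects code (the real loader queries the database).

    # Illustration of lazy attribute loading on first access.
    class InstanceSketch:
        def __init__(self, uuid):
            self.uuid = uuid

        def __getattr__(self, name):        # only called when attr is missing
            if name == "info_cache":
                print(f"Lazy-loading 'info_cache' on Instance uuid {self.uuid}")
                value = {"network_info": []}   # real code loads from the DB
                self.info_cache = value        # cache so we load only once
                return value
            raise AttributeError(name)

    inst = InstanceSketch("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77")
    inst.info_cache   # triggers the lazy load
    inst.info_cache   # second access is served from the cached attribute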
Oct 02 19:48:48 compute-0 ceph-mon[191910]: pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:48 compute-0 nova_compute[355794]: 2025-10-02 19:48:48.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:50 compute-0 ceph-mon[191910]: pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.261 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
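Editor's note: the instance_info_cache payload above is plain JSON and can be walked directly. The snippet below uses the values from that line verbatim, trimmed to the fields it touches, to pull the fixed and floating IPs per VIF.

    import json

    # network_info from the log line above, reduced to the fields used here.
    network_info = json.loads("""
    [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
      "address": "fa:16:3e:6b:e8:fe",
      "network": {"label": "private",
        "subnets": [{"cidr": "192.168.0.0/24",
          "ips": [{"address": "192.168.0.37", "type": "fixed",
                   "floating_ips": [{"address": "192.168.122.205",
                                     "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)
    # fa:16:3e:6b:e8:fe 192.168.0.37 -> ['192.168.122.205']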
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.317 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.319 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.320 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.321 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.323 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.365 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.367 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.368 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
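Editor's note: the acquire/release pair above reports how long the caller waited for the "compute_resources" lock and how long it held it. A minimal sketch of that "waited/held" bookkeeping follows, assuming a module-level lock table; it mimics lockutils' log format but is not the oslo.concurrency implementation.

    import time
    import threading
    from contextlib import contextmanager

    _locks = {"compute_resources": threading.Lock()}

    @contextmanager
    def timed_lock(name, caller):
        t0 = time.monotonic()
        _locks[name].acquire()
        waited = time.monotonic() - t0
        print(f'Lock "{name}" acquired by "{caller}" :: waited {waited:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            _locks[name].release()
            held = time.monotonic() - t1
            print(f'Lock "{name}" "released" by "{caller}" :: held {held:.3f}s')

    with timed_lock("compute_resources", "clean_compute_node_cache"):
        pass  # critical section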
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.369 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.370 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:48:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218912075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:48:50 compute-0 nova_compute[355794]: 2025-10-02 19:48:50.900 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
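Editor's note: nova shells out to the exact command logged above to size the RBD-backed disk pool. A sketch of that call and of reading the cluster totals follows; the "stats"/"total_bytes"/"total_avail_bytes" keys are the usual `ceph df --format=json` layout, but verify them against your Ceph release.

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30,
          "of", stats["total_bytes"] / 2**30)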
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.009 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.010 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.011 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.020 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.021 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.021 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:48:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3218912075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.587 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.590 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=59.92201232910156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
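Editor's note: the pci_devices field in the resource view above is a JSON list of passthrough-candidate devices. The snippet below, trimmed to three of the eleven entries logged, shows how such a list can be tallied by vendor (8086 = Intel, 1af4 = virtio).

    import json
    from collections import Counter

    pci_devices = json.loads("""
    [{"address": "0000:00:01.2", "vendor_id": "8086", "product_id": "7020"},
     {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
     {"address": "0000:00:02.0", "vendor_id": "1af4", "product_id": "1050"}]
    """)
    print(Counter(dev["vendor_id"] for dev in pci_devices))
    # Counter({'1af4': 2, '8086': 1})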
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.591 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.591 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.696 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.696 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.697 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.698 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
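Editor's note: the final resource view is consistent with the two placement allocations logged just above plus the 512 MB of reserved host memory shown in the inventory that follows. A quick arithmetic cross-check:

    # used_ram = reserved host memory + per-instance RAM:
    # 512 + 512 + 512 = 1536 MB, matching used_ram=1536MB above.
    reserved_mb = 512
    instances = [{"MEMORY_MB": 512, "DISK_GB": 2, "VCPU": 1},
                 {"MEMORY_MB": 512, "DISK_GB": 2, "VCPU": 1}]
    used_ram = reserved_mb + sum(i["MEMORY_MB"] for i in instances)
    used_disk = sum(i["DISK_GB"] for i in instances)
    used_vcpus = sum(i["VCPU"] for i in instances)
    assert (used_ram, used_disk, used_vcpus) == (1536, 4, 2)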
Oct 02 19:48:51 compute-0 nova_compute[355794]: 2025-10-02 19:48:51.752 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:52 compute-0 ceph-mon[191910]: pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:48:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858771493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.295 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.306 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.337 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
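Editor's note: the inventory above is what placement uses to compute schedulable capacity, with the usual formula capacity = (total - reserved) * allocation_ratio. Applied to the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2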
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.341 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.343 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.598 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:52 compute-0 nova_compute[355794]: 2025-10-02 19:48:52.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:52 compute-0 podman[426223]: 2025-10-02 19:48:52.724587652 +0000 UTC m=+0.141335110 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:48:52 compute-0 podman[426224]: 2025-10-02 19:48:52.719433915 +0000 UTC m=+0.137068976 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:48:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1858771493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:48:53 compute-0 nova_compute[355794]: 2025-10-02 19:48:53.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:54 compute-0 ceph-mon[191910]: pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:48:55 compute-0 nova_compute[355794]: 2025-10-02 19:48:55.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:56 compute-0 ceph-mon[191910]: pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:57 compute-0 podman[426261]: 2025-10-02 19:48:57.710146997 +0000 UTC m=+0.133989254 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:48:57 compute-0 podman[426262]: 2025-10-02 19:48:57.718832708 +0000 UTC m=+0.134737954 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, config_id=edpm, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Oct 02 19:48:58 compute-0 ceph-mon[191910]: pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:58 compute-0 nova_compute[355794]: 2025-10-02 19:48:58.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:48:59 compute-0 podman[157186]: time="2025-10-02T19:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:48:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:48:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9042 "" "Go-http-client/1.1"
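Editor's note: the two GET requests above are the libpod REST API being queried over the podman socket (the podman_exporter config earlier sets CONTAINER_HOST=unix:///run/podman/podman.sock). A standard-library sketch of the same call follows, assuming that default socket path and a trimmed query string.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over the podman API unix socket.
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")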
Oct 02 19:49:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:00 compute-0 ceph-mon[191910]: pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:00 compute-0 nova_compute[355794]: 2025-10-02 19:49:00.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:00 compute-0 podman[426298]: 2025-10-02 19:49:00.703621079 +0000 UTC m=+0.118592344 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:49:00 compute-0 podman[426299]: 2025-10-02 19:49:00.746685794 +0000 UTC m=+0.154335314 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:49:00 compute-0 podman[426300]: 2025-10-02 19:49:00.761510958 +0000 UTC m=+0.160190899 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 19:49:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:01 compute-0 openstack_network_exporter[372736]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:01 compute-0 openstack_network_exporter[372736]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:01 compute-0 openstack_network_exporter[372736]: ERROR   19:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:49:01 compute-0 openstack_network_exporter[372736]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:49:01 compute-0 openstack_network_exporter[372736]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
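Editor's note: the exporter errors above mean no ovn-northd or ovsdb-server control sockets were found, which is expected on a compute node where ovn-northd runs on the controllers, not locally. ovs-appctl-style tools locate daemons via <name>.<pid>.ctl files; a quick check for them follows, with the run directories below being the usual defaults and an assumption here.

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))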
Oct 02 19:49:02 compute-0 ceph-mon[191910]: pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:49:03
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:03 compute-0 nova_compute[355794]: 2025-10-02 19:49:03.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:49:04 compute-0 ceph-mon[191910]: pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.296 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.297 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3438cb5f40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.308 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.313 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'name': 'vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.315 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:49:04.315120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.384 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.385 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.398 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.504 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.505 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.505 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.507 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:49:04.508437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.545 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.546 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.547 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.572 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.572 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.573 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:49:04.574477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.576 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 41807872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.576 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.576 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:49:04.578032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.578 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.578 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.579 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.579 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 6231088971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.579 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 36317650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.580 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:49:04.581368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 sudo[426362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:04 compute-0 sudo[426362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:04 compute-0 sudo[426362]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.628 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.660 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.661 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:49:04.662264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.662 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.664 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.665 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.667 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.668 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.669 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.670 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:49:04.676143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.685 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.690 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:49:04.692944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.694 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.695 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.696 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:49:04.696752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.699 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:49:04.699543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.700 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.701 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.703 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:49:04.702765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.703 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.705 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:49:04.705909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.706 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes.delta volume: 2614 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.709 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:49:04.709081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.710 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:49:04.712263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.712 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.713 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.713 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.714 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.714 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.715 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:49:04.717501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.717 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.718 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes volume: 4760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:49:04.720820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.721 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.721 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.722 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.722 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.723 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.723 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.725 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.725 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.727 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:49:04.726712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.727 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.729 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.730 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes volume: 5149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:49:04.729669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.733 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:49:04.732893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.733 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:49:04.735620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.736 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.736 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.736 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.737 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.737 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
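disk.device.capacity emits one sample per attached device rather than per instance, which is why each instance above contributes three volumes (two 1 GiB disks plus one small device, presumably a config drive, judging by the byte counts). A sketch, using the sampled values, of rolling those per-device samples up per instance:

    from collections import defaultdict

    # (instance uuid, capacity in bytes), copied from the samples above.
    samples = [
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 1073741824),
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 1073741824),
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 485376),
        ("4cdbea11-17d2-4466-a5f5-9a3d25e25d8a", 1073741824),
        ("4cdbea11-17d2-4466-a5f5-9a3d25e25d8a", 1073741824),
        ("4cdbea11-17d2-4466-a5f5-9a3d25e25d8a", 583680),
    ]
    totals = defaultdict(int)
    for instance, volume in samples:
        totals[instance] += volume
    for instance, total in totals.items():
        print(instance, total, "bytes")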
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.738 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.739 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.739 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:49:04.739090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.739 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.741 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:49:04.741277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.741 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.742 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.743 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 36830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:49:04.743483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.744 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/cpu volume: 140010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
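The cpu meter sampled above is cumulative guest CPU time in nanoseconds, not a percentage; utilisation only falls out of differencing two consecutive samples. A sketch of that arithmetic (the earlier value, the 30 s interval, and the single vCPU below are hypothetical, chosen only to illustrate):

    def cpu_util_percent(ns_prev, ns_curr, interval_s, vcpus):
        # Fraction of available CPU time consumed over the interval.
        return (ns_curr - ns_prev) / (interval_s * vcpus * 1e9) * 100.0

    # e.g. instance 4cdbea11... read 140010000000 ns this pass; with a
    # hypothetical prior sample 30 s earlier on a 1-vCPU guest:
    print(cpu_util_percent(139_500_000_000, 140_010_000_000, 30, 1))  # ~1.7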
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.744 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.745 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.745 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:49:04.745618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.746 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.746 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.747 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 1764876744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.747 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 323566119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.747 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 193343486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:49:04.751 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:04 compute-0 sudo[426387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:49:04 compute-0 sudo[426387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:04 compute-0 sudo[426387]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:04 compute-0 sudo[426422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:04 compute-0 podman[426411]: 2025-10-02 19:49:04.86618401 +0000 UTC m=+0.095371887 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9)
Oct 02 19:49:04 compute-0 sudo[426422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:04 compute-0 sudo[426422]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:04 compute-0 podman[426412]: 2025-10-02 19:49:04.892344745 +0000 UTC m=+0.111572448 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
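The two podman health_status events above are timer-driven health checks for the telemetry exporters (openstack_network_exporter and node_exporter); the probe itself is embedded in each event's config_data. A sketch, assuming the event were exported as JSON, of pulling the probe definition back out:

    import json

    # Trimmed-down version of the node_exporter event above.
    event = {
        "name": "node_exporter",
        "health_status": "healthy",
        "health_failing_streak": 0,
        "config_data": {"healthcheck": {
            "test": "/openstack/healthcheck node_exporter",
            "mount": "/var/lib/openstack/healthchecks/node_exporter"}},
    }
    print(json.dumps(event["config_data"]["healthcheck"], indent=2))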
Oct 02 19:49:04 compute-0 sudo[426476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:49:04 compute-0 sudo[426476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:05 compute-0 nova_compute[355794]: 2025-10-02 19:49:05.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:05 compute-0 sudo[426476]: pam_unix(sudo:session): session closed for user root
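The sudo triad that just closed (/bin/which python3, /bin/true, then the copied cephadm binary with gather-facts) is cephadm's periodic host check-in, run as ceph-admin with escalation. A condensed local sketch of the same probe sequence (illustrative only; cephadm actually drives this over SSH, and the binary's content digest is abbreviated here):

    import subprocess

    # Locate the interpreter cephadm will hand its payload to.
    python3 = subprocess.run(["sudo", "which", "python3"],
                             capture_output=True, text=True).stdout.strip()
    # Cheap "can we escalate at all?" probe.
    subprocess.run(["sudo", "true"], check=True)
    # The actual check-in; path shortened from the journal line above.
    subprocess.run(["sudo", python3,
                    "/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.<digest>",
                    "--timeout", "895", "gather-facts"], check=False)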
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev adcd5107-d183-49e0-8858-7ba0eb338cee does not exist
Oct 02 19:49:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b53e94ba-e1b0-4b2d-9f40-15d4c1aa08f4 does not exist
Oct 02 19:49:05 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8175b17b-0807-4239-b5a8-91136f9c98ad does not exist
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:49:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:49:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
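The mon_command dispatches above (config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree filtered to destroyed OSDs) are the mgr's cephadm module assembling the minimal ceph.conf and bootstrap keyring it will feed ceph-volume, and checking for replaceable OSD slots. The same queries are available from the CLI; a sketch:

    import subprocess

    # CLI equivalents of the audited mgr-to-mon commands above.
    for cmd in (["ceph", "config", "generate-minimal-conf"],
                ["ceph", "auth", "get", "client.bootstrap-osd"],
                ["ceph", "osd", "tree", "destroyed", "--format", "json"]):
        subprocess.run(cmd, check=False)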
Oct 02 19:49:05 compute-0 sudo[426531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:05 compute-0 sudo[426531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:05 compute-0 sudo[426531]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:06 compute-0 sudo[426556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:49:06 compute-0 sudo[426556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:06 compute-0 sudo[426556]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:06 compute-0 sudo[426581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:06 compute-0 sudo[426581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:06 compute-0 sudo[426581]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:06 compute-0 sudo[426606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:49:06 compute-0 sudo[426606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:06 compute-0 ceph-mon[191910]: pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:49:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:49:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:06 compute-0 podman[426671]: 2025-10-02 19:49:06.855547371 +0000 UTC m=+0.082958547 container create c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:49:06 compute-0 podman[426671]: 2025-10-02 19:49:06.831487322 +0000 UTC m=+0.058898518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:06 compute-0 systemd[1]: Started libpod-conmon-c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1.scope.
Oct 02 19:49:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:07 compute-0 podman[426671]: 2025-10-02 19:49:07.017102037 +0000 UTC m=+0.244513283 container init c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:49:07 compute-0 podman[426671]: 2025-10-02 19:49:07.03749829 +0000 UTC m=+0.264909516 container start c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:49:07 compute-0 podman[426671]: 2025-10-02 19:49:07.045157674 +0000 UTC m=+0.272568930 container attach c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:49:07 compute-0 crazy_chandrasekhar[426687]: 167 167
Oct 02 19:49:07 compute-0 systemd[1]: libpod-c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1.scope: Deactivated successfully.
Oct 02 19:49:07 compute-0 podman[426671]: 2025-10-02 19:49:07.050963428 +0000 UTC m=+0.278374634 container died c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d134db877241335f4a10c5ebb1bebab5ebbac7b10bcf29fa3c396103fc5587-merged.mount: Deactivated successfully.
Oct 02 19:49:07 compute-0 podman[426671]: 2025-10-02 19:49:07.1375478 +0000 UTC m=+0.364959006 container remove c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 19:49:07 compute-0 systemd[1]: libpod-conmon-c3549c04bba1bdf6232d3741e962c8b22953298ab4d39baef900dd6085d172d1.scope: Deactivated successfully.
Oct 02 19:49:07 compute-0 podman[426710]: 2025-10-02 19:49:07.449003653 +0000 UTC m=+0.092448040 container create c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:49:07 compute-0 podman[426710]: 2025-10-02 19:49:07.403815931 +0000 UTC m=+0.047260368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:07 compute-0 systemd[1]: Started libpod-conmon-c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c.scope.
Oct 02 19:49:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
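The 0x7fffffff in the xfs remount messages above is the 32-bit time_t ceiling: the kernel is noting that these bind mounts sit on xfs without the bigtime feature, so they can only represent timestamps up to 19 January 2038. A quick check of that constant:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic year-2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00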
Oct 02 19:49:07 compute-0 podman[426710]: 2025-10-02 19:49:07.605658789 +0000 UTC m=+0.249103226 container init c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:49:07 compute-0 podman[426710]: 2025-10-02 19:49:07.628953878 +0000 UTC m=+0.272398225 container start c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:49:07 compute-0 podman[426710]: 2025-10-02 19:49:07.635758019 +0000 UTC m=+0.279202446 container attach c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:49:08 compute-0 ceph-mon[191910]: pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:08 compute-0 nova_compute[355794]: 2025-10-02 19:49:08.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:08 compute-0 beautiful_ptolemy[426725]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:49:08 compute-0 beautiful_ptolemy[426725]: --> relative data size: 1.0
Oct 02 19:49:08 compute-0 beautiful_ptolemy[426725]: --> All data devices are unavailable
Oct 02 19:49:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:08 compute-0 systemd[1]: libpod-c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c.scope: Deactivated successfully.
Oct 02 19:49:08 compute-0 systemd[1]: libpod-c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c.scope: Consumed 1.196s CPU time.
Oct 02 19:49:08 compute-0 podman[426710]: 2025-10-02 19:49:08.879502162 +0000 UTC m=+1.522946549 container died c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac4c4486751d37f317aa08c6444a13b1539f9bc902795a2d5138e9631681a9f0-merged.mount: Deactivated successfully.
Oct 02 19:49:08 compute-0 podman[426710]: 2025-10-02 19:49:08.977296233 +0000 UTC m=+1.620740590 container remove c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:49:09 compute-0 systemd[1]: libpod-conmon-c198daf587560e1359decc7f870c6d43b616f428f75c7649769182ae0d7eb66c.scope: Deactivated successfully.
Oct 02 19:49:09 compute-0 sudo[426606]: pam_unix(sudo:session): session closed for user root
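The ceph-volume lvm batch run that just closed (containers crazy_chandrasekhar and beautiful_ptolemy) reported "All data devices are unavailable", which here most likely means the three logical volumes are already consumed by existing OSDs, making the batch an idempotent no-op; cephadm immediately follows up with lvm list to read back the deployed layout. A condensed sketch of that apply/verify pair (hypothetical wrapper around the cephadm invocation shown in the journal):

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def ceph_volume(*args):
        # Each call shows up above as one short-lived podman
        # create/init/start/attach/died/remove cycle.
        return subprocess.run(
            ["cephadm", "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID, "--"] + list(args),
            capture_output=True, text=True)

    # Idempotent apply of the default_drive_group spec ...
    ceph_volume("lvm", "batch", "--no-auto",
                "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
                "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd")
    # ... then read back what is actually deployed.
    ceph_volume("lvm", "list", "--format", "json")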
Oct 02 19:49:09 compute-0 sudo[426768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:09 compute-0 sudo[426768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:09 compute-0 sudo[426768]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:09 compute-0 sudo[426793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:49:09 compute-0 sudo[426793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:09 compute-0 sudo[426793]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:09 compute-0 sudo[426818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:09 compute-0 sudo[426818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:09 compute-0 sudo[426818]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:09 compute-0 sudo[426843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:49:09 compute-0 sudo[426843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.021208832 +0000 UTC m=+0.053395080 container create 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:49:10 compute-0 systemd[1]: Started libpod-conmon-71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976.scope.
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.004304103 +0000 UTC m=+0.036490381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.130316034 +0000 UTC m=+0.162502302 container init 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.146653998 +0000 UTC m=+0.178840286 container start 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.153767367 +0000 UTC m=+0.185953665 container attach 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:49:10 compute-0 distracted_merkle[426922]: 167 167
Oct 02 19:49:10 compute-0 systemd[1]: libpod-71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976.scope: Deactivated successfully.
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.160498097 +0000 UTC m=+0.192684385 container died 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:49:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e0027743569bd105b724b5bdc72b34839ba86985ad0f42ff3b0f0614476e06-merged.mount: Deactivated successfully.
Oct 02 19:49:10 compute-0 podman[426907]: 2025-10-02 19:49:10.235276305 +0000 UTC m=+0.267462583 container remove 71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:49:10 compute-0 systemd[1]: libpod-conmon-71304b77ccff1eee77855d8728fd13097dd25463e16a07515acafda68df5c976.scope: Deactivated successfully.
Oct 02 19:49:10 compute-0 ceph-mon[191910]: pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:10 compute-0 nova_compute[355794]: 2025-10-02 19:49:10.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:10 compute-0 podman[426946]: 2025-10-02 19:49:10.527130266 +0000 UTC m=+0.084653492 container create 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:49:10 compute-0 podman[426946]: 2025-10-02 19:49:10.49456579 +0000 UTC m=+0.052089076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:10 compute-0 systemd[1]: Started libpod-conmon-3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51.scope.
Oct 02 19:49:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeacf11662bfb037daa44b2d6afbdc090079a65f817737904995636c164a4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeacf11662bfb037daa44b2d6afbdc090079a65f817737904995636c164a4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeacf11662bfb037daa44b2d6afbdc090079a65f817737904995636c164a4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeacf11662bfb037daa44b2d6afbdc090079a65f817737904995636c164a4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:10 compute-0 podman[426946]: 2025-10-02 19:49:10.662962238 +0000 UTC m=+0.220485464 container init 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:49:10 compute-0 podman[426946]: 2025-10-02 19:49:10.678007128 +0000 UTC m=+0.235530354 container start 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:49:10 compute-0 podman[426946]: 2025-10-02 19:49:10.683102894 +0000 UTC m=+0.240626150 container attach 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:49:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:11 compute-0 friendly_easley[426962]: {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     "0": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "devices": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "/dev/loop3"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             ],
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_name": "ceph_lv0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_size": "21470642176",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "name": "ceph_lv0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "tags": {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_name": "ceph",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.crush_device_class": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.encrypted": "0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_id": "0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.vdo": "0"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             },
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "vg_name": "ceph_vg0"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         }
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     ],
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     "1": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "devices": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "/dev/loop4"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             ],
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_name": "ceph_lv1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_size": "21470642176",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "name": "ceph_lv1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "tags": {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_name": "ceph",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.crush_device_class": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.encrypted": "0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_id": "1",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.vdo": "0"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             },
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "vg_name": "ceph_vg1"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         }
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     ],
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     "2": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "devices": [
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "/dev/loop5"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             ],
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_name": "ceph_lv2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_size": "21470642176",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "name": "ceph_lv2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "tags": {
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.cluster_name": "ceph",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.crush_device_class": "",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.encrypted": "0",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osd_id": "2",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:                 "ceph.vdo": "0"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             },
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "type": "block",
Oct 02 19:49:11 compute-0 friendly_easley[426962]:             "vg_name": "ceph_vg2"
Oct 02 19:49:11 compute-0 friendly_easley[426962]:         }
Oct 02 19:49:11 compute-0 friendly_easley[426962]:     ]
Oct 02 19:49:11 compute-0 friendly_easley[426962]: }
Oct 02 19:49:11 compute-0 systemd[1]: libpod-3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51.scope: Deactivated successfully.
Oct 02 19:49:11 compute-0 podman[426946]: 2025-10-02 19:49:11.560794812 +0000 UTC m=+1.118318068 container died 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-caeacf11662bfb037daa44b2d6afbdc090079a65f817737904995636c164a4c0-merged.mount: Deactivated successfully.
Oct 02 19:49:11 compute-0 podman[426946]: 2025-10-02 19:49:11.662125247 +0000 UTC m=+1.219648473 container remove 3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:49:11 compute-0 systemd[1]: libpod-conmon-3d28d4620da932845cb16a9567fd1eb86abaecfa68b2a176a9f7351d5639fd51.scope: Deactivated successfully.
Oct 02 19:49:11 compute-0 sudo[426843]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:11 compute-0 sudo[426982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:11 compute-0 sudo[426982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:11 compute-0 sudo[426982]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:12 compute-0 sudo[427007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:49:12 compute-0 sudo[427007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:12 compute-0 sudo[427007]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:12 compute-0 sudo[427032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:12 compute-0 sudo[427032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:12 compute-0 sudo[427032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:12 compute-0 sudo[427057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:49:12 compute-0 sudo[427057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:12 compute-0 ceph-mon[191910]: pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011046977762583837 of space, bias 1.0, pg target 0.3314093328775151 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:49:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:12 compute-0 podman[427118]: 2025-10-02 19:49:12.932581831 +0000 UTC m=+0.089069919 container create 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:49:12 compute-0 podman[427118]: 2025-10-02 19:49:12.900589971 +0000 UTC m=+0.057078139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:12 compute-0 systemd[1]: Started libpod-conmon-90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210.scope.
Oct 02 19:49:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:13 compute-0 podman[427118]: 2025-10-02 19:49:13.054327689 +0000 UTC m=+0.210815827 container init 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 19:49:13 compute-0 podman[427118]: 2025-10-02 19:49:13.072482532 +0000 UTC m=+0.228970650 container start 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:49:13 compute-0 podman[427118]: 2025-10-02 19:49:13.081426959 +0000 UTC m=+0.237915077 container attach 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:49:13 compute-0 cranky_merkle[427134]: 167 167
Oct 02 19:49:13 compute-0 systemd[1]: libpod-90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210.scope: Deactivated successfully.
Oct 02 19:49:13 compute-0 podman[427118]: 2025-10-02 19:49:13.089679309 +0000 UTC m=+0.246167447 container died 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-891752c5fda955356c3c724a6d897e10f40bce3b0f174a1505491237db4c181c-merged.mount: Deactivated successfully.
Oct 02 19:49:13 compute-0 podman[427118]: 2025-10-02 19:49:13.176830626 +0000 UTC m=+0.333318714 container remove 90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:49:13 compute-0 systemd[1]: libpod-conmon-90356e5bef7f7ce698bcbc81dd516c877a66a1ee5895b0354036d3fccdf8f210.scope: Deactivated successfully.
Oct 02 19:49:13 compute-0 podman[427158]: 2025-10-02 19:49:13.465627166 +0000 UTC m=+0.074776449 container create 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:49:13 compute-0 podman[427158]: 2025-10-02 19:49:13.438094934 +0000 UTC m=+0.047244267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:49:13 compute-0 systemd[1]: Started libpod-conmon-2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641.scope.
Oct 02 19:49:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2bb04340f2e5aecb1dde9d79ea52df7c2ee396cfb38c64a1dea4e6568fbcac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2bb04340f2e5aecb1dde9d79ea52df7c2ee396cfb38c64a1dea4e6568fbcac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2bb04340f2e5aecb1dde9d79ea52df7c2ee396cfb38c64a1dea4e6568fbcac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2bb04340f2e5aecb1dde9d79ea52df7c2ee396cfb38c64a1dea4e6568fbcac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:49:13 compute-0 podman[427158]: 2025-10-02 19:49:13.621229044 +0000 UTC m=+0.230378427 container init 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:49:13 compute-0 podman[427158]: 2025-10-02 19:49:13.641500923 +0000 UTC m=+0.250650216 container start 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:49:13 compute-0 podman[427158]: 2025-10-02 19:49:13.645748916 +0000 UTC m=+0.254898309 container attach 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:49:13 compute-0 nova_compute[355794]: 2025-10-02 19:49:13.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:14 compute-0 ceph-mon[191910]: pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]: {
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_id": 1,
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "type": "bluestore"
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     },
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_id": 2,
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "type": "bluestore"
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     },
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_id": 0,
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:         "type": "bluestore"
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]:     }
Oct 02 19:49:14 compute-0 trusting_goldberg[427173]: }
Oct 02 19:49:14 compute-0 podman[427158]: 2025-10-02 19:49:14.894791201 +0000 UTC m=+1.503940534 container died 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:49:14 compute-0 systemd[1]: libpod-2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641.scope: Deactivated successfully.
Oct 02 19:49:14 compute-0 systemd[1]: libpod-2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641.scope: Consumed 1.225s CPU time.
Oct 02 19:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2bb04340f2e5aecb1dde9d79ea52df7c2ee396cfb38c64a1dea4e6568fbcac-merged.mount: Deactivated successfully.
Oct 02 19:49:14 compute-0 podman[427158]: 2025-10-02 19:49:14.986476388 +0000 UTC m=+1.595625691 container remove 2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:49:15 compute-0 systemd[1]: libpod-conmon-2a31c7459e4c73acf387775611ee576118fdf43a86339778a9090b89aea24641.scope: Deactivated successfully.
Oct 02 19:49:15 compute-0 sudo[427057]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:49:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:49:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b798a983-0e76-4d28-bc0b-f12c7a2d870a does not exist
Oct 02 19:49:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7f6cb465-209d-4a86-bc94-32c7f11b144e does not exist
Oct 02 19:49:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:15 compute-0 sudo[427219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:49:15 compute-0 sudo[427219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:15 compute-0 sudo[427219]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:15 compute-0 sudo[427244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:49:15 compute-0 sudo[427244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:49:15 compute-0 sudo[427244]: pam_unix(sudo:session): session closed for user root
Oct 02 19:49:15 compute-0 nova_compute[355794]: 2025-10-02 19:49:15.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:16 compute-0 ceph-mon[191910]: pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:49:16 compute-0 podman[427269]: 2025-10-02 19:49:16.702073369 +0000 UTC m=+0.116509679 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:49:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:18 compute-0 ceph-mon[191910]: pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:18 compute-0 nova_compute[355794]: 2025-10-02 19:49:18.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:20 compute-0 ceph-mon[191910]: pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:49:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2320523054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:49:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:49:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2320523054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:49:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:20 compute-0 nova_compute[355794]: 2025-10-02 19:49:20.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2320523054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:49:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2320523054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:49:22 compute-0 ceph-mon[191910]: pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:23 compute-0 nova_compute[355794]: 2025-10-02 19:49:23.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:23 compute-0 podman[427289]: 2025-10-02 19:49:23.700132951 +0000 UTC m=+0.112497872 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:49:23 compute-0 podman[427290]: 2025-10-02 19:49:23.74559287 +0000 UTC m=+0.155111996 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:49:24 compute-0 ceph-mon[191910]: pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:25 compute-0 nova_compute[355794]: 2025-10-02 19:49:25.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:26 compute-0 ceph-mon[191910]: pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:28 compute-0 ceph-mon[191910]: pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:28 compute-0 nova_compute[355794]: 2025-10-02 19:49:28.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:28 compute-0 podman[427331]: 2025-10-02 19:49:28.734363862 +0000 UTC m=+0.158143297 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Oct 02 19:49:28 compute-0 podman[427332]: 2025-10-02 19:49:28.73806825 +0000 UTC m=+0.149910887 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:49:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:29 compute-0 podman[157186]: time="2025-10-02T19:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:49:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:49:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9045 "" "Go-http-client/1.1"
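[editor's sketch] The two GETs above are plain HTTP against podman's libpod REST API on the unix socket that podman_exporter mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data earlier). A minimal sketch of the same containers/json query using only the Python standard library; the socket path and API version are taken from the log, everything else is illustrative:

    import http.client
    import json
    import socket

    PODMAN_SOCK = "/run/podman/podman.sock"  # socket named in CONTAINER_HOST above

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix domain socket."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection(PODMAN_SOCK)
    # Same endpoint and API version as the GET logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")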
Oct 02 19:49:30 compute-0 ceph-mon[191910]: pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:30 compute-0 nova_compute[355794]: 2025-10-02 19:49:30.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:30 compute-0 nova_compute[355794]: 2025-10-02 19:49:30.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:30.930 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:49:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:30.933 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:49:31 compute-0 openstack_network_exporter[372736]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:31 compute-0 openstack_network_exporter[372736]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:31 compute-0 openstack_network_exporter[372736]: ERROR   19:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:49:31 compute-0 openstack_network_exporter[372736]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:49:31 compute-0 openstack_network_exporter[372736]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:49:31 compute-0 podman[427370]: 2025-10-02 19:49:31.667980662 +0000 UTC m=+0.088221587 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:49:31 compute-0 podman[427371]: 2025-10-02 19:49:31.694352273 +0000 UTC m=+0.106876393 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:49:31 compute-0 podman[427372]: 2025-10-02 19:49:31.723683473 +0000 UTC m=+0.138781721 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:49:32 compute-0 ceph-mon[191910]: pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:32.300 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:32.301 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:32.302 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:49:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:49:33 compute-0 nova_compute[355794]: 2025-10-02 19:49:33.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:34 compute-0 ceph-mon[191910]: pgmap v1333: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.232 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.234 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.257 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.346 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.348 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.362 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.363 2 INFO nova.compute.claims [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.506 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:35 compute-0 nova_compute[355794]: 2025-10-02 19:49:35.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:35 compute-0 podman[427433]: 2025-10-02 19:49:35.746294143 +0000 UTC m=+0.155529597 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:49:35 compute-0 podman[427432]: 2025-10-02 19:49:35.752554079 +0000 UTC m=+0.172645632 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, release=1755695350, managed_by=edpm_ansible)
Oct 02 19:49:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:49:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363196696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.026 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
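[editor's sketch] The pair of processutils lines above (Running cmd at 19:49:35.506, returned 0 at 19:49:36.026) shows how nova's RBD backend sizes the cluster: it shells out to the ceph CLI and parses JSON; the mon's audit line at 19:49:35 records the same {"prefix": "df"} dispatch arriving. A rough equivalent, assuming the ceph CLI and the client.openstack keyring are reachable (the "stats"/"pools" keys are the usual ceph df JSON layout, not quoted from this log):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)
    # Cluster-wide totals live under "stats"; per-pool figures under "pools".
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])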
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.038 2 DEBUG nova.compute.provider_tree [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.061 2 DEBUG nova.scheduler.client.report [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.090 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
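[editor's sketch] The inventory data two lines up is what the resource claim at 19:49:35 was checked against; placement treats usable capacity per resource class as (total - reserved) * allocation_ratio. Worked out with the logged values (a sketch of the arithmetic, not placement code):

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2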
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.092 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.143 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.145 2 DEBUG nova.network.neutron [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.168 2 INFO nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:49:36 compute-0 ceph-mon[191910]: pgmap v1334: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3363196696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.209 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.214037) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576214143, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3372708, "memory_usage": 3420512, "flush_reason": "Manual Compaction"}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576236543, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3317726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25541, "largest_seqno": 27583, "table_properties": {"data_size": 3308392, "index_size": 5892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18586, "raw_average_key_size": 20, "raw_value_size": 3289949, "raw_average_value_size": 3552, "num_data_blocks": 261, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434347, "oldest_key_time": 1759434347, "file_creation_time": 1759434576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22588 microseconds, and 14519 cpu microseconds.
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.236634) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3317726 bytes OK
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.236661) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.239632) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.239655) EVENT_LOG_v1 {"time_micros": 1759434576239648, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.239679) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3364180, prev total WAL file size 3364180, number of live WAL files 2.
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.241568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3239KB)], [59(7281KB)]
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576241631, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10774355, "oldest_snapshot_seqno": -1}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5030 keys, 8993824 bytes, temperature: kUnknown
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576291117, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8993824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8958574, "index_size": 21572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124775, "raw_average_key_size": 24, "raw_value_size": 8865949, "raw_average_value_size": 1762, "num_data_blocks": 894, "num_entries": 5030, "num_filter_entries": 5030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.291366) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8993824 bytes
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.293409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.4 rd, 181.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5544, records dropped: 514 output_compression: NoCompression
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.293428) EVENT_LOG_v1 {"time_micros": 1759434576293419, "job": 32, "event": "compaction_finished", "compaction_time_micros": 49561, "compaction_time_cpu_micros": 26549, "output_level": 6, "num_output_files": 1, "total_output_size": 8993824, "num_input_records": 5544, "num_output_records": 5030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
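[editor's sketch] The write-amplify(2.7) and read-write-amplify(6.0) figures in the compaction summary above follow from byte counts reported earlier in this same flush/compaction sequence; a quick check:

    l0_bytes = 3_317_726       # flushed L0 table #61 (file_size at 19:49:36.236)
    input_bytes = 10_774_355   # compaction_started input_data_size (L0 + L6)
    output_bytes = 8_993_824   # generated L6 table #62
    print(round(output_bytes / l0_bytes, 1))                   # 2.7 -> write-amplify
    print(round((input_bytes + output_bytes) / l0_bytes, 1))   # 6.0 -> read-write-amplify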
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576294225, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434576295996, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.241363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.296503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.296517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.296522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.296527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:49:36.296531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.310 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.312 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.312 2 INFO nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Creating image(s)
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.361 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.415 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.471 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
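[editor's sketch] Each "rbd image ... does not exist" line above is nova.storage.rbd_utils failing to open the image before it has been imported. A minimal sketch of that probe with the python rados/rbd bindings; the pool, client id and image name are copied from the surrounding log, the rest is illustrative:

    import rados
    import rbd

    # Connect as the same client id nova uses (see the rbd import command below).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk"):
            print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()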
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.481 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.545 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
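[editor's sketch] The prlimit wrapper in the qemu-img probe above caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30), so a malformed image cannot exhaust the host while being inspected. Roughly the same invocation via oslo.concurrency (a sketch, not nova's exact call site):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,  # --as
                                           cpu_time=30))              # --cpu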
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.546 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.547 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.547 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
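[editor's sketch] The acquire/release trio above is oslo.concurrency's lock keyed on the base-image hash, serializing fetches so a single download services concurrent builds of the same image. The pattern in miniature; the lock path and the fetch callable are illustrative, not nova's:

    from oslo_concurrency import lockutils

    def fetch_base_image():
        # placeholder; nova's real fetch_func_sync downloads the base image here
        pass

    with lockutils.lock("29c290047b888f2c82efe3bcb0c2a3e42b009a3e",
                        lock_file_prefix="nova-", external=True,
                        lock_path="/var/lib/nova/instances/locks"):
        fetch_base_image()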
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.587 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.596 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:36.937 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:36 compute-0 nova_compute[355794]: 2025-10-02 19:49:36.964 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.118 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] resizing rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.359 2 DEBUG nova.objects.instance [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'migration_context' on Instance uuid b88114e8-b15d-4a78-ac15-3dd7ee30b949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.417 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.474 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.484 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.574 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.575 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.575 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.576 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.620 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.629 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.834 2 DEBUG nova.network.neutron [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Successfully updated port: 55da210c-644a-4f1e-8f20-ee3303b72db2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.852 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.852 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquired lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.852 2 DEBUG nova.network.neutron [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.934 2 DEBUG nova.compute.manager [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-changed-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.935 2 DEBUG nova.compute.manager [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Refreshing instance network info cache due to event network-changed-55da210c-644a-4f1e-8f20-ee3303b72db2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:49:37 compute-0 nova_compute[355794]: 2025-10-02 19:49:37.936 2 DEBUG oslo_concurrency.lockutils [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.060 2 DEBUG nova.network.neutron [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.117 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:38 compute-0 ceph-mon[191910]: pgmap v1335: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.308 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.308 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Ensure instance console log exists: /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.309 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.309 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.309 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:38 compute-0 nova_compute[355794]: 2025-10-02 19:49:38.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 143 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 258 KiB/s wr, 0 op/s
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.247 2 DEBUG nova.network.neutron [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
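[note] The network_info payload above is plain JSON once the log framing is stripped; the fixed address (192.168.0.207) and its floating association (192.168.122.220) that later appear in the domain metadata can be pulled out with a simple traversal. The sample below reproduces only the fields the traversal touches:

    # Walk a nova network_info list (shape as logged above) and collect
    # fixed IPs together with their floating associations.
    import json

    network_info = json.loads('''[{"network": {"subnets": [{"ips": [
        {"address": "192.168.0.207", "type": "fixed",
         "floating_ips": [{"address": "192.168.122.220"}]}]}]}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)   # 192.168.0.207 -> ['192.168.122.220']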
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.369 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Releasing lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.370 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Instance network_info: |[{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.370 2 DEBUG oslo_concurrency.lockutils [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.371 2 DEBUG nova.network.neutron [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Refreshing network info cache for port 55da210c-644a-4f1e-8f20-ee3303b72db2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.377 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Start _get_guest_xml network_info=[{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'ce28338d-119e-49e1-ab67-60da8882593a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encrypted': False, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.389 2 WARNING nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.407 2 DEBUG nova.virt.libvirt.host [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.408 2 DEBUG nova.virt.libvirt.host [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.416 2 DEBUG nova.virt.libvirt.host [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.418 2 DEBUG nova.virt.libvirt.host [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
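[note] The two probes above first look for a cgroup v1 cpu controller (missing on this host), then fall back to cgroup v2 (found). On a v2-only host the check reduces to reading the root controllers file of the unified hierarchy; a standalone approximation, not nova's implementation:

    # Detect the cgroup-v2 "cpu" controller the way the probes above
    # report it. Uses the standard unified-hierarchy mount point; hybrid
    # hosts may mount v2 elsewhere.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        controllers = Path(root, 'cgroup.controllers')
        if not controllers.exists():
            return False  # host is not on the unified (v2) hierarchy
        return 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())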
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.422 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.424 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:43:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8f0521f8-dc4e-4ca1-bf77-f443ae74db03',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.427 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.427 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.429 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.430 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.432 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.433 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.433 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.434 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.435 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.435 2 DEBUG nova.virt.hardware [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
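[note] With every flavor and image constraint at 0:0:0 (i.e. unconstrained), the topology search above degenerates for a 1-vCPU guest: the only (sockets, cores, threads) factorization of 1 is 1:1:1, which is why exactly one possible topology is reported. A toy enumeration illustrating that arithmetic (not nova's actual algorithm):

    # Enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count, mirroring "Build topologies for 1 vcpu(s)".
    def possible_topologies(vcpus, max_each=65536):
        bound = min(vcpus, max_each) + 1
        for s in range(1, bound):
            for c in range(1, bound):
                for t in range(1, bound):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the logged result
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), ...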
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.440 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:49:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3380829154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.957 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:39 compute-0 nova_compute[355794]: 2025-10-02 19:49:39.959 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:40 compute-0 ceph-mon[191910]: pgmap v1336: 321 pgs: 321 active+clean; 143 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 258 KiB/s wr, 0 op/s
Oct 02 19:49:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3380829154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:49:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98858637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.432 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
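[note] The repeated "ceph mon dump --format=json" calls are how the RBD image backend discovers monitor addresses before composing the <host name="192.168.122.100" port="6789"/> elements seen later in the domain XML. A sketch of extracting them; the JSON schema (a top-level "mons" list with "public_addr" entries of the form "ip:port/nonce") is assumed from typical Ceph output, not taken from this log:

    import json
    from oslo_concurrency import processutils

    def monitor_addresses(conf='/etc/ceph/ceph.conf', client_id='openstack'):
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', client_id, '--conf', conf)
        monmap = json.loads(out)
        # Keep just "ip:port", dropping the trailing "/nonce" if present.
        return [m['public_addr'].split('/')[0]
                for m in monmap.get('mons', [])]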
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.487 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.500 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Oct 02 19:49:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:49:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962792644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.992 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.995 2 DEBUG nova.virt.libvirt.vif [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',id=3,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-vgn5al0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:49:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:49:40 compute-0 nova_compute[355794]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=b88114e8-b15d-4a78-ac15-3dd7ee30b949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.996 2 DEBUG nova.network.os_vif_util [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:49:40 compute-0 nova_compute[355794]: 2025-10-02 19:49:40.999 2 DEBUG nova.network.os_vif_util [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
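[note] The conversion above hands nova an os-vif VIFOpenVSwitch object; the subsequent plug is a short call into the os_vif library. A minimal sketch, assuming `vif` is the converted object from the log and that only the fields shown are needed:

    # os_vif.initialize() loads the plugin drivers (ovs here); plug()
    # then creates the tap/OVS wiring for the port.
    import os_vif
    from os_vif.objects import instance_info as ii

    os_vif.initialize()
    info = ii.InstanceInfo(uuid='b88114e8-b15d-4a78-ac15-3dd7ee30b949',
                           name='instance-00000003')
    # 'vif' is the VIFOpenVSwitch produced by nova_to_osvif_vif():
    # os_vif.plug(vif, info)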
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.001 2 DEBUG nova.objects.instance [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b88114e8-b15d-4a78-ac15-3dd7ee30b949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.032 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <uuid>b88114e8-b15d-4a78-ac15-3dd7ee30b949</uuid>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <name>instance-00000003</name>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <memory>524288</memory>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <metadata>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:name>vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf</nova:name>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 19:49:39</nova:creationTime>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:flavor name="m1.small">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:memory>512</nova:memory>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:user uuid="811fb7ac717e4ba9b9874e5454ee08f4">admin</nova:user>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:project uuid="1c35486f37b94d43a7bf2f2fa09c70b9">admin</nova:project>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="ce28338d-119e-49e1-ab67-60da8882593a"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <nova:port uuid="55da210c-644a-4f1e-8f20-ee3303b72db2">
Oct 02 19:49:41 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="192.168.0.207" ipVersion="4"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </metadata>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <system>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="serial">b88114e8-b15d-4a78-ac15-3dd7ee30b949</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="uuid">b88114e8-b15d-4a78-ac15-3dd7ee30b949</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </system>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <os>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </os>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <features>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <apic/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </features>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </clock>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </source>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.eph0">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </source>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </source>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:49:41 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:c5:df:6b"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <target dev="tap55da210c-64"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/console.log" append="off"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </serial>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <video>
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </video>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 19:49:41 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 19:49:41 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 19:49:41 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:49:41 compute-0 nova_compute[355794]: </domain>
Oct 02 19:49:41 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
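[note] Once _get_guest_xml returns the document above, the driver hands it to libvirt through its Host/Guest wrappers. The equivalent direct calls with libvirt-python look like the sketch below; the xml placeholder stands in for the full <domain> document logged above, which must be substituted before this will actually define a guest:

    import libvirt

    xml = "<domain type='kvm'>...</domain>"  # the document logged above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it (equivalent of 'virsh start')
    finally:
        conn.close()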
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.035 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Preparing to wait for external event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.036 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.037 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.038 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.040 2 DEBUG nova.virt.libvirt.vif [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',id=3,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-vgn5al0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:49:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4o
YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=b88114e8-b15d-4a78-ac15-3dd7ee30b949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.042 2 DEBUG nova.network.os_vif_util [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.043 2 DEBUG nova.network.os_vif_util [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.045 2 DEBUG os_vif [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55da210c-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55da210c-64, col_values=(('external_ids', {'iface-id': '55da210c-644a-4f1e-8f20-ee3303b72db2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:df:6b', 'vm-uuid': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:41 compute-0 NetworkManager[44968]: <info>  [1759434581.0625] manager: (tap55da210c-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.073 2 INFO os_vif [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64')
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.136 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.136 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.137 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.137 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No VIF found with MAC fa:16:3e:c5:df:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.138 2 INFO nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Using config drive
Oct 02 19:49:41 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:49:40.995 2 DEBUG nova.virt.libvirt.vif [None req-19595273-8374-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.183 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.196 2 DEBUG nova.network.neutron [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updated VIF entry in instance network info cache for port 55da210c-644a-4f1e-8f20-ee3303b72db2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.197 2 DEBUG nova.network.neutron [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.221 2 DEBUG oslo_concurrency.lockutils [req-cfa16ff7-436b-47a1-b006-196f17e823d7 req-d43a8e03-7801-411f-b26d-bf2260bddee0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:49:41 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:49:41.040 2 DEBUG nova.virt.libvirt.vif [None req-19595273-8374-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:49:41 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/98858637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:41 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2962792644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.791 2 INFO nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Creating config drive at /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.800 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66q8r3yg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:41 compute-0 nova_compute[355794]: 2025-10-02 19:49:41.949 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66q8r3yg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.004 2 DEBUG nova.storage.rbd_utils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.016 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:42 compute-0 ceph-mon[191910]: pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.293 2 DEBUG oslo_concurrency.processutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config b88114e8-b15d-4a78-ac15-3dd7ee30b949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.294 2 INFO nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Deleting local config drive /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.config because it was imported into RBD.
Oct 02 19:49:42 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:49:42 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:49:42 compute-0 NetworkManager[44968]: <info>  [1759434582.4490] manager: (tap55da210c-64): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 02 19:49:42 compute-0 kernel: tap55da210c-64: entered promiscuous mode
Oct 02 19:49:42 compute-0 ovn_controller[88435]: 2025-10-02T19:49:42Z|00040|binding|INFO|Claiming lport 55da210c-644a-4f1e-8f20-ee3303b72db2 for this chassis.
Oct 02 19:49:42 compute-0 ovn_controller[88435]: 2025-10-02T19:49:42Z|00041|binding|INFO|55da210c-644a-4f1e-8f20-ee3303b72db2: Claiming fa:16:3e:c5:df:6b 192.168.0.207
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.463 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:df:6b 192.168.0.207'], port_security=['fa:16:3e:c5:df:6b 192.168.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-uxhkceofcvut-4we5flt73ruq-port-h6neckvltb4i', 'neutron:cidrs': '192.168.0.207/24', 'neutron:device_id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-uxhkceofcvut-4we5flt73ruq-port-h6neckvltb4i', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=55da210c-644a-4f1e-8f20-ee3303b72db2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.466 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 55da210c-644a-4f1e-8f20-ee3303b72db2 in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 bound to our chassis
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.470 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:49:42 compute-0 ovn_controller[88435]: 2025-10-02T19:49:42Z|00042|binding|INFO|Setting lport 55da210c-644a-4f1e-8f20-ee3303b72db2 ovn-installed in OVS
Oct 02 19:49:42 compute-0 ovn_controller[88435]: 2025-10-02T19:49:42Z|00043|binding|INFO|Setting lport 55da210c-644a-4f1e-8f20-ee3303b72db2 up in Southbound
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.488 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[69c116cf-e9bc-4334-b12b-33dbffb515b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 systemd-udevd[427971]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:49:42 compute-0 systemd-machined[137646]: New machine qemu-3-instance-00000003.
Oct 02 19:49:42 compute-0 NetworkManager[44968]: <info>  [1759434582.5366] device (tap55da210c-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.535 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[71111c04-39d5-4485-9be9-9b337ba3e38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 NetworkManager[44968]: <info>  [1759434582.5372] device (tap55da210c-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:49:42 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.540 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[5dee749c-ae65-45b5-8910-79a23f35b533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.595 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[c33c2ae1-243f-483d-91c0-8bf2f4888610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.621 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a0af9bc9-7d9e-4bdb-ab70-bc0d07221b58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 16846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427980, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.648 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[80a80001-9a44-43b0-a9b9-2714ac4727ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427983, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427983, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.650 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.654 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.654 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.655 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:49:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:49:42.655 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.685 2 DEBUG nova.compute.manager [req-afca8cf6-be92-4705-8e58-8e2626233c22 req-b7415265-f0ca-4d24-8294-fb45bc33751d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.685 2 DEBUG oslo_concurrency.lockutils [req-afca8cf6-be92-4705-8e58-8e2626233c22 req-b7415265-f0ca-4d24-8294-fb45bc33751d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.686 2 DEBUG oslo_concurrency.lockutils [req-afca8cf6-be92-4705-8e58-8e2626233c22 req-b7415265-f0ca-4d24-8294-fb45bc33751d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.686 2 DEBUG oslo_concurrency.lockutils [req-afca8cf6-be92-4705-8e58-8e2626233c22 req-b7415265-f0ca-4d24-8294-fb45bc33751d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:42 compute-0 nova_compute[355794]: 2025-10-02 19:49:42.687 2 DEBUG nova.compute.manager [req-afca8cf6-be92-4705-8e58-8e2626233c22 req-b7415265-f0ca-4d24-8294-fb45bc33751d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Processing event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:49:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Oct 02 19:49:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:49:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:49:43 compute-0 nova_compute[355794]: 2025-10-02 19:49:43.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:44 compute-0 ceph-mon[191910]: pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.782 2 DEBUG nova.compute.manager [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.783 2 DEBUG oslo_concurrency.lockutils [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.783 2 DEBUG oslo_concurrency.lockutils [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.784 2 DEBUG oslo_concurrency.lockutils [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.784 2 DEBUG nova.compute.manager [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] No waiting events found dispatching network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.785 2 WARNING nova.compute.manager [req-3629909a-afa5-4031-9d23-6ea230eb63b0 req-3fc7fc48-e7b8-4f36-92a4-053848b7b18a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received unexpected event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 for instance with vm_state building and task_state spawning.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.786 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434584.7769244, b88114e8-b15d-4a78-ac15-3dd7ee30b949 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.787 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] VM Started (Lifecycle Event)
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.791 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.799 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.807 2 INFO nova.virt.libvirt.driver [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Instance spawned successfully.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.808 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.816 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.827 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.840 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.841 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.842 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.842 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.843 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.844 2 DEBUG nova.virt.libvirt.driver [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.853 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.854 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434584.777334, b88114e8-b15d-4a78-ac15-3dd7ee30b949 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.855 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] VM Paused (Lifecycle Event)
Oct 02 19:49:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.883 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.890 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434584.7981892, b88114e8-b15d-4a78-ac15-3dd7ee30b949 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.890 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] VM Resumed (Lifecycle Event)
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.901 2 INFO nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Took 8.59 seconds to spawn the instance on the hypervisor.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.901 2 DEBUG nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.909 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.919 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.947 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.966 2 INFO nova.compute.manager [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Took 9.66 seconds to build instance.
Oct 02 19:49:44 compute-0 nova_compute[355794]: 2025-10-02 19:49:44.981 2 DEBUG oslo_concurrency.lockutils [None req-19595273-8374-4e17-93ef-fa9550af0e42 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:45 compute-0 nova_compute[355794]: 2025-10-02 19:49:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:45 compute-0 nova_compute[355794]: 2025-10-02 19:49:45.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:49:46 compute-0 nova_compute[355794]: 2025-10-02 19:49:46.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:46 compute-0 ceph-mon[191910]: pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Oct 02 19:49:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Oct 02 19:49:47 compute-0 nova_compute[355794]: 2025-10-02 19:49:47.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:47 compute-0 podman[428061]: 2025-10-02 19:49:47.722863382 +0000 UTC m=+0.136320266 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:49:48 compute-0 ceph-mon[191910]: pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Oct 02 19:49:48 compute-0 nova_compute[355794]: 2025-10-02 19:49:48.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:48 compute-0 nova_compute[355794]: 2025-10-02 19:49:48.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:48 compute-0 nova_compute[355794]: 2025-10-02 19:49:48.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Oct 02 19:49:49 compute-0 nova_compute[355794]: 2025-10-02 19:49:49.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:49 compute-0 nova_compute[355794]: 2025-10-02 19:49:49.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:49:49 compute-0 nova_compute[355794]: 2025-10-02 19:49:49.839 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:49:49 compute-0 nova_compute[355794]: 2025-10-02 19:49:49.843 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:49:49 compute-0 nova_compute[355794]: 2025-10-02 19:49:49.844 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:49:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:50 compute-0 ceph-mon[191910]: pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 55 op/s
Oct 02 19:49:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.805 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.828 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.829 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.830 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.830 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.831 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.854 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.855 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.856 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.856 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:49:51 compute-0 nova_compute[355794]: 2025-10-02 19:49:51.857 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:52 compute-0 ceph-mon[191910]: pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 19:49:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:49:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087178394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.384 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.497 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.499 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.499 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.506 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.508 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.509 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.516 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.517 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.517 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:49:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 62 op/s
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.960 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.961 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.9058837890625GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.962 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:52 compute-0 nova_compute[355794]: 2025-10-02 19:49:52.962 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.058 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.058 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.059 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.059 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.060 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.079 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.099 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.100 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.120 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.170 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.270 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4087178394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:49:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684219108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.790 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.803 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.850 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.919 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:49:53 compute-0 nova_compute[355794]: 2025-10-02 19:49:53.921 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:54 compute-0 ceph-mon[191910]: pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 62 op/s
Oct 02 19:49:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1684219108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:49:54 compute-0 podman[428127]: 2025-10-02 19:49:54.694591774 +0000 UTC m=+0.126523496 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:49:54 compute-0 podman[428128]: 2025-10-02 19:49:54.711155014 +0000 UTC m=+0.137662521 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:49:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Oct 02 19:49:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:49:56 compute-0 nova_compute[355794]: 2025-10-02 19:49:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:56 compute-0 ceph-mon[191910]: pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Oct 02 19:49:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct 02 19:49:57 compute-0 nova_compute[355794]: 2025-10-02 19:49:57.916 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:58 compute-0 ceph-mon[191910]: pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct 02 19:49:58 compute-0 nova_compute[355794]: 2025-10-02 19:49:58.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:58 compute-0 nova_compute[355794]: 2025-10-02 19:49:58.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct 02 19:49:59 compute-0 ceph-mon[191910]: pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct 02 19:49:59 compute-0 podman[428169]: 2025-10-02 19:49:59.716873247 +0000 UTC m=+0.122663793 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:49:59 compute-0 podman[428170]: 2025-10-02 19:49:59.717614256 +0000 UTC m=+0.119394526 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct 02 19:49:59 compute-0 podman[157186]: time="2025-10-02T19:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:49:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:49:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9037 "" "Go-http-client/1.1"
Oct 02 19:50:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 41 op/s
Oct 02 19:50:01 compute-0 nova_compute[355794]: 2025-10-02 19:50:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:01 compute-0 openstack_network_exporter[372736]: ERROR   19:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:50:01 compute-0 openstack_network_exporter[372736]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:01 compute-0 openstack_network_exporter[372736]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:01 compute-0 openstack_network_exporter[372736]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:50:01 compute-0 openstack_network_exporter[372736]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:50:01 compute-0 ceph-mon[191910]: pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 41 op/s
Oct 02 19:50:02 compute-0 podman[428207]: 2025-10-02 19:50:02.711210282 +0000 UTC m=+0.127742298 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:50:02 compute-0 podman[428209]: 2025-10-02 19:50:02.742596256 +0000 UTC m=+0.140657781 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:50:02 compute-0 podman[428208]: 2025-10-02 19:50:02.74384904 +0000 UTC m=+0.151355796 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:50:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 2 op/s
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:50:03
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms', 'images']
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:03 compute-0 nova_compute[355794]: 2025-10-02 19:50:03.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:03 compute-0 ceph-mon[191910]: pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 2 op/s
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:50:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:05 compute-0 ceph-mon[191910]: pgmap v1349: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:06 compute-0 nova_compute[355794]: 2025-10-02 19:50:06.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:06 compute-0 podman[428265]: 2025-10-02 19:50:06.627938465 +0000 UTC m=+0.065940674 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:50:06 compute-0 podman[428266]: 2025-10-02 19:50:06.641572298 +0000 UTC m=+0.076652080 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:50:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:07 compute-0 ceph-mon[191910]: pgmap v1350: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:08 compute-0 nova_compute[355794]: 2025-10-02 19:50:08.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:09 compute-0 ceph-mon[191910]: pgmap v1351: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:11 compute-0 nova_compute[355794]: 2025-10-02 19:50:11.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:12 compute-0 ceph-mon[191910]: pgmap v1352: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:12 compute-0 ovn_controller[88435]: 2025-10-02T19:50:12Z|00044|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001373559288923618 of space, bias 1.0, pg target 0.4120677866770854 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:50:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:13 compute-0 nova_compute[355794]: 2025-10-02 19:50:13.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:14 compute-0 ceph-mon[191910]: pgmap v1353: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:15 compute-0 sudo[428305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:15 compute-0 sudo[428305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:15 compute-0 sudo[428305]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:15 compute-0 sudo[428330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:50:15 compute-0 sudo[428330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:15 compute-0 sudo[428330]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:15 compute-0 sudo[428355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:15 compute-0 sudo[428355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:15 compute-0 sudo[428355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:15 compute-0 sudo[428380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:50:15 compute-0 sudo[428380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:16 compute-0 ceph-mon[191910]: pgmap v1354: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:16 compute-0 nova_compute[355794]: 2025-10-02 19:50:16.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:16 compute-0 sudo[428380]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3b065fe7-ebb2-49ad-94e4-f730bda4fb72 does not exist
Oct 02 19:50:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d7044f1f-08fc-4ddf-b189-8aea6d4442ac does not exist
Oct 02 19:50:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6178123e-1cf1-4a71-a892-bc601516bb73 does not exist
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:50:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:50:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:50:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:16 compute-0 sudo[428436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:16 compute-0 sudo[428436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:16 compute-0 sudo[428436]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:50:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:50:17 compute-0 sudo[428461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:50:17 compute-0 sudo[428461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:17 compute-0 sudo[428461]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:17 compute-0 sudo[428486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:17 compute-0 sudo[428486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:17 compute-0 sudo[428486]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:50:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6814 writes, 27K keys, 6814 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6814 writes, 1336 syncs, 5.10 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1014 writes, 3551 keys, 1014 commit groups, 1.0 writes per commit group, ingest: 3.82 MB, 0.01 MB/s
                                            Interval WAL: 1014 writes, 377 syncs, 2.69 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
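
[Annotation] The WAL ratios in the stats dump above are straight quotients of the counters printed on the same lines; a quick check with the logged numbers:

    # Numbers taken verbatim from the DB Stats block above.
    cum_writes, cum_syncs = 6814, 1336
    int_writes, int_syncs = 1014, 377
    print(round(cum_writes / cum_syncs, 2))  # 5.1  -> logged as "5.10 writes per sync"
    print(round(int_writes / int_syncs, 2))  # 2.69 -> matches "2.69 writes per sync"
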
Oct 02 19:50:17 compute-0 sudo[428511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:50:17 compute-0 sudo[428511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:17 compute-0 podman[428576]: 2025-10-02 19:50:17.885922687 +0000 UTC m=+0.099661891 container create 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:50:17 compute-0 podman[428576]: 2025-10-02 19:50:17.844943007 +0000 UTC m=+0.058682251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:17 compute-0 systemd[1]: Started libpod-conmon-04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762.scope.
Oct 02 19:50:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:18 compute-0 podman[428576]: 2025-10-02 19:50:18.028996522 +0000 UTC m=+0.242735726 container init 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:50:18 compute-0 ceph-mon[191910]: pgmap v1355: 321 pgs: 321 active+clean; 172 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:18 compute-0 podman[428576]: 2025-10-02 19:50:18.041223837 +0000 UTC m=+0.254963041 container start 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:50:18 compute-0 podman[428576]: 2025-10-02 19:50:18.048691115 +0000 UTC m=+0.262430299 container attach 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:50:18 compute-0 cool_engelbart[428596]: 167 167
Oct 02 19:50:18 compute-0 systemd[1]: libpod-04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762.scope: Deactivated successfully.
Oct 02 19:50:18 compute-0 podman[428588]: 2025-10-02 19:50:18.087725583 +0000 UTC m=+0.122855148 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:50:18 compute-0 podman[428611]: 2025-10-02 19:50:18.12897714 +0000 UTC m=+0.048668335 container died 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2146d50b4d40e17308aa5b663eb4c07fd04116b1048faab4e90c62adf226fe45-merged.mount: Deactivated successfully.
Oct 02 19:50:18 compute-0 podman[428611]: 2025-10-02 19:50:18.191532934 +0000 UTC m=+0.111224109 container remove 04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:50:18 compute-0 systemd[1]: libpod-conmon-04c9e50c8e7956ed496935bfc64c5fe58abf97860c448bdf8d394b3d9b69f762.scope: Deactivated successfully.
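
[Annotation] cool_engelbart above is one of cephadm's short-lived probe containers: created, started, prints "167 167", and is gone within a second. A hedged guess at the equivalent manual probe (cephadm uses a throwaway container to read the ceph uid/gid out of the image; the stat entrypoint and path are assumptions, only the image digest is taken from the log):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Print the owner uid/gid of /var/lib/ceph inside the image; 167 is the
    # ceph user in CentOS-based Ceph images, matching the "167 167" above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: "167 167"
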
Oct 02 19:50:18 compute-0 ovn_controller[88435]: 2025-10-02T19:50:18Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:df:6b 192.168.0.207
Oct 02 19:50:18 compute-0 podman[428632]: 2025-10-02 19:50:18.427576951 +0000 UTC m=+0.061609880 container create 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:50:18 compute-0 ovn_controller[88435]: 2025-10-02T19:50:18Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:df:6b 192.168.0.207
Oct 02 19:50:18 compute-0 systemd[1]: Started libpod-conmon-05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061.scope.
Oct 02 19:50:18 compute-0 podman[428632]: 2025-10-02 19:50:18.406089209 +0000 UTC m=+0.040122138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:18 compute-0 podman[428632]: 2025-10-02 19:50:18.580065106 +0000 UTC m=+0.214098065 container init 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:50:18 compute-0 podman[428632]: 2025-10-02 19:50:18.602430561 +0000 UTC m=+0.236463490 container start 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:50:18 compute-0 podman[428632]: 2025-10-02 19:50:18.608520143 +0000 UTC m=+0.242553072 container attach 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:50:18 compute-0 nova_compute[355794]: 2025-10-02 19:50:18.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 176 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s wr, 1 op/s
Oct 02 19:50:19 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 19:50:19 compute-0 distracted_rhodes[428646]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:50:19 compute-0 distracted_rhodes[428646]: --> relative data size: 1.0
Oct 02 19:50:19 compute-0 distracted_rhodes[428646]: --> All data devices are unavailable
Oct 02 19:50:19 compute-0 systemd[1]: libpod-05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061.scope: Deactivated successfully.
Oct 02 19:50:19 compute-0 systemd[1]: libpod-05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061.scope: Consumed 1.177s CPU time.
Oct 02 19:50:19 compute-0 conmon[428646]: conmon 05bdf1773df7287998f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061.scope/container/memory.events
Oct 02 19:50:19 compute-0 podman[428632]: 2025-10-02 19:50:19.907855404 +0000 UTC m=+1.541888363 container died 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c3a944476249a63031475df968f6e6c99c791f94cb6da008b09a0305c342f50-merged.mount: Deactivated successfully.
Oct 02 19:50:19 compute-0 podman[428632]: 2025-10-02 19:50:19.986923096 +0000 UTC m=+1.620956015 container remove 05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:50:20 compute-0 systemd[1]: libpod-conmon-05bdf1773df7287998f056e4d02644edc895bea4d81d12ba1154aa782c835061.scope: Deactivated successfully.
Oct 02 19:50:20 compute-0 sudo[428511]: pam_unix(sudo:session): session closed for user root
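
[Annotation] The distracted_rhodes container was the actual `ceph-volume lvm batch --no-auto` run from the sudo line above. Its "--> All data devices are unavailable" output means all three LVs already carry OSDs, so the batch exits as an idempotent no-op (the lvm list output further down confirms OSDs 0-2 on those LVs). A dry-run sketch to see that decision without creating anything; --report and --format are real ceph-volume flags, but this exact invocation is illustrative:

    import subprocess

    fsid = "6019f664-a1c2-5955-8391-692cb79a59f9"
    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    # --report makes lvm batch explain what it would do instead of doing it.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--",
         "lvm", "batch", "--no-auto", *lvs, "--report", "--format", "json"],
        capture_output=True, text=True)
    print(out.stdout or out.stderr)
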
Oct 02 19:50:20 compute-0 ceph-mon[191910]: pgmap v1356: 321 pgs: 321 active+clean; 176 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s wr, 1 op/s
Oct 02 19:50:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:50:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2488266297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:50:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:50:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2488266297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
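
[Annotation] The two client.openstack commands above are the periodic capacity poll an OpenStack service on 192.168.122.10 makes against the cluster: cluster-wide df plus the quota on the volumes pool. Both are plain mon commands; a sketch of the same poll from Python, assuming a ceph CLI with that client's keyring and the usual JSON keys for these commands:

    import json
    import subprocess

    def mon_command(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_command("df")                                   # {"prefix":"df"} above
    quota = mon_command("osd", "pool", "get-quota", "volumes")
    # Key names below are the customary ones for these commands, assumed here:
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
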
Oct 02 19:50:20 compute-0 sudo[428691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:20 compute-0 sudo[428691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:20 compute-0 sudo[428691]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:20 compute-0 sudo[428716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:50:20 compute-0 sudo[428716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:20 compute-0 sudo[428716]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:20 compute-0 sudo[428741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:20 compute-0 sudo[428741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:20 compute-0 sudo[428741]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:20 compute-0 sudo[428766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:50:20 compute-0 sudo[428766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 192 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Oct 02 19:50:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2488266297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:50:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2488266297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:50:21 compute-0 podman[428831]: 2025-10-02 19:50:21.075278788 +0000 UTC m=+0.106776020 container create 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:50:21 compute-0 nova_compute[355794]: 2025-10-02 19:50:21.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:21 compute-0 podman[428831]: 2025-10-02 19:50:21.013331681 +0000 UTC m=+0.044828933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:21 compute-0 systemd[1]: Started libpod-conmon-93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2.scope.
Oct 02 19:50:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:21 compute-0 podman[428831]: 2025-10-02 19:50:21.233117655 +0000 UTC m=+0.264614897 container init 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:50:21 compute-0 podman[428831]: 2025-10-02 19:50:21.249243074 +0000 UTC m=+0.280740296 container start 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 02 19:50:21 compute-0 podman[428831]: 2025-10-02 19:50:21.255740187 +0000 UTC m=+0.287237429 container attach 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:50:21 compute-0 busy_grothendieck[428847]: 167 167
Oct 02 19:50:21 compute-0 systemd[1]: libpod-93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2.scope: Deactivated successfully.
Oct 02 19:50:21 compute-0 conmon[428847]: conmon 93d05da65bdc13e35725 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2.scope/container/memory.events
Oct 02 19:50:21 compute-0 podman[428852]: 2025-10-02 19:50:21.345408742 +0000 UTC m=+0.052154628 container died 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 19:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91c8ec0b13c50401b9a420006c27844093e8908931fdbacbfab3b46f848ecf13-merged.mount: Deactivated successfully.
Oct 02 19:50:21 compute-0 podman[428852]: 2025-10-02 19:50:21.40215519 +0000 UTC m=+0.108901066 container remove 93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:50:21 compute-0 systemd[1]: libpod-conmon-93d05da65bdc13e35725973075249faae325a41621d195838ddc417b62223cd2.scope: Deactivated successfully.
Oct 02 19:50:21 compute-0 podman[428871]: 2025-10-02 19:50:21.740066516 +0000 UTC m=+0.101303355 container create a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:50:21 compute-0 podman[428871]: 2025-10-02 19:50:21.70486855 +0000 UTC m=+0.066105849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:21 compute-0 systemd[1]: Started libpod-conmon-a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008.scope.
Oct 02 19:50:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b9a5f40f4fc2cb9b4fc193db396bbe55abf2eb5f20d97437a9328d9edac398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b9a5f40f4fc2cb9b4fc193db396bbe55abf2eb5f20d97437a9328d9edac398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b9a5f40f4fc2cb9b4fc193db396bbe55abf2eb5f20d97437a9328d9edac398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b9a5f40f4fc2cb9b4fc193db396bbe55abf2eb5f20d97437a9328d9edac398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:21 compute-0 podman[428871]: 2025-10-02 19:50:21.927717997 +0000 UTC m=+0.288954846 container init a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 19:50:21 compute-0 podman[428871]: 2025-10-02 19:50:21.948591322 +0000 UTC m=+0.309828161 container start a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:50:21 compute-0 podman[428871]: 2025-10-02 19:50:21.954631572 +0000 UTC m=+0.315868401 container attach a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:50:22 compute-0 ceph-mon[191910]: pgmap v1357: 321 pgs: 321 active+clean; 192 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Oct 02 19:50:22 compute-0 affectionate_germain[428887]: {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     "0": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "devices": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "/dev/loop3"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             ],
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_name": "ceph_lv0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_size": "21470642176",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "name": "ceph_lv0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "tags": {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_name": "ceph",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.crush_device_class": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.encrypted": "0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_id": "0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.vdo": "0"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             },
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "vg_name": "ceph_vg0"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         }
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     ],
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     "1": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "devices": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "/dev/loop4"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             ],
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_name": "ceph_lv1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_size": "21470642176",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "name": "ceph_lv1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "tags": {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_name": "ceph",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.crush_device_class": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.encrypted": "0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_id": "1",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.vdo": "0"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             },
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "vg_name": "ceph_vg1"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         }
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     ],
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     "2": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "devices": [
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "/dev/loop5"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             ],
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_name": "ceph_lv2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_size": "21470642176",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "name": "ceph_lv2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "tags": {
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.cluster_name": "ceph",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.crush_device_class": "",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.encrypted": "0",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osd_id": "2",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:                 "ceph.vdo": "0"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             },
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "type": "block",
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:             "vg_name": "ceph_vg2"
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:         }
Oct 02 19:50:22 compute-0 affectionate_germain[428887]:     ]
Oct 02 19:50:22 compute-0 affectionate_germain[428887]: }
Oct 02 19:50:22 compute-0 systemd[1]: libpod-a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008.scope: Deactivated successfully.
Oct 02 19:50:22 compute-0 podman[428871]: 2025-10-02 19:50:22.815517365 +0000 UTC m=+1.176754204 container died a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7b9a5f40f4fc2cb9b4fc193db396bbe55abf2eb5f20d97437a9328d9edac398-merged.mount: Deactivated successfully.
Oct 02 19:50:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 193 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Oct 02 19:50:22 compute-0 podman[428871]: 2025-10-02 19:50:22.917673712 +0000 UTC m=+1.278910531 container remove a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:50:22 compute-0 sudo[428766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:22 compute-0 systemd[1]: libpod-conmon-a66d71823eb25cbcec56ac72105bce72c98f707f86b54677896f4976d9b61008.scope: Deactivated successfully.
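
[Annotation] The affectionate_germain output above is `ceph-volume lvm list --format json`: a map of OSD id to the LVs backing it, with the cluster and OSD fsids carried as LV tags. A minimal sketch that summarises it, assuming the JSON above was saved to a file (the filename is hypothetical):

    import json

    with open("lvm_list.json") as f:  # the JSON block above, captured to a file
        osds = json.load(f)

    # One line per OSD: id, LV path, OSD fsid, and the physical device behind it.
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"], lv["devices"][0])
    # -> 0 /dev/ceph_vg0/ceph_lv0 dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48 /dev/loop3
    #    (and likewise for OSDs 1 and 2)
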
Oct 02 19:50:23 compute-0 sudo[428906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:23 compute-0 sudo[428906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:23 compute-0 sudo[428906]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:23 compute-0 sudo[428931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:50:23 compute-0 sudo[428931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:23 compute-0 sudo[428931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:23 compute-0 sudo[428956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:23 compute-0 sudo[428956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:23 compute-0 sudo[428956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:23 compute-0 sudo[428981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:50:23 compute-0 sudo[428981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:23 compute-0 nova_compute[355794]: 2025-10-02 19:50:23.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:50:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 7627 writes, 30K keys, 7627 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7627 writes, 1582 syncs, 4.82 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 634 writes, 2061 keys, 634 commit groups, 1.0 writes per commit group, ingest: 2.08 MB, 0.00 MB/s
                                            Interval WAL: 634 writes, 263 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 19:50:24 compute-0 ceph-mon[191910]: pgmap v1358: 321 pgs: 321 active+clean; 193 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.125254093 +0000 UTC m=+0.107074309 container create 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.082220638 +0000 UTC m=+0.064040904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:24 compute-0 systemd[1]: Started libpod-conmon-36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3.scope.
Oct 02 19:50:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.287248051 +0000 UTC m=+0.269068277 container init 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.303522613 +0000 UTC m=+0.285342819 container start 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.310103378 +0000 UTC m=+0.291923634 container attach 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:50:24 compute-0 nostalgic_ramanujan[429060]: 167 167
Oct 02 19:50:24 compute-0 systemd[1]: libpod-36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3.scope: Deactivated successfully.
Oct 02 19:50:24 compute-0 conmon[429060]: conmon 36a3535731b1c4fe8e4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3.scope/container/memory.events
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.319305993 +0000 UTC m=+0.301126199 container died 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:50:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0b4875b3fb54f5c169886535d48c05da7d85dd62cf04d8cab8f8ea752c162b-merged.mount: Deactivated successfully.
Oct 02 19:50:24 compute-0 podman[429045]: 2025-10-02 19:50:24.393254409 +0000 UTC m=+0.375074605 container remove 36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ramanujan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:50:24 compute-0 systemd[1]: libpod-conmon-36a3535731b1c4fe8e4ce3d0f3fd774a34aebd29c6c3ab8699027b7dcb1568e3.scope: Deactivated successfully.
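
The lines above trace one complete short-lived cephadm helper container (nostalgic_ramanujan): image pull, create, init, start, attach, exit ("died"), and removal, all within roughly 300 ms; the conmon "Failed to open cgroups file" warning is benign here, since the scope is torn down before conmon can read memory.events. A minimal sketch for following the same lifecycle transitions from the libpod event stream, assuming `podman events` is reachable with the caller's privileges:

    import json
    import subprocess

    # Follow libpod lifecycle events for one container. Key casing in the
    # JSON ("Status", "ID", "Time") can vary across podman versions, so
    # .get() is used defensively.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "container=nostalgic_ramanujan"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Statuses seen above: create, init, start, attach, died, remove
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Time"))
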
Oct 02 19:50:24 compute-0 podman[429082]: 2025-10-02 19:50:24.709738566 +0000 UTC m=+0.103566876 container create 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:50:24 compute-0 podman[429082]: 2025-10-02 19:50:24.682111641 +0000 UTC m=+0.075940041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:50:24 compute-0 systemd[1]: Started libpod-conmon-8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304.scope.
Oct 02 19:50:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5f9e5818bcd1033983d1737b99c47489a4aac8a9a4b2c45fd57be8fdbf5a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5f9e5818bcd1033983d1737b99c47489a4aac8a9a4b2c45fd57be8fdbf5a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5f9e5818bcd1033983d1737b99c47489a4aac8a9a4b2c45fd57be8fdbf5a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5f9e5818bcd1033983d1737b99c47489a4aac8a9a4b2c45fd57be8fdbf5a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:50:24 compute-0 podman[429082]: 2025-10-02 19:50:24.869945376 +0000 UTC m=+0.263773736 container init 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:50:24 compute-0 podman[429082]: 2025-10-02 19:50:24.885816298 +0000 UTC m=+0.279644608 container start 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:50:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:50:24 compute-0 podman[429082]: 2025-10-02 19:50:24.892271619 +0000 UTC m=+0.286099959 container attach 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:50:24 compute-0 podman[429097]: 2025-10-02 19:50:24.901798973 +0000 UTC m=+0.127240365 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:50:24 compute-0 podman[429096]: 2025-10-02 19:50:24.920702015 +0000 UTC m=+0.149606969 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:50:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:26 compute-0 dazzling_turing[429116]: {
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_id": 1,
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "type": "bluestore"
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     },
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_id": 2,
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "type": "bluestore"
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     },
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_id": 0,
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:         "type": "bluestore"
Oct 02 19:50:26 compute-0 dazzling_turing[429116]:     }
Oct 02 19:50:26 compute-0 dazzling_turing[429116]: }
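
The JSON block printed by dazzling_turing is a ceph-volume-style inventory keyed by OSD UUID; each entry carries the cluster ceph_fsid, the backing LVM device, the osd_id, and the store type. A small sketch, with the block above embedded verbatim, that inverts it into an osd_id -> device map:

    import json

    # The inventory block above, captured verbatim from the container output.
    raw = '''{
        "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {"ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1,
            "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd", "type": "bluestore"},
        "afe0acfe-daf6-4901-80df-bc50bc9ae508": {"ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2,
            "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508", "type": "bluestore"},
        "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {"ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0,
            "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48", "type": "bluestore"}
    }'''

    inventory = json.loads(raw)
    by_osd_id = {e["osd_id"]: e["device"]
                 for e in inventory.values() if e["type"] == "bluestore"}
    print(by_osd_id)
    # {1: '/dev/mapper/ceph_vg1-ceph_lv1', 2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}
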
Oct 02 19:50:26 compute-0 nova_compute[355794]: 2025-10-02 19:50:26.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:26 compute-0 ceph-mon[191910]: pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:50:26 compute-0 systemd[1]: libpod-8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304.scope: Deactivated successfully.
Oct 02 19:50:26 compute-0 systemd[1]: libpod-8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304.scope: Consumed 1.236s CPU time.
Oct 02 19:50:26 compute-0 conmon[429116]: conmon 8ad8b4570fdf7830d693 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304.scope/container/memory.events
Oct 02 19:50:26 compute-0 podman[429174]: 2025-10-02 19:50:26.277500906 +0000 UTC m=+0.082408933 container died 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f5f9e5818bcd1033983d1737b99c47489a4aac8a9a4b2c45fd57be8fdbf5a23-merged.mount: Deactivated successfully.
Oct 02 19:50:26 compute-0 podman[429174]: 2025-10-02 19:50:26.394324932 +0000 UTC m=+0.199232909 container remove 8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:50:26 compute-0 systemd[1]: libpod-conmon-8ad8b4570fdf7830d693a971625e82f3c5fd825e45716352ccc8f077c97c3304.scope: Deactivated successfully.
Oct 02 19:50:26 compute-0 sudo[428981]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:50:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:50:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev bebb906e-9614-4916-83f3-56b9b87615c6 does not exist
Oct 02 19:50:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8ba8b488-c435-41d0-b488-3d511516a55d does not exist
Oct 02 19:50:26 compute-0 sudo[429188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:50:26 compute-0 sudo[429188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:26 compute-0 sudo[429188]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:26 compute-0 sudo[429213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:50:26 compute-0 sudo[429213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:50:26 compute-0 sudo[429213]: pam_unix(sudo:session): session closed for user root
Oct 02 19:50:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Oct 02 19:50:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:50:27 compute-0 ceph-mon[191910]: pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
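
The recurring pgmap lines follow a fixed shape (version, PG counts and states, data/used/avail totals, then an optional client-throughput tail), so they can be scraped with a single regular expression; a sketch against the v1360 line above:

    import re

    # Message text copied from the pgmap entry above (v1360); the trailing
    # client-throughput segment is optional and left unparsed here.
    line = ("pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, "
            "301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); "
                  r"(.+?) data, (.+?) used, (.+?) / (.+?) avail", line)
    print(m.groups())
    # ('1360', '321', '321', 'active+clean', '201 MiB', '301 MiB', '60 GiB', '60 GiB')
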
Oct 02 19:50:28 compute-0 nova_compute[355794]: 2025-10-02 19:50:28.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Oct 02 19:50:29 compute-0 podman[157186]: time="2025-10-02T19:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:50:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:50:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9038 "" "Go-http-client/1.1"
Oct 02 19:50:29 compute-0 ceph-mon[191910]: pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Oct 02 19:50:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:50:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6379 writes, 25K keys, 6379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6379 writes, 1172 syncs, 5.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 495 writes, 1494 keys, 495 commit groups, 1.0 writes per commit group, ingest: 1.53 MB, 0.00 MB/s
                                            Interval WAL: 495 writes, 204 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
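
The WAL figures in the RocksDB dump are internally consistent: 6379 cumulative writes over 1172 syncs is the logged 5.44 writes per sync, and 495 interval writes over 204 syncs is 2.43. A throwaway check:

    # WAL counters copied from the "DB Stats" dump above.
    cum_writes, cum_syncs = 6379, 1172
    int_writes, int_syncs = 495, 204
    print(round(cum_writes / cum_syncs, 2))  # 5.44, as logged
    print(round(int_writes / int_syncs, 2))  # 2.43, as logged
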
Oct 02 19:50:30 compute-0 podman[429238]: 2025-10-02 19:50:30.746131975 +0000 UTC m=+0.157969232 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:50:30 compute-0 podman[429239]: 2025-10-02 19:50:30.752202986 +0000 UTC m=+0.156053511 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Oct 02 19:50:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Oct 02 19:50:31 compute-0 nova_compute[355794]: 2025-10-02 19:50:31.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: ERROR   19:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:50:31 compute-0 openstack_network_exporter[372736]: 
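
These exporter errors mean no appctl control socket was found for ovn-northd or ovsdb-server, which is expected on a compute node where ovn-northd does not run; appctl reaches a daemon through its per-process control socket. A sketch that probes for such sockets under the run directories the exporter mounts (/run/openvswitch and /run/ovn, per its config further down); the *.ctl naming is an assumption for illustration:

    import glob

    # Candidate appctl control-socket locations; both the directories and
    # the *.ctl pattern are assumptions based on the exporter's mounts.
    for pattern in ("/run/ovn/*.ctl", "/run/openvswitch/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")
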
Oct 02 19:50:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 19:50:31 compute-0 ceph-mon[191910]: pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Oct 02 19:50:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:50:32.301 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:50:32.301 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:50:32.302 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:50:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:50:33 compute-0 podman[429277]: 2025-10-02 19:50:33.712041244 +0000 UTC m=+0.118949843 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:50:33 compute-0 podman[429276]: 2025-10-02 19:50:33.729302663 +0000 UTC m=+0.143742933 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:50:33 compute-0 nova_compute[355794]: 2025-10-02 19:50:33.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:33 compute-0 podman[429278]: 2025-10-02 19:50:33.76904401 +0000 UTC m=+0.162363348 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
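
Each health_status=healthy entry here is produced by podman's healthcheck timer executing the 'test' command from config_data against the mounted /openstack/healthcheck script. The same check can be re-run by hand; a sketch, with the container name taken from the log (exit status 0 means healthy):

    import subprocess

    # Re-run the container's own healthcheck, as podman's timer does.
    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)
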
Oct 02 19:50:33 compute-0 ceph-mon[191910]: pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 02 19:50:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 02 19:50:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:36 compute-0 ceph-mon[191910]: pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 02 19:50:36 compute-0 nova_compute[355794]: 2025-10-02 19:50:36.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:50:37 compute-0 podman[429337]: 2025-10-02 19:50:37.704158072 +0000 UTC m=+0.116543710 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
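
node_exporter is published on host port 9100 (ports: ['9100:9100']) and started with --web.config.file, so whether a plain-HTTP scrape works depends on what the mounted node_exporter.yaml enables; a sketch under the assumption that TLS is not enforced:

    import urllib.request

    # Port from the node_exporter config above; plain HTTP is an assumption,
    # since the mounted web.config.file may configure TLS instead.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines()[:10]:
            print(line)
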
Oct 02 19:50:37 compute-0 podman[429336]: 2025-10-02 19:50:37.751915432 +0000 UTC m=+0.168112172 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, version=9.6, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 02 19:50:38 compute-0 ceph-mon[191910]: pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:50:38 compute-0 nova_compute[355794]: 2025-10-02 19:50:38.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:40 compute-0 ceph-mon[191910]: pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:41 compute-0 nova_compute[355794]: 2025-10-02 19:50:41.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:42 compute-0 ceph-mon[191910]: pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:43 compute-0 nova_compute[355794]: 2025-10-02 19:50:43.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:44 compute-0 ceph-mon[191910]: pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:45 compute-0 nova_compute[355794]: 2025-10-02 19:50:45.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:45 compute-0 nova_compute[355794]: 2025-10-02 19:50:45.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:50:46 compute-0 ceph-mon[191910]: pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:46 compute-0 nova_compute[355794]: 2025-10-02 19:50:46.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:47 compute-0 nova_compute[355794]: 2025-10-02 19:50:47.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:48 compute-0 ceph-mon[191910]: pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:48 compute-0 podman[429377]: 2025-10-02 19:50:48.700115516 +0000 UTC m=+0.122315503 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:50:48 compute-0 nova_compute[355794]: 2025-10-02 19:50:48.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:49 compute-0 nova_compute[355794]: 2025-10-02 19:50:49.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:50 compute-0 ceph-mon[191910]: pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:50:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.620 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.621 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.622 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.622 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:50:50 compute-0 nova_compute[355794]: 2025-10-02 19:50:50.623 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:50:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:50:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223813846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2223813846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.150 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
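
The resource tracker sizes Ceph-backed storage by shelling out to exactly the command logged here. A sketch that reruns it and extracts the cluster totals, assuming the usual `ceph df --format=json` layout with a top-level "stats" object:

    import json
    import subprocess

    # The exact command nova_compute logs via oslo_concurrency.processutils.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    stats = json.loads(out)["stats"]  # top-level "stats" layout assumed
    gib = 1024 ** 3
    print("total: %.1f GiB" % (stats["total_bytes"] / gib))
    print("avail: %.1f GiB" % (stats["total_avail_bytes"] / gib))
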
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.287 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.287 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.295 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.295 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.296 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.302 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.303 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.303 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.952 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.955 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.888851165771484GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.956 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:51 compute-0 nova_compute[355794]: 2025-10-02 19:50:51.957 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
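The two lockutils lines above show the resource tracker serializing on the "compute_resources" semaphore before recomputing the resource view. A minimal sketch of the same oslo.concurrency pattern, using the decorator form; the lock name matches the log, the function body is illustrative and not Nova's actual code:

```python
# Minimal sketch of the oslo.concurrency locking pattern seen above.
# The lock name "compute_resources" is taken from the log; the body is
# illustrative, not Nova's actual resource tracker.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def _update_available_resource():
    # Everything here runs under the named semaphore, so concurrent
    # periodic tasks cannot mutate the resource view at the same time.
    print('recomputing hypervisor resource view')

_update_available_resource()
```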
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.063 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.066 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.067 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
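The final resource view is consistent with the three placement allocations logged just above it. A quick cross-check, using only values copied from the log (the 512 MB reserved term matches the MEMORY_MB "reserved" field in the inventory logged below):

```python
# Cross-check of the "Final resource view" line against the three
# instance allocations logged above (all values copied from the log).
allocations = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 3
reserved_ram_mb = 512  # matches MEMORY_MB 'reserved' in the inventory line

used_ram = sum(a['MEMORY_MB'] for a in allocations) + reserved_ram_mb
used_disk = sum(a['DISK_GB'] for a in allocations)
used_vcpus = sum(a['VCPU'] for a in allocations)

assert (used_ram, used_disk, used_vcpus) == (2048, 6, 3)  # as logged
```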
Oct 02 19:50:52 compute-0 ceph-mon[191910]: pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.153 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:50:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573651919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.666 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
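Nova shells out to `ceph df` here to size the RBD-backed disk pool. A standalone sketch of the same call, assuming the ceph CLI is reachable with the same `--id`/`--conf` as in the log and that the JSON layout follows the usual `pools`/`stats` shape:

```python
# Standalone equivalent of the "ceph df" call Nova runs above.
# Assumes CLI access with the same --id/--conf seen in the log.
import json
import subprocess

out = subprocess.check_output([
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
stats = json.loads(out)
for pool in stats['pools']:
    print(pool['name'], pool['stats']['bytes_used'])
```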
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.676 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.830 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
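The inventory line encodes how placement derives schedulable capacity: (total - reserved) × allocation_ratio per resource class. Applying that to the values in the log:

```python
# Schedulable capacity implied by the inventory data above:
# capacity = (total - reserved) * allocation_ratio, per resource class.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```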
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.834 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:50:52 compute-0 nova_compute[355794]: 2025-10-02 19:50:52.835 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:50:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2573651919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:50:53 compute-0 nova_compute[355794]: 2025-10-02 19:50:53.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:53 compute-0 nova_compute[355794]: 2025-10-02 19:50:53.837 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:53 compute-0 nova_compute[355794]: 2025-10-02 19:50:53.837 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:50:53 compute-0 nova_compute[355794]: 2025-10-02 19:50:53.837 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:50:54 compute-0 ceph-mon[191910]: pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:50:54 compute-0 nova_compute[355794]: 2025-10-02 19:50:54.821 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:50:54 compute-0 nova_compute[355794]: 2025-10-02 19:50:54.821 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:50:54 compute-0 nova_compute[355794]: 2025-10-02 19:50:54.821 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:50:54 compute-0 nova_compute[355794]: 2025-10-02 19:50:54.822 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:50:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:50:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:50:55 compute-0 podman[429442]: 2025-10-02 19:50:55.702470413 +0000 UTC m=+0.127496602 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:50:55 compute-0 podman[429441]: 2025-10-02 19:50:55.702867493 +0000 UTC m=+0.128039156 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:50:56 compute-0 nova_compute[355794]: 2025-10-02 19:50:56.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:56 compute-0 ceph-mon[191910]: pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:50:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.146 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
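The refreshed info cache is a JSON list of VIFs; pulling the fixed and floating addresses out of the entry above is a matter of walking network → subnets → ips. A sketch over the same structure, trimmed to the fields it touches:

```python
# Extract fixed/floating IPs from the network_info structure logged
# above (trimmed to just the fields this example reads).
network_info = [{
    "id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
    "network": {"subnets": [{
        "ips": [{
            "address": "192.168.0.37",
            "type": "fixed",
            "floating_ips": [{"address": "192.168.122.205",
                              "type": "floating"}],
        }],
    }]},
}]
for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], floats)
```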
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.252 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.252 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.253 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.253 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:57 compute-0 nova_compute[355794]: 2025-10-02 19:50:57.985 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:58 compute-0 ceph-mon[191910]: pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:50:58 compute-0 nova_compute[355794]: 2025-10-02 19:50:58.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:50:59 compute-0 podman[157186]: time="2025-10-02T19:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:50:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:50:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Oct 02 19:51:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:00 compute-0 ceph-mon[191910]: pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:51:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:51:01 compute-0 nova_compute[355794]: 2025-10-02 19:51:01.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:01 compute-0 openstack_network_exporter[372736]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:01 compute-0 openstack_network_exporter[372736]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:01 compute-0 openstack_network_exporter[372736]: ERROR   19:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:51:01 compute-0 openstack_network_exporter[372736]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct 02 19:51:01 compute-0 openstack_network_exporter[372736]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:51:01 compute-0 podman[429483]: 2025-10-02 19:51:01.719896488 +0000 UTC m=+0.138255757 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64)
Oct 02 19:51:01 compute-0 podman[429482]: 2025-10-02 19:51:01.72220352 +0000 UTC m=+0.140572539 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Oct 02 19:51:02 compute-0 ceph-mon[191910]: pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:51:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:51:03
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms']
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:03 compute-0 nova_compute[355794]: 2025-10-02 19:51:03.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.296 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.297 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
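The manager has just warned that it has more pollsters than workers, then fans the work out on a ThreadPoolExecutor with a single thread. A toy version of that dispatch pattern; the pollster names are illustrative, and the worker count of 1 comes from the "[1] threads" message above:

```python
# Toy version of the pollster dispatch pattern in the log: more tasks
# than workers, executed on a shared ThreadPoolExecutor.
from concurrent.futures import ThreadPoolExecutor

pollsters = ['disk.device.read.requests', 'disk.device.usage',
             'cpu', 'memory.usage']  # illustrative names

def poll(name):
    # A real pollster would query libvirt here and emit samples.
    return f'polled {name}'

with ThreadPoolExecutor(max_workers=1) as executor:  # 1 thread, as logged
    for result in executor.map(poll, pollsters):
        print(result)
```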
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:04 compute-0 ceph-mon[191910]: pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.314 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.315 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b88114e8-b15d-4a78-ac15-3dd7ee30b949 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:51:04 compute-0 podman[429522]: 2025-10-02 19:51:04.699107111 +0000 UTC m=+0.120207497 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:51:04 compute-0 podman[429523]: 2025-10-02 19:51:04.704662819 +0000 UTC m=+0.120545527 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:51:04 compute-0 podman[429524]: 2025-10-02 19:51:04.757565906 +0000 UTC m=+0.171989355 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 19:51:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.998 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 02 Oct 2025 19:51:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0d4bf11a-01a2-491b-bb5e-9f8888e36b94 x-openstack-request-id: req-0d4bf11a-01a2-491b-bb5e-9f8888e36b94 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.999 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b88114e8-b15d-4a78-ac15-3dd7ee30b949", "name": "vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf", "status": "ACTIVE", "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "user_id": "811fb7ac717e4ba9b9874e5454ee08f4", "metadata": {"metering.server_group": "d2d7e2b0-01e0-44b1-b2c7-fe502b333743"}, "hostId": "0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d", "image": {"id": "ce28338d-119e-49e1-ab67-60da8882593a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce28338d-119e-49e1-ab67-60da8882593a"}]}, "flavor": {"id": "8f0521f8-dc4e-4ca1-bf77-f443ae74db03", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8f0521f8-dc4e-4ca1-bf77-f443ae74db03"}]}, "created": "2025-10-02T19:49:34Z", "updated": "2025-10-02T19:49:44Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.207", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c5:df:6b"}, {"version": 4, "addr": "192.168.122.220", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c5:df:6b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b88114e8-b15d-4a78-ac15-3dd7ee30b949"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b88114e8-b15d-4a78-ac15-3dd7ee30b949"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:49:44.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:51:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:04.999 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b88114e8-b15d-4a78-ac15-3dd7ee30b949 used request id req-0d4bf11a-01a2-491b-bb5e-9f8888e36b94 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
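The three keystoneauth debug lines above amount to a token-authenticated GET against the Nova API. A bare-bones equivalent with `requests`; the endpoint, server UUID, and microversion header come from the log, while the token value is a placeholder (the log only shows its SHA256 digest) and the CA bundle path is an assumption based on the mounts listed earlier:

```python
# Bare-bones equivalent of the novaclient GET logged above.
# TOKEN is a placeholder; the CA bundle path is an assumption taken
# from the ca-bundle.trust.crt mount seen in the container configs.
import requests

TOKEN = '<keystone-token>'
url = ('https://nova-internal.openstack.svc:8774/v2.1/servers/'
       'b88114e8-b15d-4a78-ac15-3dd7ee30b949')
resp = requests.get(url, headers={
    'Accept': 'application/json',
    'X-Auth-Token': TOKEN,
    'X-OpenStack-Nova-API-Version': '2.1',
}, verify='/etc/pki/tls/certs/ca-bundle.trust.crt')
server = resp.json()['server']
print(server['name'], server['status'],
      server['OS-EXT-SRV-ATTR:instance_name'])
```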
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.004 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'name': 'vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.008 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'name': 'vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:51:05.009671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.061 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.062 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.062 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.138 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.139 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.140 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.220 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.220 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.220 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
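The entries above trace one complete cycle of the polling manager for disk.device.read.requests: discovery, a coordination check (no hashring group, so no coordination needed), a heartbeat, one sample per block device of each of the three instance UUIDs, and a closing INFO line. A minimal Python sketch of that control flow; every name below is a hypothetical stand-in, not the real ceilometer.polling.manager API:

    # Minimal sketch of the per-pollster cycle logged above (hypothetical
    # names, condensed from what the DEBUG lines show).
    def run_pollster(name, pollster, discover, heartbeat, group=None):
        if group is None:
            pass  # no hashring group configured: poll everything locally
        heartbeat(name)  # "Pollster heartbeat update: <name>"
        samples = []
        for instance in discover("local_instances"):
            # one sample per (instance, block device) pair, hence the
            # three "volume: ..." lines per instance UUID above
            for stats in pollster.inspect(instance):
                samples.append(pollster.stats_to_sample(instance, stats))
        return samples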
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.222 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.222 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:51:05.222469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
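Interleaved with the agent, the ceph-mon cache autotuner reports its current split. Taking the three allocations at face value (how the inc/full/kv pools relate to each other is an assumption based on their names), they account for almost the whole cache_size target:

    # Values copied from the ceph-mon line above; the check only shows
    # that the three reported pools nearly exhaust cache_size.
    cache_size = 1020054731          # ~973 MiB
    inc_alloc  = 348127232           # 332 MiB exactly
    full_alloc = 348127232           # 332 MiB exactly
    kv_alloc   = 322961408           # 308 MiB exactly
    total = inc_alloc + full_alloc + kv_alloc
    print(f"{total} / {cache_size} = {total / cache_size:.3f}")  # 0.999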
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.243 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.244 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.244 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.272 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.282 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.282 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.330 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.331 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.331 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
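For disk.device.usage (and disk.device.allocation further down) every instance reports the same three readings: two devices of exactly 1 GiB and a third of a few hundred KiB, which is plausibly a config drive (that reading of the third device is an assumption, not something the log states):

    # Readings from the usage samples above, converted for legibility.
    print(1073741824 / 2**30)   # 1.0   -> two 1 GiB virtual disks
    print(485376 / 1024)        # 474.0 KiB (d4e04444-...)
    print(583680 / 1024)        # 570.0 KiB (b88114e8-..., 4cdbea11-...)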
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.333 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:51:05.333288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.334 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.334 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.334 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 41701376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.335 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.335 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.336 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 41807872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.336 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.337 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.338 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.339 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.339 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.339 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 9451075277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.340 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 22315613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.340 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.340 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 6231088971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.341 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 36317650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.341 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
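The disk.device.write.latency meter is a cumulative counter in nanoseconds (libvirt's total write-time block statistic), so the large volumes above are running totals rather than per-interval latencies; assuming that documented unit applies here:

    # First instance's reading from the log above, in human units.
    wr_total_time_ns = 7_285_327_854
    print(wr_total_time_ns / 1e9)  # ~7.29 s spent in writes so far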
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.342 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:51:05.338607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:51:05.342697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.371 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.402 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.439 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
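All three instances report power.state volume 1. Assuming the pollster forwards libvirt's domain state unchanged (which matches the values seen here), 1 is VIR_DOMAIN_RUNNING:

    # Standard libvirt virDomainState values; the samples above are all 1.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # running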
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.440 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.441 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.441 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.442 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.442 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.442 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.443 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.443 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:51:05.440458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
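Note the PIDs in these entries: the poller (PID 14) emits the "heartbeat" debug line, while "Updated heartbeat for ..." is always written by PID 12, often a few entries later. That interleaving suggests heartbeats are handed to a separate status process over an IPC queue; the sketch below shows that pattern only, the queue design is an assumption and not confirmed ceilometer internals:

    # Hypothetical cross-process heartbeat matching the PID-14 / PID-12
    # interleaving above.
    import multiprocessing as mp

    def status_writer(queue):
        while True:
            name, stamp = queue.get()
            if name is None:
                break  # sentinel: shut down
            print(f"Updated heartbeat for {name} ({stamp})")

    if __name__ == "__main__":
        q = mp.Queue()
        writer = mp.Process(target=status_writer, args=(q,))
        writer.start()
        q.put(("disk.device.write.requests", "2025-10-02T19:51:05.440458"))
        q.put((None, None))
        writer.join()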
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:51:05.445135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.450 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.455 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b88114e8-b15d-4a78-ac15-3dd7ee30b949 / tap55da210c-64 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.456 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.461 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
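network.incoming.bytes.delta illustrates how delta meters behave: the inspector diffs the current cumulative rx counter against the previous poll, and when there is "No delta meter predecessor" for a vNIC (as for tap55da210c-64 above) the first sample comes out as 0. A sketch of that logic, with a hypothetical cache standing in for the inspector's real state:

    # Hypothetical per-vNIC cache; the real inspector keeps similar state.
    _prev_rx = {}

    def incoming_bytes_delta(instance_id, vnic, rx_bytes):
        key = (instance_id, vnic)
        if key not in _prev_rx:
            _prev_rx[key] = rx_bytes
            return 0  # "No delta meter predecessor": nothing to diff yet
        delta = rx_bytes - _prev_rx[key]
        _prev_rx[key] = rx_bytes
        return delta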
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.464 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:51:05.462947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.464 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:51:05.464722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf>]
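The ERROR here is the manager's blacklisting path: LibvirtInspector exposes only cumulative counters, so a *.rate pollster cannot produce samples and raises PollsterPermanentError carrying the affected resources, after which the manager stops polling them for that meter. A simplified standalone sketch; the exception name comes from the traceback above, while the surrounding manager logic is condensed and hypothetical:

    class PollsterPermanentError(Exception):
        """Stand-in for ceilometer.polling.plugin_base.PollsterPermanentError."""
        def __init__(self, fail_res_list):
            super().__init__(fail_res_list)
            self.fail_res_list = fail_res_list

    def poll_once(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return pollster.get_samples(todo)
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling [...] anymore!"
            blacklist.extend(err.fail_res_list)
            return []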
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:51:05.466052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:51:05.467604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.468 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.468 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.468 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets volume: 35 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.469 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:51:05.469713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.470 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.470 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.471 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.472 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.472 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:51:05.472129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.473 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.474 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.475 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.475 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.475 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:51:05.475014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.476 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.477 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:51:05.477500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.479 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.479 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.479 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.480 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.480 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.481 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.482 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.483 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.483 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes volume: 4830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:51:05.482858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:51:05.485265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.486 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.486 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.486 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.487 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.487 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.488 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.488 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.488 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
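Each task above also logs a coordination gate: the manager checks whether the pollster's source names a coordination group, and with group name [None] it skips the hash-ring test and polls every local instance itself. A rough sketch of that decision; the md5-based ring is a toy stand-in for the real tooz hash ring and the member list is invented:

```python
# Rough sketch of the coordination gate logged above ("Checking if we need
# coordination ... group name [None]"). The md5 ring is a toy stand-in for
# the real tooz hash ring; member names are invented.
import hashlib

def ring_owner(resource_id: str, members: list) -> str:
    idx = int(hashlib.md5(resource_id.encode()).hexdigest(), 16) % len(members)
    return members[idx]

def should_poll(resource_id: str, group_name, members, me: str) -> bool:
    if group_name is None:
        # "not configured in a source for polling that requires coordination"
        return True
    return ring_owner(resource_id, members) == me

print(should_poll("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", None, [], "compute-0"))
```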
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.490 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.490 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf>]
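The ERROR above is ceilometer's permanent blacklisting path: the libvirt inspector reports it can never serve this rate meter for the instance, the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError (named in the message itself), and the manager stops polling that resource on this source. A hedged re-creation of the pattern; the class body and the inspector attribute are illustrative only:

```python
# Hedged re-creation of the blacklisting seen in the ERROR line above.
# The real exception lives in ceilometer.polling.plugin_base; this class
# body and the provides_rate_meters attribute are illustrative only.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources   # resources the manager should drop

def get_samples(inspector, resources):
    if not getattr(inspector, "provides_rate_meters", False):
        # "LibvirtInspector does not provide data for OutgoingBytesRatePollster"
        raise PollsterPermanentError(resources)
    return []
```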
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.491 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:51:05.490151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.492 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.492 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:51:05.492258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.493 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.493 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/memory.usage volume: 49.15625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
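The memory.usage volumes above (48.88, 49.06, 49.16) are megabytes, fractional because they derive from KiB-granular counters. One plausible derivation, and an assumption here, is libvirt's dom.memoryStats() with usage = available - unused:

```python
# Sketch: a memory.usage value in MB from KiB counters, assuming
# usage = available - unused (a plausible derivation, not confirmed here).
def memory_usage_mb(mem_stats: dict) -> float:
    used_kib = mem_stats["available"] - mem_stats["unused"]
    return used_kib / 1024.0

# 48.8828125 MB, as logged for d4e04444-..., is exactly 50056 KiB in use:
assert memory_usage_mb(
    {"available": 1_048_576, "unused": 1_048_576 - 50_056}) == 48.8828125
```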
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.495 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2394 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:51:05.495032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.495 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.496 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes volume: 5233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.497 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:51:05.497356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.499 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.500 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.500 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:51:05.499028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.500 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.501 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.501 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
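disk.device.allocation and disk.device.capacity each emit one sample per block device, which is why every instance above contributes three values (two 1 GiB devices plus a small third one). With the libvirt python binding both meters map onto dom.blockInfo(), which returns [capacity, allocation, physical] in bytes; the connection URI and device names below are assumptions for this guest:

```python
# Sketch: where the per-device numbers above come from. blockInfo(dev)
# returns [capacity, allocation, physical] in bytes; the connection URI
# and device list are assumptions for this guest.
import libvirt  # python3-libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77")
for dev in ("vda", "vdb"):            # real names come from the guest XML
    capacity, allocation, physical = dom.blockInfo(dev)
    print(f"{dom.UUIDString()}/disk.device.capacity volume: {capacity}")
    print(f"{dom.UUIDString()}/disk.device.allocation volume: {allocation}")
conn.close()
```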
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.502 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:51:05.502252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.504 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.504 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.504 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 38740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.505 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/cpu volume: 33090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/cpu volume: 259470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
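The cpu volumes above are cumulative guest CPU time in nanoseconds (259470000000 for 4cdbea11-... is about 259.5 s since boot). Utilization comes from the delta between two polls; a worked example with an assumed 30 s polling interval:

```python
# Worked example: cumulative cpu counters (nanoseconds, as logged above)
# turned into a utilisation percentage between two polls.
def cpu_util_percent(cpu_ns_prev: int, cpu_ns_now: int,
                     wall_s: float, vcpus: int = 1) -> float:
    delta_s = (cpu_ns_now - cpu_ns_prev) / 1e9   # ns of CPU time -> seconds
    return 100.0 * delta_s / (wall_s * vcpus)

# If 4cdbea11-... gained 3e9 ns of CPU time over a 30 s polling interval:
print(cpu_util_percent(259_470_000_000 - 3_000_000_000,
                       259_470_000_000, wall_s=30.0))   # -> 10.0
```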
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.507 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 1897675157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 270926831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 180472901 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 1764876744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:51:05.503910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:51:05.505404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:51:05.506964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.509 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 323566119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.509 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 193343486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:51:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
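With the cycle closed out above (every task ends in a "Finished processing pollster [X]" line), the INFO pairs make it easy to audit a polling pass straight from the journal: each "Polling pollster X" should have a matching "Finished polling pollster X". A small checker over an excerpt like this one:

```python
# Audit a polling cycle from a journal excerpt: pair each
# "Polling pollster X" INFO line with its "Finished polling pollster X".
import re
import sys

def unfinished_pollsters(lines):
    started, finished = set(), set()
    for line in lines:
        m = re.search(r"(Finished polling|Polling) pollster (\S+)", line)
        if m:
            (finished if m.group(1).startswith("Finished")
             else started).add(m.group(2))
    return started - finished

if __name__ == "__main__":
    # usage: feed journal lines on stdin
    print(sorted(unfinished_pollsters(sys.stdin)))
```

On this excerpt it would flag network.outgoing.bytes.rate, whose task ended in the PollsterPermanentError above instead of a "Finished polling" line.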
Oct 02 19:51:06 compute-0 nova_compute[355794]: 2025-10-02 19:51:06.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:06 compute-0 ceph-mon[191910]: pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 19:51:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:08 compute-0 ceph-mon[191910]: pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:08 compute-0 podman[429582]: 2025-10-02 19:51:08.7190559 +0000 UTC m=+0.118210825 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:51:08 compute-0 podman[429581]: 2025-10-02 19:51:08.726338983 +0000 UTC m=+0.139025568 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Oct 02 19:51:08 compute-0 nova_compute[355794]: 2025-10-02 19:51:08.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:10 compute-0 ceph-mon[191910]: pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:11 compute-0 nova_compute[355794]: 2025-10-02 19:51:11.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:12 compute-0 ceph-mon[191910]: pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016574917993423194 of space, bias 1.0, pg target 0.4972475398026958 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
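The pg_autoscaler numbers above are internally consistent: every "pg target" equals usage_ratio × bias × 300 (e.g. 7.185749983720779e-06 × 1.0 × 300 ≈ 0.0021557, and 5.087256625643029e-07 × 4.0 × 300 ≈ 0.00061047), where 300 is presumably this small cluster's total PG budget. The target is then quantized to a power of two, subject to per-pool minimums (the cephfs metadata pool floors at 16) and a keep-current threshold (which is why 'vms' stays at 32 despite a target of 0.497). A hedged reconstruction:

```python
# Hedged reconstruction of the pg_autoscaler arithmetic above. The budget
# of 300 is inferred from this log's numbers, and the minimum/keep-current
# handling is simplified relative to the real autoscaler.
def pg_target(usage_ratio: float, bias: float, budget: float = 300.0) -> float:
    return usage_ratio * bias * budget

def quantize_pow2(target: float, pg_min: int = 1) -> int:
    n = pg_min
    while n < target:
        n *= 2
    return n

raw = pg_target(7.185749983720779e-06, 1.0)     # '.mgr' pool
print(raw, quantize_pow2(raw))                  # ~0.0021557 -> 1
```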
Oct 02 19:51:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:13 compute-0 nova_compute[355794]: 2025-10-02 19:51:13.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:14 compute-0 ceph-mon[191910]: pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:16 compute-0 nova_compute[355794]: 2025-10-02 19:51:16.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:16 compute-0 ceph-mon[191910]: pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:18 compute-0 ceph-mon[191910]: pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:18 compute-0 nova_compute[355794]: 2025-10-02 19:51:18.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:19 compute-0 podman[429627]: 2025-10-02 19:51:19.694050316 +0000 UTC m=+0.114181117 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
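The podman health_status events above come from each container's configured healthcheck (the 'healthcheck' entry inside config_data, e.g. '/openstack/healthcheck'). The same status is queryable out of band; a small sketch using podman inspect, with the container name taken from the log:

```python
# Sketch: query the same health status podman logs above via the CLI.
# Container name matches container_name=node_exporter in the event.
import subprocess

def health_status(name: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()        # e.g. "healthy", as logged above

print(health_status("node_exporter"))
```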
Oct 02 19:51:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:51:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129738950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:51:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:51:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4129738950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
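The two audit lines show client.openstack dispatching JSON mon commands ("df", then "osd pool get-quota" on the volumes pool), the usual pattern for capacity checks against the cluster. Replayed with the python rados binding; the conffile path is an assumption for this host:

```python
# Sketch: replaying the audited mon commands above with python-rados.
# The conffile path is an assumption; client.openstack matches the log.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ret, out, _ = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    _, quota, _ = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b"")
    print(ret, json.loads(out)["stats"], json.loads(quota))
finally:
    cluster.shutdown()
```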
Oct 02 19:51:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.246170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680246229, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1312, "num_deletes": 505, "total_data_size": 1538521, "memory_usage": 1563656, "flush_reason": "Manual Compaction"}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680259281, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 918729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27584, "largest_seqno": 28895, "table_properties": {"data_size": 914095, "index_size": 1646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14671, "raw_average_key_size": 19, "raw_value_size": 902174, "raw_average_value_size": 1176, "num_data_blocks": 75, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434577, "oldest_key_time": 1759434577, "file_creation_time": 1759434680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14034 microseconds, and 7881 cpu microseconds.
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.260208) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 918729 bytes OK
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.260793) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.265503) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.265523) EVENT_LOG_v1 {"time_micros": 1759434680265516, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.265541) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1531568, prev total WAL file size 1531568, number of live WAL files 2.
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.268860) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(897KB)], [62(8783KB)]
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680268971, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9912553, "oldest_snapshot_seqno": -1}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 4820 keys, 7135530 bytes, temperature: kUnknown
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680346341, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7135530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7104423, "index_size": 17941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 121731, "raw_average_key_size": 25, "raw_value_size": 7018206, "raw_average_value_size": 1456, "num_data_blocks": 743, "num_entries": 4820, "num_filter_entries": 4820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.347686) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7135530 bytes
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.354361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.4 rd, 91.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.6 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(18.6) write-amplify(7.8) OK, records in: 5797, records dropped: 977 output_compression: NoCompression
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.354454) EVENT_LOG_v1 {"time_micros": 1759434680354431, "job": 34, "event": "compaction_finished", "compaction_time_micros": 78440, "compaction_time_cpu_micros": 41831, "output_level": 6, "num_output_files": 1, "total_output_size": 7135530, "num_input_records": 5797, "num_output_records": 4820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680355668, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434680359578, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.268589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.359700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.359704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.359706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.359707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:51:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:51:20.359709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
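Jobs 33 and 34 above are one flush-plus-manual-compaction cycle of the monitor's RocksDB store (store.db): the memtable is flushed to L0 table #64, #64 is then compacted together with L6 table #62 into #65, and both inputs plus the old WAL are deleted; the repeated "Manual compaction starting" lines are the per-range passes of the same request. The same cycle can be requested by hand with the monitor's compact command (assumption: an admin keyring is present on the node):

  $ ceph tell mon.compute-0 compact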
Oct 02 19:51:20 compute-0 ceph-mon[191910]: pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4129738950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:51:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4129738950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:51:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:21 compute-0 nova_compute[355794]: 2025-10-02 19:51:21.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:21 compute-0 ceph-mon[191910]: pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:23 compute-0 nova_compute[355794]: 2025-10-02 19:51:23.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:23 compute-0 ceph-mon[191910]: pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:26 compute-0 ceph-mon[191910]: pgmap v1389: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:26 compute-0 nova_compute[355794]: 2025-10-02 19:51:26.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:26 compute-0 podman[429647]: 2025-10-02 19:51:26.703776008 +0000 UTC m=+0.111219938 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:51:26 compute-0 podman[429648]: 2025-10-02 19:51:26.734704281 +0000 UTC m=+0.134537969 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:51:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:26 compute-0 sudo[429688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:26 compute-0 sudo[429688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:26 compute-0 sudo[429688]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:27 compute-0 sudo[429713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:51:27 compute-0 sudo[429713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:27 compute-0 sudo[429713]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:27 compute-0 sudo[429738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:27 compute-0 sudo[429738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:27 compute-0 sudo[429738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:27 compute-0 sudo[429763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:51:27 compute-0 sudo[429763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:27 compute-0 sudo[429763]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:28 compute-0 ceph-mon[191910]: pgmap v1390: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 861ab377-60e3-4a57-b779-2d4d6546bad0 does not exist
Oct 02 19:51:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 67ae9f98-b7b4-4adb-9582-a56021881255 does not exist
Oct 02 19:51:28 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 531f1585-dcee-4422-8711-3cc7ecc5025d does not exist
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:51:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:51:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:51:28 compute-0 sudo[429818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:28 compute-0 sudo[429818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:28 compute-0 sudo[429818]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:28 compute-0 sudo[429843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:51:28 compute-0 sudo[429843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:28 compute-0 sudo[429843]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:28 compute-0 sudo[429868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:28 compute-0 sudo[429868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:28 compute-0 sudo[429868]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:28 compute-0 sudo[429893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:51:28 compute-0 sudo[429893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:28 compute-0 nova_compute[355794]: 2025-10-02 19:51:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:51:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.220355139 +0000 UTC m=+0.082278409 container create ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.181597118 +0000 UTC m=+0.043520458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:29 compute-0 systemd[1]: Started libpod-conmon-ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945.scope.
Oct 02 19:51:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.374780136 +0000 UTC m=+0.236703466 container init ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.393057742 +0000 UTC m=+0.254981012 container start ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.39902391 +0000 UTC m=+0.260947210 container attach ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:51:29 compute-0 elated_edison[429972]: 167 167
Oct 02 19:51:29 compute-0 systemd[1]: libpod-ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945.scope: Deactivated successfully.
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.407711841 +0000 UTC m=+0.269635141 container died ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6bf30f5ca1deb21d5558ebadcffb2cff2215f5f0c01dbbedd5b7adfcb798460-merged.mount: Deactivated successfully.
Oct 02 19:51:29 compute-0 podman[429955]: 2025-10-02 19:51:29.490649707 +0000 UTC m=+0.352573007 container remove ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 19:51:29 compute-0 systemd[1]: libpod-conmon-ab6631e0de71fe14b5e0d2000865332d063031d6fae89428997fba6a4f09d945.scope: Deactivated successfully.
Oct 02 19:51:29 compute-0 podman[157186]: time="2025-10-02T19:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:51:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:51:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9053 "" "Go-http-client/1.1"
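The two GET requests above are prometheus-podman-exporter polling podman's libpod REST API via the service socket (podman[157186] is the API service; the socket path /run/podman/podman.sock appears in the exporter's config_data earlier). The same endpoints can be queried manually; the "d" host below is just a placeholder curl requires for unix-socket URLs:

  $ curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'
  $ curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/stats?all=false&stream=false'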
Oct 02 19:51:29 compute-0 podman[429995]: 2025-10-02 19:51:29.804971775 +0000 UTC m=+0.112796270 container create 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:51:29 compute-0 podman[429995]: 2025-10-02 19:51:29.750334932 +0000 UTC m=+0.058159487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:29 compute-0 systemd[1]: Started libpod-conmon-7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e.scope.
Oct 02 19:51:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:29 compute-0 podman[429995]: 2025-10-02 19:51:29.970313642 +0000 UTC m=+0.278138207 container init 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:51:29 compute-0 podman[429995]: 2025-10-02 19:51:29.990613992 +0000 UTC m=+0.298438497 container start 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:51:30 compute-0 podman[429995]: 2025-10-02 19:51:29.997731101 +0000 UTC m=+0.305555606 container attach 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 19:51:30 compute-0 ceph-mon[191910]: pgmap v1391: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:31 compute-0 inspiring_maxwell[430013]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:51:31 compute-0 inspiring_maxwell[430013]: --> relative data size: 1.0
Oct 02 19:51:31 compute-0 inspiring_maxwell[430013]: --> All data devices are unavailable
Oct 02 19:51:31 compute-0 nova_compute[355794]: 2025-10-02 19:51:31.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:31 compute-0 systemd[1]: libpod-7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e.scope: Deactivated successfully.
Oct 02 19:51:31 compute-0 podman[429995]: 2025-10-02 19:51:31.210323616 +0000 UTC m=+1.518148121 container died 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:51:31 compute-0 systemd[1]: libpod-7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e.scope: Consumed 1.139s CPU time.
Oct 02 19:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64f5896e4d6d2df2b12040c89274c80bdb168b668bf779e13c207e5036f9e7e-merged.mount: Deactivated successfully.
Oct 02 19:51:31 compute-0 podman[429995]: 2025-10-02 19:51:31.340283542 +0000 UTC m=+1.648108057 container remove 7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:51:31 compute-0 systemd[1]: libpod-conmon-7c6ff71e67f262156aa510293575c61c75a53721776d6d8a94efe15c2a9cd84e.scope: Deactivated successfully.
Oct 02 19:51:31 compute-0 sudo[429893]: pam_unix(sudo:session): session closed for user root
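The ceph-volume batch run above (sudo[429893], executed in the inspiring_maxwell container) created nothing: it saw "0 physical, 3 LVM" data devices and reported "All data devices are unavailable", which is the expected outcome when the LVs already carry OSD metadata, as the lvm list output further below confirms. A non-destructive way to see the same decision is batch's report mode (assumption: same arguments as logged, with --report in place of --yes --no-systemd):

  $ sudo /bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --report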
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: ERROR   19:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:51:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:51:31 compute-0 sudo[430056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:31 compute-0 sudo[430056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:31 compute-0 sudo[430056]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:31 compute-0 sudo[430081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:51:31 compute-0 sudo[430081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:31 compute-0 sudo[430081]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:31 compute-0 sudo[430106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:31 compute-0 sudo[430106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:31 compute-0 sudo[430106]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:31 compute-0 sudo[430131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:51:31 compute-0 sudo[430131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
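The lvm list call above (sudo[430131]) is what produces the JSON printed by the dazzling_kepler container further below: top-level keys are OSD ids, each mapping to a list of LVs whose lv_tags/tags fields carry the ceph.* metadata. To extract a single field from that output, for example the osd_fsid of OSD 0 (assumption: jq is available on the host):

  $ sudo /bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 \
        -- lvm list --format json | jq -r '."0"[0].tags."ceph.osd_fsid"'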
Oct 02 19:51:32 compute-0 podman[430155]: 2025-10-02 19:51:32.021475816 +0000 UTC m=+0.105239670 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:51:32 compute-0 podman[430156]: 2025-10-02 19:51:32.030151647 +0000 UTC m=+0.113150610 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, com.redhat.component=ubi9-container)
Oct 02 19:51:32 compute-0 ceph-mon[191910]: pgmap v1392: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:32.302 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:32.302 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:32.303 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.392634126 +0000 UTC m=+0.059369410 container create 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:51:32 compute-0 systemd[1]: Started libpod-conmon-264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17.scope.
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.370429075 +0000 UTC m=+0.037164369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.530113942 +0000 UTC m=+0.196849296 container init 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.541914856 +0000 UTC m=+0.208650110 container start 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.546999551 +0000 UTC m=+0.213734845 container attach 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:51:32 compute-0 recursing_cerf[430250]: 167 167
Oct 02 19:51:32 compute-0 systemd[1]: libpod-264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17.scope: Deactivated successfully.
Oct 02 19:51:32 compute-0 conmon[430250]: conmon 264d002fa92e0893ec60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17.scope/container/memory.events
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.551709356 +0000 UTC m=+0.218444670 container died 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6e1f4d4f3ac968a0c0c9008efe1258727137cafeec45aac1dd01f8b3288f81-merged.mount: Deactivated successfully.
Oct 02 19:51:32 compute-0 podman[430234]: 2025-10-02 19:51:32.620118555 +0000 UTC m=+0.286853829 container remove 264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:51:32 compute-0 systemd[1]: libpod-conmon-264d002fa92e0893ec60e25eba467607ce70efed75ea830a78a5e9be8dc83d17.scope: Deactivated successfully.
Oct 02 19:51:32 compute-0 podman[430273]: 2025-10-02 19:51:32.862190582 +0000 UTC m=+0.066407826 container create b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:51:32 compute-0 podman[430273]: 2025-10-02 19:51:32.843020073 +0000 UTC m=+0.047237327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:32 compute-0 systemd[1]: Started libpod-conmon-b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9.scope.
Oct 02 19:51:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36c94a9c655fd8c16b9e0c484e7bfb7109bf0c625aeaa212368599badbebb48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36c94a9c655fd8c16b9e0c484e7bfb7109bf0c625aeaa212368599badbebb48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36c94a9c655fd8c16b9e0c484e7bfb7109bf0c625aeaa212368599badbebb48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36c94a9c655fd8c16b9e0c484e7bfb7109bf0c625aeaa212368599badbebb48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:33 compute-0 podman[430273]: 2025-10-02 19:51:33.03546854 +0000 UTC m=+0.239685854 container init b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:51:33 compute-0 podman[430273]: 2025-10-02 19:51:33.055164344 +0000 UTC m=+0.259381588 container start b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:51:33 compute-0 podman[430273]: 2025-10-02 19:51:33.061183184 +0000 UTC m=+0.265400538 container attach b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:51:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:51:33 compute-0 nova_compute[355794]: 2025-10-02 19:51:33.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]: {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     "0": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "devices": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "/dev/loop3"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             ],
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_name": "ceph_lv0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_size": "21470642176",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "name": "ceph_lv0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "tags": {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_name": "ceph",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.crush_device_class": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.encrypted": "0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_id": "0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.vdo": "0"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             },
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "vg_name": "ceph_vg0"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         }
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     ],
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     "1": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "devices": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "/dev/loop4"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             ],
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_name": "ceph_lv1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_size": "21470642176",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "name": "ceph_lv1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "tags": {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_name": "ceph",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.crush_device_class": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.encrypted": "0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_id": "1",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.vdo": "0"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             },
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "vg_name": "ceph_vg1"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         }
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     ],
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     "2": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "devices": [
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "/dev/loop5"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             ],
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_name": "ceph_lv2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_size": "21470642176",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "name": "ceph_lv2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "tags": {
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.cluster_name": "ceph",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.crush_device_class": "",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.encrypted": "0",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osd_id": "2",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:                 "ceph.vdo": "0"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             },
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "type": "block",
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:             "vg_name": "ceph_vg2"
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:         }
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]:     ]
Oct 02 19:51:33 compute-0 dazzling_kepler[430289]: }
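
The JSON block above is the OSD inventory the one-shot ceph container prints back to cephadm; by its shape (OSD ids as keys, one LV record per OSD carrying ceph.* tags) it matches what ceph-volume lvm list --format json emits, though the invoking command is not itself visible in this window. A minimal sketch of consuming such output, assuming it has been captured on stdin; field names are taken verbatim from the log:

    #!/usr/bin/env python3
    # Minimal sketch: parse ceph-volume style JSON as shown above (assumed to be
    # `ceph-volume lvm list --format json` output) and map each OSD id to its LV
    # path and backing devices. Illustrative only; not the cephadm code path.
    import json
    import sys

    def summarize(lvm_list: dict) -> None:
        for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                      f"devices={','.join(lv['devices'])} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

    if __name__ == "__main__":
        summarize(json.load(sys.stdin))
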
Oct 02 19:51:33 compute-0 systemd[1]: libpod-b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9.scope: Deactivated successfully.
Oct 02 19:51:33 compute-0 podman[430273]: 2025-10-02 19:51:33.903862513 +0000 UTC m=+1.108079757 container died b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36c94a9c655fd8c16b9e0c484e7bfb7109bf0c625aeaa212368599badbebb48-merged.mount: Deactivated successfully.
Oct 02 19:51:34 compute-0 podman[430273]: 2025-10-02 19:51:34.007270923 +0000 UTC m=+1.211488177 container remove b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kepler, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 19:51:34 compute-0 systemd[1]: libpod-conmon-b9e24c46a04f7f95ed4f35c931580314b48076995fe93a03d9619b8a1da9cbc9.scope: Deactivated successfully.
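
The entries from 19:51:32 to 19:51:34 trace one complete one-shot container lifecycle for dazzling_kepler: image pull, create, init, start, attach, died, remove, with systemd opening and closing the libpod-conmon-*.scope around it. The same transitions can be watched live with podman events; a sketch, assuming a podman version that supports JSON event output (event field names such as Status and Name may vary by version):

    #!/usr/bin/env python3
    # Sketch: stream the same lifecycle transitions (create/init/start/attach/
    # died/remove) that podman journals above, via `podman events`.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         # Filter value may need to match the full image reference on some
         # podman versions; adjust as needed.
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:       # blocks and follows events indefinitely
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
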
Oct 02 19:51:34 compute-0 sudo[430131]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:34 compute-0 ceph-mon[191910]: pgmap v1393: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:34 compute-0 sudo[430309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:34 compute-0 sudo[430309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:34 compute-0 sudo[430309]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:34 compute-0 sudo[430334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:51:34 compute-0 sudo[430334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:34 compute-0 sudo[430334]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:34 compute-0 sudo[430359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:34 compute-0 sudo[430359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:34 compute-0 sudo[430359]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:34 compute-0 sudo[430384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:51:34 compute-0 sudo[430384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:34 compute-0 podman[430447]: 2025-10-02 19:51:34.939836411 +0000 UTC m=+0.063596472 container create 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:51:35 compute-0 systemd[1]: Started libpod-conmon-2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906.scope.
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:34.918528234 +0000 UTC m=+0.042288285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:35.050847263 +0000 UTC m=+0.174607334 container init 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:35.062059191 +0000 UTC m=+0.185819252 container start 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:51:35 compute-0 magical_roentgen[430481]: 167 167
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:35.067954498 +0000 UTC m=+0.191714549 container attach 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:51:35 compute-0 systemd[1]: libpod-2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906.scope: Deactivated successfully.
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:35.069525909 +0000 UTC m=+0.193285970 container died 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:51:35 compute-0 nova_compute[355794]: 2025-10-02 19:51:35.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:35.077 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:51:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:35.078 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8efcc2595d0ba01eebcdafdf42cec6554247a4fbc77d534b9cb1395e46a04b3d-merged.mount: Deactivated successfully.
Oct 02 19:51:35 compute-0 podman[430464]: 2025-10-02 19:51:35.122070167 +0000 UTC m=+0.107984083 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:51:35 compute-0 podman[430447]: 2025-10-02 19:51:35.130544732 +0000 UTC m=+0.254304783 container remove 2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:51:35 compute-0 systemd[1]: libpod-conmon-2312c1abc325a9c76b5a5a49c52d24baa26fee1998a5016a18e23e5d5bce3906.scope: Deactivated successfully.
Oct 02 19:51:35 compute-0 podman[430465]: 2025-10-02 19:51:35.14512704 +0000 UTC m=+0.144761831 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:51:35 compute-0 podman[430461]: 2025-10-02 19:51:35.150879023 +0000 UTC m=+0.150965836 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:51:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:35 compute-0 podman[430545]: 2025-10-02 19:51:35.36400127 +0000 UTC m=+0.074441060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:51:35 compute-0 podman[430545]: 2025-10-02 19:51:35.524305653 +0000 UTC m=+0.234745403 container create dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:51:35 compute-0 systemd[1]: Started libpod-conmon-dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b.scope.
Oct 02 19:51:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d7f7b3abf7b93a0b62bb82140828b866cc0f82687cbb3375620f5ed85af110/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d7f7b3abf7b93a0b62bb82140828b866cc0f82687cbb3375620f5ed85af110/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d7f7b3abf7b93a0b62bb82140828b866cc0f82687cbb3375620f5ed85af110/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d7f7b3abf7b93a0b62bb82140828b866cc0f82687cbb3375620f5ed85af110/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:51:35 compute-0 podman[430545]: 2025-10-02 19:51:35.698724561 +0000 UTC m=+0.409164321 container init dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:51:35 compute-0 podman[430545]: 2025-10-02 19:51:35.711054779 +0000 UTC m=+0.421494539 container start dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:51:35 compute-0 podman[430545]: 2025-10-02 19:51:35.728053991 +0000 UTC m=+0.438493761 container attach dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 19:51:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:36.081 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:36 compute-0 ceph-mon[191910]: pgmap v1394: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:36 compute-0 nova_compute[355794]: 2025-10-02 19:51:36.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:36 compute-0 kind_tharp[430561]: {
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_id": 1,
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "type": "bluestore"
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     },
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_id": 2,
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "type": "bluestore"
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     },
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_id": 0,
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:51:36 compute-0 kind_tharp[430561]:         "type": "bluestore"
Oct 02 19:51:36 compute-0 kind_tharp[430561]:     }
Oct 02 19:51:36 compute-0 kind_tharp[430561]: }
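
kind_tharp prints the companion inventory, ceph-volume raw list --format json (the exact invocation is visible in the sudo entry at 19:51:34), keyed by osd_uuid rather than osd_id and reporting each OSD's device-mapper path. A sketch that joins the two listings and checks they agree on which device backs each OSD, assuming both JSON documents were saved to the hypothetical files raw_list.json and lvm_list.json:

    #!/usr/bin/env python3
    # Sketch: cross-check `ceph-volume raw list` (keyed by osd_uuid) against
    # `ceph-volume lvm list` (keyed by osd_id) from the log above.
    import json

    raw = json.load(open("raw_list.json"))   # {"<osd_uuid>": {...}}
    lvm = json.load(open("lvm_list.json"))   # {"<osd_id>": [{...}]}

    # One LV per OSD here, as in the log.
    by_id = {int(osd_id): lvs[0] for osd_id, lvs in lvm.items()}
    for osd_uuid, rec in raw.items():
        lv = by_id[rec["osd_id"]]
        # raw list reports /dev/mapper/<vg>-<lv>; lvm list reports /dev/<vg>/<lv>.
        # (A '-' inside vg/lv names would be doubled in dm names; not the case here.)
        expected = "/dev/mapper/{}-{}".format(lv["vg_name"], lv["lv_name"])
        status = "ok" if rec["device"] == expected else "MISMATCH"
        print(f"osd.{rec['osd_id']} ({osd_uuid}): {rec['device']} {status}")
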
Oct 02 19:51:36 compute-0 systemd[1]: libpod-dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b.scope: Deactivated successfully.
Oct 02 19:51:36 compute-0 systemd[1]: libpod-dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b.scope: Consumed 1.098s CPU time.
Oct 02 19:51:36 compute-0 podman[430594]: 2025-10-02 19:51:36.889148867 +0000 UTC m=+0.051335116 container died dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d7f7b3abf7b93a0b62bb82140828b866cc0f82687cbb3375620f5ed85af110-merged.mount: Deactivated successfully.
Oct 02 19:51:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:36 compute-0 podman[430594]: 2025-10-02 19:51:36.982068448 +0000 UTC m=+0.144254657 container remove dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 19:51:36 compute-0 systemd[1]: libpod-conmon-dd0df82cffebcb754e884b10ad30317fa02b03e495c7384cef347165086e6a4b.scope: Deactivated successfully.
Oct 02 19:51:37 compute-0 sudo[430384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:51:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:51:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d17f3900-2297-4c91-a27d-8e21ec4e5bfa does not exist
Oct 02 19:51:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f27822f5-a73c-4c2a-a2cf-4a603f6c5b4a does not exist
Oct 02 19:51:37 compute-0 sudo[430609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:51:37 compute-0 sudo[430609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:37 compute-0 sudo[430609]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:37 compute-0 sudo[430634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:51:37 compute-0 sudo[430634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:51:37 compute-0 sudo[430634]: pam_unix(sudo:session): session closed for user root
Oct 02 19:51:38 compute-0 ceph-mon[191910]: pgmap v1395: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:51:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:51:38 compute-0 nova_compute[355794]: 2025-10-02 19:51:38.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:39 compute-0 podman[430660]: 2025-10-02 19:51:39.652091419 +0000 UTC m=+0.071480362 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:51:39 compute-0 podman[430659]: 2025-10-02 19:51:39.68220341 +0000 UTC m=+0.102632830 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal)
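
The health_status=healthy entries here and at 19:51:35 are podman's periodic healthchecks: for each container, podman runs the 'test' command against the 'mount' path recorded in config_data and journals the result. The same check can be triggered on demand with podman healthcheck run, which exits 0 when the test passes; a sketch over the container names appearing in the entries above:

    #!/usr/bin/env python3
    # Sketch: run by hand the healthchecks podman executes on timers above.
    import subprocess

    for name in ("iscsid", "ovn_controller", "ovn_metadata_agent",
                 "node_exporter", "openstack_network_exporter"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
        print(f"{name}: {status}")
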
Oct 02 19:51:40 compute-0 ceph-mon[191910]: pgmap v1396: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:41 compute-0 nova_compute[355794]: 2025-10-02 19:51:41.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:42 compute-0 ceph-mon[191910]: pgmap v1397: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.322 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.323 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.344 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.433 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.434 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.451 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.452 2 INFO nova.compute.claims [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:51:42 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.597 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:51:42 compute-0 nova_compute[355794]: 2025-10-02 19:51:42.619 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:51:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801578565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.143 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.155 2 DEBUG nova.compute.provider_tree [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:51:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1801578565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.180 2 DEBUG nova.scheduler.client.report [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
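
The inventory nova reports to Placement above determines schedulable capacity as (total - reserved) * allocation_ratio, the formula Placement normally applies: 8 physical VCPUs at a 4.0 ratio become 32 schedulable units, and 59 GiB of disk with 1 GiB reserved at 0.9 becomes 52.2. A worked sketch with the figures from the log entry:

    #!/usr/bin/env python3
    # Sketch: effective capacity implied by the inventory above, assuming
    # Placement's usual formula (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
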
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.208 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.209 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.271 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.272 2 DEBUG nova.network.neutron [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.324 2 INFO nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.386 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.483 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.484 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.485 2 INFO nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Creating image(s)
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.513 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.546 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.577 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.585 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.660 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.661 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.663 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.663 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "29c290047b888f2c82efe3bcb0c2a3e42b009a3e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.704 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.712 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:43 compute-0 nova_compute[355794]: 2025-10-02 19:51:43.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.071 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 ceph-mon[191910]: pgmap v1398: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.221 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] resizing rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.416 2 DEBUG nova.objects.instance [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 58f8959a-5f7e-44a5-9dca-65be0506a4c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.487 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.547 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.559 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.651 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.653 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.654 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.654 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.703 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:44 compute-0 nova_compute[355794]: 2025-10-02 19:51:44.718 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 222 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 807 KiB/s wr, 2 op/s
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.147 2 DEBUG nova.network.neutron [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Successfully updated port: 90a967c2-93a2-4057-add0-3bebfcb9615a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.206 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.289 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.289 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquired lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.290 2 DEBUG nova.network.neutron [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.297 2 DEBUG nova.compute.manager [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-changed-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.297 2 DEBUG nova.compute.manager [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Refreshing instance network info cache due to event network-changed-90a967c2-93a2-4057-add0-3bebfcb9615a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.298 2 DEBUG oslo_concurrency.lockutils [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.491 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.492 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Ensure instance console log exists: /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.493 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.493 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.494 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:51:45 compute-0 nova_compute[355794]: 2025-10-02 19:51:45.874 2 DEBUG nova.network.neutron [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:46 compute-0 ceph-mon[191910]: pgmap v1399: 321 pgs: 321 active+clean; 222 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 807 KiB/s wr, 2 op/s
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.516 2 DEBUG nova.network.neutron [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updating instance_info_cache with network_info: [{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.544 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Releasing lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.544 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Instance network_info: |[{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.545 2 DEBUG oslo_concurrency.lockutils [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.545 2 DEBUG nova.network.neutron [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Refreshing network info cache for port 90a967c2-93a2-4057-add0-3bebfcb9615a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.550 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Start _get_guest_xml network_info=[{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'ce28338d-119e-49e1-ab67-60da8882593a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encrypted': False, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.558 2 WARNING nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.567 2 DEBUG nova.virt.libvirt.host [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.567 2 DEBUG nova.virt.libvirt.host [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.578 2 DEBUG nova.virt.libvirt.host [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.579 2 DEBUG nova.virt.libvirt.host [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.579 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.580 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:43:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8f0521f8-dc4e-4ca1-bf77-f443ae74db03',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:43:18Z,direct_url=<?>,disk_format='qcow2',id=ce28338d-119e-49e1-ab67-60da8882593a,min_disk=0,min_ram=0,name='cirros',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:43:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:51:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.581 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.583 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.584 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.584 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.585 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.586 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.587 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.588 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.589 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.590 2 DEBUG nova.virt.hardware [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:51:46 compute-0 nova_compute[355794]: 2025-10-02 19:51:46.595 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 224 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 842 KiB/s wr, 24 op/s
Oct 02 19:51:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:51:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2061080405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.048 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.050 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2061080405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:51:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506034870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.504 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.564 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.576 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.616 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.917 2 DEBUG nova.network.neutron [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updated VIF entry in instance network info cache for port 90a967c2-93a2-4057-add0-3bebfcb9615a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.918 2 DEBUG nova.network.neutron [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updating instance_info_cache with network_info: [{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:51:47 compute-0 nova_compute[355794]: 2025-10-02 19:51:47.937 2 DEBUG oslo_concurrency.lockutils [req-e03e7050-49b9-45f9-b25f-9f3ef044af97 req-223022de-cc51-40ce-bf61-39aebcfee4cb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:51:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:51:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907222986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.092 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.094 2 DEBUG nova.virt.libvirt.vif [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',id=4,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-0clyp0yi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:51:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:51:48 compute-0 nova_compute[355794]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=58f8959a-5f7e-44a5-9dca-65be0506a4c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.095 2 DEBUG nova.network.os_vif_util [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.096 2 DEBUG nova.network.os_vif_util [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.097 2 DEBUG nova.objects.instance [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58f8959a-5f7e-44a5-9dca-65be0506a4c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.117 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <uuid>58f8959a-5f7e-44a5-9dca-65be0506a4c1</uuid>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <name>instance-00000004</name>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <memory>524288</memory>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <metadata>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:name>vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg</nova:name>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 19:51:46</nova:creationTime>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:flavor name="m1.small">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:memory>512</nova:memory>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:user uuid="811fb7ac717e4ba9b9874e5454ee08f4">admin</nova:user>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:project uuid="1c35486f37b94d43a7bf2f2fa09c70b9">admin</nova:project>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="ce28338d-119e-49e1-ab67-60da8882593a"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <nova:port uuid="90a967c2-93a2-4057-add0-3bebfcb9615a">
Oct 02 19:51:48 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="192.168.0.24" ipVersion="4"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </metadata>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <system>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="serial">58f8959a-5f7e-44a5-9dca-65be0506a4c1</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="uuid">58f8959a-5f7e-44a5-9dca-65be0506a4c1</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </system>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <os>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </os>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <features>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <apic/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </features>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </clock>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </source>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.eph0">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </source>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </source>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:51:48 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:5d:8f:b8"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <target dev="tap90a967c2-93"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </interface>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/console.log" append="off"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </serial>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <video>
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </video>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 19:51:48 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 19:51:48 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 19:51:48 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:51:48 compute-0 nova_compute[355794]: </domain>
Oct 02 19:51:48 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
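The record above is the complete guest definition nova handed to libvirt for instance-00000004: 512 MiB / 1 vCPU, three RBD-backed devices (root vda, ephemeral vdb, config-drive cdrom sda) and one virtio interface wired to br-int. As a minimal sketch of inspecting such a definition offline with only the Python standard library (the XML string below is an abbreviated copy of the log record, not the full domain):

    import xml.etree.ElementTree as ET

    # Abbreviated copy of the domain XML logged above.
    DOMAIN_XML = """
    <domain type="kvm">
      <name>instance-00000004</name>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config"/>
          <target dev="sda" bus="sata"/>
        </disk>
      </devices>
    </domain>
    """

    root = ET.fromstring(DOMAIN_XML)
    for disk in root.findall("./devices/disk"):
        dev = disk.find("target").attrib["dev"]          # vda / sda
        image = disk.find("source").attrib.get("name")   # RBD pool/image
        print(f"{dev}: {disk.attrib['device']} backed by {image}")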
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.119 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Preparing to wait for external event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.120 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.121 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.122 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.124 2 DEBUG nova.virt.libvirt.vif [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',id=4,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-0clyp0yi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:51:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=58f8959a-5f7e-44a5-9dca-65be0506a4c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.125 2 DEBUG nova.network.os_vif_util [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.127 2 DEBUG nova.network.os_vif_util [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.128 2 DEBUG os_vif [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.132 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90a967c2-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90a967c2-93, col_values=(('external_ids', {'iface-id': '90a967c2-93a2-4057-add0-3bebfcb9615a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:8f:b8', 'vm-uuid': '58f8959a-5f7e-44a5-9dca-65be0506a4c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:48 compute-0 NetworkManager[44968]: <info>  [1759434708.1476] manager: (tap90a967c2-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.161 2 INFO os_vif [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93')
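The plug consisted of two ovsdbapp transactions: AddPortCommand(may_exist=True) on br-int, then DbSetCommand writing the external_ids that let ovn-controller match the tap interface to the Neutron port. Roughly the same effect from the CLI, sketched via Python (assumes ovs-vsctl on PATH and OVSDB access; values copied from the records above):

    import subprocess

    port = "tap90a967c2-93"
    iface_id = "90a967c2-93a2-4057-add0-3bebfcb9615a"

    # Mirror AddPortCommand + DbSetCommand from the log above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f'external_ids:iface-id="{iface_id}"',
         'external_ids:iface-status=active',
         'external_ids:attached-mac="fa:16:3e:5d:8f:b8"',
         'external_ids:vm-uuid="58f8959a-5f7e-44a5-9dca-65be0506a4c1"'],
        check=True,
    )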
Oct 02 19:51:48 compute-0 ceph-mon[191910]: pgmap v1400: 321 pgs: 321 active+clean; 224 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 842 KiB/s wr, 24 op/s
Oct 02 19:51:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2506034870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/907222986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:51:48 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:51:48.094 2 DEBUG nova.virt.libvirt.vif [None req-ac6bd375-8ea0-45 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.254 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.255 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.255 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.256 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No VIF found with MAC fa:16:3e:5d:8f:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.257 2 INFO nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Using config drive
Oct 02 19:51:48 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:51:48.124 2 DEBUG nova.virt.libvirt.vif [None req-ac6bd375-8ea0-45 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
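Both rsyslogd complaints refer to the oversized nova.virt.libvirt.vif DEBUG records above: the full Instance dump with its embedded user_data easily exceeds 8 KiB, so anything relayed through rsyslog is split or truncated at the configured 8096-byte limit while the journal retains the full text. A throwaway sketch for finding such casualties in a saved log export (file name is an assumption):

    import re

    TRUNC = re.compile(r"message too long \((\d+)\) with configured size (\d+)")

    with open("compute-0-journal.log") as fh:   # hypothetical export path
        for line in fh:
            m = TRUNC.search(line)
            if m:
                prefix = line.split("begin of message is:", 1)[-1].strip()
                print(f"{m.group(1)} > {m.group(2)} bytes: {prefix}")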
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.289 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.516 2 INFO nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Creating config drive at /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.525 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpagac2sde execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.679 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpagac2sde" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.738 2 DEBUG nova.storage.rbd_utils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.749 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:48 compute-0 nova_compute[355794]: 2025-10-02 19:51:48.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 232 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Oct 02 19:51:49 compute-0 nova_compute[355794]: 2025-10-02 19:51:49.013 2 DEBUG oslo_concurrency.processutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config 58f8959a-5f7e-44a5-9dca-65be0506a4c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:49 compute-0 nova_compute[355794]: 2025-10-02 19:51:49.013 2 INFO nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Deleting local config drive /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.config because it was imported into RBD.
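These records are the whole config-drive round trip: nova finds no existing RBD object, builds the ISO9660 image locally with mkisofs, imports it into the vms pool as <uuid>_disk.config, and deletes the local copy; the guest then reads it through the sata cdrom (sda) in the domain XML. Condensed into a sketch using the same commands as the log (the staging directory stands in for the mkdtemp path; error handling omitted):

    import os
    import subprocess

    instance = "58f8959a-5f7e-44a5-9dca-65be0506a4c1"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build the config-drive ISO from the staged metadata files
    #    ("/tmp/config_drive_staging" is a hypothetical stand-in).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/config_drive_staging"], check=True)

    # 2. Import it into Ceph so the guest can attach it over RBD.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{instance}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. Remove the local file, as nova logs right afterwards.
    os.unlink(iso)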
Oct 02 19:51:49 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:51:49 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:51:49 compute-0 NetworkManager[44968]: <info>  [1759434709.1755] manager: (tap90a967c2-93): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 02 19:51:49 compute-0 kernel: tap90a967c2-93: entered promiscuous mode
Oct 02 19:51:49 compute-0 nova_compute[355794]: 2025-10-02 19:51:49.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:49 compute-0 ovn_controller[88435]: 2025-10-02T19:51:49Z|00045|binding|INFO|Claiming lport 90a967c2-93a2-4057-add0-3bebfcb9615a for this chassis.
Oct 02 19:51:49 compute-0 ovn_controller[88435]: 2025-10-02T19:51:49Z|00046|binding|INFO|90a967c2-93a2-4057-add0-3bebfcb9615a: Claiming fa:16:3e:5d:8f:b8 192.168.0.24
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.206 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8f:b8 192.168.0.24'], port_security=['fa:16:3e:5d:8f:b8 192.168.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-l57hn7k2t5oc-62etg35sdh32-port-ioebu7t2mzpe', 'neutron:cidrs': '192.168.0.24/24', 'neutron:device_id': '58f8959a-5f7e-44a5-9dca-65be0506a4c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-l57hn7k2t5oc-62etg35sdh32-port-ioebu7t2mzpe', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=90a967c2-93a2-4057-add0-3bebfcb9615a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.207 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 90a967c2-93a2-4057-add0-3bebfcb9615a in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 bound to our chassis
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.208 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:51:49 compute-0 ovn_controller[88435]: 2025-10-02T19:51:49Z|00047|binding|INFO|Setting lport 90a967c2-93a2-4057-add0-3bebfcb9615a ovn-installed in OVS
Oct 02 19:51:49 compute-0 ovn_controller[88435]: 2025-10-02T19:51:49Z|00048|binding|INFO|Setting lport 90a967c2-93a2-4057-add0-3bebfcb9615a up in Southbound
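Claiming the lport and marking it up in the Southbound DB is the step Neutron watches for; it is what produces the network-vif-plugged event nova receives at 19:51:50. The binding can be confirmed from the chassis with ovn-sbctl, sketched here via subprocess (assumes a reachable SB DB):

    import subprocess

    lport = "90a967c2-93a2-4057-add0-3bebfcb9615a"

    # 'chassis' should reference compute-0 and 'up' should be true
    # once the claim logged above has landed.
    print(subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                          f"logical_port={lport}"],
                         capture_output=True, text=True, check=True).stdout)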
Oct 02 19:51:49 compute-0 nova_compute[355794]: 2025-10-02 19:51:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.248 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[74d64bcd-c1d0-4579-8d28-7732183c2eaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:51:49 compute-0 systemd-udevd[431194]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:51:49 compute-0 systemd-machined[137646]: New machine qemu-4-instance-00000004.
Oct 02 19:51:49 compute-0 NetworkManager[44968]: <info>  [1759434709.2678] device (tap90a967c2-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:51:49 compute-0 NetworkManager[44968]: <info>  [1759434709.2688] device (tap90a967c2-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:51:49 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.298 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03b0ce-06b2-44a2-b0c7-f12c0d7553f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.301 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[729d6a08-cf94-47c0-9249-c8ee2818ff16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.343 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0178ff-3778-4dfd-a2e5-cccbf6cb960e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.361 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0ac8b0-c5fc-42c9-8940-272634c62b21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 16846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431206, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.378 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[25a0cb00-0637-4f76-bfe8-40614ba45379]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431208, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431208, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
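The privsep replies show what "Provisioning metadata" amounts to on the host: inside the ovnmeta-6e3c6c60-... namespace the agent keeps a veth end (tap6e3c6c60-21) carrying both the metadata address 169.254.169.254/32 and 192.168.0.2/24, so guests on that network can reach the metadata service at the link-local address. A quick host-side check, sketched with iproute2 (assumes root on compute-0):

    import subprocess

    ns = "ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2"

    # Expect 169.254.169.254/32 and 192.168.0.2/24 on tap6e3c6c60-21.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-brief", "addr"],
                   check=True)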
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.379 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:49 compute-0 nova_compute[355794]: 2025-10-02 19:51:49.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.382 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.383 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.383 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:51:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:51:49.384 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:51:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:51:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.130 2 DEBUG nova.compute.manager [req-c772411f-0367-483b-89b3-90e5d2d55f90 req-6f6fa3c8-52f8-4a3a-a149-628d98caa252 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.131 2 DEBUG oslo_concurrency.lockutils [req-c772411f-0367-483b-89b3-90e5d2d55f90 req-6f6fa3c8-52f8-4a3a-a149-628d98caa252 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.131 2 DEBUG oslo_concurrency.lockutils [req-c772411f-0367-483b-89b3-90e5d2d55f90 req-6f6fa3c8-52f8-4a3a-a149-628d98caa252 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.132 2 DEBUG oslo_concurrency.lockutils [req-c772411f-0367-483b-89b3-90e5d2d55f90 req-6f6fa3c8-52f8-4a3a-a149-628d98caa252 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.132 2 DEBUG nova.compute.manager [req-c772411f-0367-483b-89b3-90e5d2d55f90 req-6f6fa3c8-52f8-4a3a-a149-628d98caa252 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Processing event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:51:50 compute-0 ceph-mon[191910]: pgmap v1401: 321 pgs: 321 active+clean; 232 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Oct 02 19:51:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.604 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.605 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.605 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.606 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.607 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
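Because the instance disks live on Ceph rather than local storage, the resource-tracker audit shells out to ceph df for capacity numbers. Parsing the same JSON, sketched with the exact command from the log:

    import json
    import subprocess

    out = subprocess.run(["ceph", "df", "--format=json", "--id", "openstack",
                          "--conf", "/etc/ceph/ceph.conf"],
                         capture_output=True, text=True, check=True)

    # 'stats' carries cluster-wide totals that feed nova's disk accounting.
    stats = json.loads(out.stdout)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])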
Oct 02 19:51:50 compute-0 podman[431288]: 2025-10-02 19:51:50.733733511 +0000 UTC m=+0.151234042 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.822 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434710.8208795, 58f8959a-5f7e-44a5-9dca-65be0506a4c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.823 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] VM Started (Lifecycle Event)
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.833 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.844 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.851 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.865 2 INFO nova.virt.libvirt.driver [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Instance spawned successfully.
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.867 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.882 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.909 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.910 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434710.821158, 58f8959a-5f7e-44a5-9dca-65be0506a4c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.910 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] VM Paused (Lifecycle Event)
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.919 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.919 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.920 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.920 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.921 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.921 2 DEBUG nova.virt.libvirt.driver [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
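The six "Found default for ..." lines record the driver pinning the bus and model choices the guest was actually built with, so later rebuilds and attaches keep identical virtual hardware. The registered defaults, restated as a plain dict (values verbatim from the log; nova itself persists them on the instance record rather than a dict like this):

    # Defaults registered above, restated; values are verbatim from
    # the log lines. Illustration only.
    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
    for prop, value in defaults.items():
        print('Found default for %s of %s' % (prop, value))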
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.929 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.935 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759434710.8430455, 58f8959a-5f7e-44a5-9dca-65be0506a4c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.936 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] VM Resumed (Lifecycle Event)
Oct 02 19:51:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.959 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.965 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.983 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] During sync_power_state the instance has a pending task (spawning). Skip.
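The Started/Paused/Resumed burst and the two "Skip" lines come from the power-state sync guard: while the instance still has a pending task (here task_state spawning), lifecycle-driven syncs back off rather than overwrite vm_state mid-build. A compressed, paraphrased sketch of that decision (the 0/1 constants match the DB/VM power states printed above; this is not the full handle_lifecycle_event):

    # Paraphrased power-state sync guard; 0/1 match the DB/VM power
    # states printed above. Not the full nova handle_lifecycle_event.
    NOSTATE, RUNNING = 0, 1

    def maybe_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:           # e.g. 'spawning'
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'sync DB %d -> %d' % (db_power_state, vm_power_state)
        return 'in sync'

    print(maybe_sync(NOSTATE, RUNNING, 'spawning'))  # skipped, as logged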
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.989 2 INFO nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Took 7.51 seconds to spawn the instance on the hypervisor.
Oct 02 19:51:50 compute-0 nova_compute[355794]: 2025-10-02 19:51:50.989 2 DEBUG nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.047 2 INFO nova.compute.manager [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Took 8.66 seconds to build instance.
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.065 2 DEBUG oslo_concurrency.lockutils [None req-ac6bd375-8ea0-4567-beee-67d8bbe6b32e 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:51:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794598156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.181 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3794598156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.350 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.350 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.351 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.360 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.360 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.361 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.372 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.372 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.373 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.383 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.384 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.385 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
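All twelve "skipping disk" lines are expected with RBD-backed instances: a network disk's libvirt XML carries <source protocol='rbd' .../> with no file path, so the local disk-info pass ignores it. A self-contained sketch of that filter over a made-up minimal domain XML:

    # Why RBD disks are "skipped": network disks carry no source file
    # path, so local disk accounting ignores them. XML is a toy example.
    import xml.etree.ElementTree as ET

    xml = """<domain><devices>
      <disk type='network' device='disk'>
        <source protocol='rbd' name='vms/58f8959a_disk'/>
      </disk>
      <disk type='file' device='disk'>
        <source file='/var/lib/nova/instances/x/disk'/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(xml).iter('disk'):
        path = disk.find('source').get('file')
        if path is None:
            print('skipping disk as it does not have a path')
        else:
            print('accounting for', path)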
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.896 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.898 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.87297821044922GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.898 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:51 compute-0 nova_compute[355794]: 2025-10-02 19:51:51.900 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.060 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.061 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.062 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.064 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.066 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
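The final view is straight arithmetic over the four placement allocations listed just above. A quick recomputation under those numbers (4 instances, each VCPU=1 / MEMORY_MB=512 / DISK_GB=2, plus the 512 MB host memory reservation from the inventory reported below):

    # Recompute the "Final resource view" from the per-instance
    # allocations logged above; reserved RAM is from the inventory line.
    instances = 4
    used_vcpus = instances * 1          # -> 4 of 8 total
    used_ram   = instances * 512 + 512  # -> 2560 MB, as logged
    used_disk  = instances * 2          # -> 8 GB, as logged
    print(used_vcpus, used_ram, used_disk)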
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.193 2 DEBUG nova.compute.manager [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.194 2 DEBUG oslo_concurrency.lockutils [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.196 2 DEBUG oslo_concurrency.lockutils [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.198 2 DEBUG oslo_concurrency.lockutils [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.198 2 DEBUG nova.compute.manager [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] No waiting events found dispatching network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.198 2 WARNING nova.compute.manager [req-9a2806ee-3ddd-441b-9460-120ce026e43b req-cecffc5b-c0ba-458e-820b-e6dd173e4e58 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received unexpected event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a for instance with vm_state active and task_state None.
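The warning is benign here: the build completed at 19:51:51, so by the time Neutron's network-vif-plugged event arrived there was no waiter left to pop, and the manager logs it as unexpected against the now-active instance. A toy pop-or-warn dispatch illustrating that ordering (the waiter map and names are hypothetical):

    # Toy pop-or-warn dispatch: an event arriving after its waiter was
    # resolved produces the "unexpected event" warning seen above.
    waiters = {}   # (instance_uuid, event_name) -> waiter; empty: done

    def dispatch(instance, event, vm_state, task_state):
        if waiters.pop((instance, event), None) is None:
            print('WARNING unexpected event %s for instance with '
                  'vm_state %s and task_state %s'
                  % (event, vm_state, task_state))

    dispatch('58f8959a-...', 'network-vif-plugged-90a967c2-...',
             'active', None)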
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.230 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:52 compute-0 ceph-mon[191910]: pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Oct 02 19:51:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:51:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773500095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.704 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.715 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.732 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
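Placement turns that inventory into schedulable capacity as (total - reserved) x allocation_ratio per resource class. Worked out with the numbers from the log line:

    # Effective capacity per resource class from the logged inventory:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2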
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.756 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:51:52 compute-0 nova_compute[355794]: 2025-10-02 19:51:52.757 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Oct 02 19:51:53 compute-0 nova_compute[355794]: 2025-10-02 19:51:53.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2773500095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:51:53 compute-0 nova_compute[355794]: 2025-10-02 19:51:53.759 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:53 compute-0 nova_compute[355794]: 2025-10-02 19:51:53.760 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:51:53 compute-0 nova_compute[355794]: 2025-10-02 19:51:53.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:54 compute-0 ceph-mon[191910]: pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Oct 02 19:51:54 compute-0 nova_compute[355794]: 2025-10-02 19:51:54.872 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:51:54 compute-0 nova_compute[355794]: 2025-10-02 19:51:54.872 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:51:54 compute-0 nova_compute[355794]: 2025-10-02 19:51:54.873 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:51:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 768 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Oct 02 19:51:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:51:56 compute-0 ceph-mon[191910]: pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 768 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Oct 02 19:51:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 811 KiB/s rd, 603 KiB/s wr, 77 op/s
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.657 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.683 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.683 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
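The cache blob written above is plain JSON; the fixed and floating addresses can be read straight out of it. A sketch over that structure, abbreviated to just the fields being read (values copied from the log):

    # Read fixed/floating IPs out of the network_info structure logged
    # above, abbreviated to the fields this walk actually touches.
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.227", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.174",
                                       "type": "floating"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], '->', floats)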
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.684 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.684 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.685 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:57 compute-0 nova_compute[355794]: 2025-10-02 19:51:57.685 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
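The run of "Running periodic task ComputeManager._*" lines is oslo.service iterating every method tagged with its periodic_task decorator. Minimal wiring that produces the same behavior (the Manager class and task body are hypothetical; the decorator and runner calls are the real oslo.service API):

    # Minimal oslo.service periodic-task wiring; Manager and the task
    # body are hypothetical, the decorator/runner calls are real.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _poll_volume_usage(self, context):
            print('Running periodic task _poll_volume_usage')

    Manager(cfg.CONF).run_periodic_tasks(context=None)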
Oct 02 19:51:57 compute-0 podman[431352]: 2025-10-02 19:51:57.713210629 +0000 UTC m=+0.127239674 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:51:57 compute-0 podman[431353]: 2025-10-02 19:51:57.721701015 +0000 UTC m=+0.138342950 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:51:58 compute-0 nova_compute[355794]: 2025-10-02 19:51:58.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:58 compute-0 ceph-mon[191910]: pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 811 KiB/s rd, 603 KiB/s wr, 77 op/s
Oct 02 19:51:58 compute-0 nova_compute[355794]: 2025-10-02 19:51:58.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 569 KiB/s wr, 92 op/s
Oct 02 19:51:59 compute-0 podman[157186]: time="2025-10-02T19:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:51:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:51:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9058 "" "Go-http-client/1.1"
Oct 02 19:52:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:00 compute-0 ceph-mon[191910]: pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 569 KiB/s wr, 92 op/s
Oct 02 19:52:00 compute-0 nova_compute[355794]: 2025-10-02 19:52:00.512 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 23 KiB/s wr, 122 op/s
Oct 02 19:52:01 compute-0 openstack_network_exporter[372736]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:52:01 compute-0 openstack_network_exporter[372736]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:52:01 compute-0 openstack_network_exporter[372736]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:01 compute-0 openstack_network_exporter[372736]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:01 compute-0 openstack_network_exporter[372736]: ERROR   19:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
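These exporter errors are environmental, not crashes: dpif-netdev/pmd-perf-show and pmd-rxq-show only exist for the userspace (DPDK/netdev) datapath and this host runs kernel OVS, while ovn-northd and a standalone ovsdb-server are simply not deployed on a compute node. The failing probe can be reproduced directly, assuming ovs-appctl is on PATH:

    # Reproduce the exporter's failing probe; on a kernel-datapath host
    # this returns the same "please specify an existing datapath" error.
    import subprocess

    r = subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-perf-show'],
                       capture_output=True, text=True)
    print(r.returncode, (r.stderr or r.stdout).strip())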
Oct 02 19:52:02 compute-0 ceph-mon[191910]: pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 23 KiB/s wr, 122 op/s
Oct 02 19:52:02 compute-0 nova_compute[355794]: 2025-10-02 19:52:02.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:02 compute-0 podman[431393]: 2025-10-02 19:52:02.692262632 +0000 UTC m=+0.110221512 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:52:02 compute-0 podman[431394]: 2025-10-02 19:52:02.709621394 +0000 UTC m=+0.110080648 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, version=9.4)
Oct 02 19:52:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 113 op/s
Oct 02 19:52:03 compute-0 nova_compute[355794]: 2025-10-02 19:52:03.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:52:03
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root']
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
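"prepared 0/10 changes" means the upmap balancer found the PG distribution already even, within the 0.05 max-misplaced budget logged above. The same state can be inspected from the CLI (standard ceph commands; output format varies by release):

    # Inspect the balancer state shown above (standard ceph CLI).
    import subprocess

    for cmd in (['ceph', 'balancer', 'status'],
                ['ceph', 'config', 'get', 'mgr',
                 'target_max_misplaced_ratio']):
        subprocess.run(cmd, check=False)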
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:03 compute-0 nova_compute[355794]: 2025-10-02 19:52:03.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
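The rbd_support handlers reload trash-purge and mirror-snapshot schedules per pool (an empty start_after= means a full scan). The loaded schedules can be listed with the rbd CLI; the pool names below are the ones from the log:

    # List the schedules the two handlers above reload (standard rbd
    # subcommands; pool names from the log lines).
    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls',
                        '--pool', pool], check=False)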
Oct 02 19:52:04 compute-0 ceph-mon[191910]: pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 113 op/s
Oct 02 19:52:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 0 B/s wr, 108 op/s
Oct 02 19:52:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:05 compute-0 podman[431433]: 2025-10-02 19:52:05.844866266 +0000 UTC m=+0.248520039 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:52:05 compute-0 podman[431432]: 2025-10-02 19:52:05.892988956 +0000 UTC m=+0.314793862 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:52:05 compute-0 podman[431439]: 2025-10-02 19:52:05.94915894 +0000 UTC m=+0.338274007 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:52:06 compute-0 ceph-mon[191910]: pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 0 B/s wr, 108 op/s
Oct 02 19:52:06 compute-0 nova_compute[355794]: 2025-10-02 19:52:06.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:06 compute-0 nova_compute[355794]: 2025-10-02 19:52:06.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:52:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 813 KiB/s rd, 0 B/s wr, 84 op/s
Oct 02 19:52:08 compute-0 nova_compute[355794]: 2025-10-02 19:52:08.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:08 compute-0 ceph-mon[191910]: pgmap v1410: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 813 KiB/s rd, 0 B/s wr, 84 op/s
Oct 02 19:52:08 compute-0 nova_compute[355794]: 2025-10-02 19:52:08.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 0 B/s wr, 77 op/s
Oct 02 19:52:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:10 compute-0 ceph-mon[191910]: pgmap v1411: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 0 B/s wr, 77 op/s
Oct 02 19:52:10 compute-0 podman[431491]: 2025-10-02 19:52:10.681167722 +0000 UTC m=+0.109072561 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:52:10 compute-0 podman[431490]: 2025-10-02 19:52:10.68861173 +0000 UTC m=+0.108886786 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
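(The health_status entries above come from edpm_ansible-managed podman healthchecks: each container's config_data embeds a 'healthcheck' dict whose 'test' command podman runs inside the container, and the outcome, healthy or a failing streak, is what gets journaled. A trimmed-down dict in the same shape as the node_exporter entry above, keeping only the healthcheck-relevant keys.)

    # Reduced from the node_exporter config_data logged above.
    config_data = {
        "image": "quay.io/prometheus/node-exporter:v1.5.0",
        "restart": "always",
        "net": "host",
        "healthcheck": {
            # Command podman executes inside the container on each probe.
            "test": "/openstack/healthcheck node_exporter",
            # Host directory bind-mounted at /openstack in the container.
            "mount": "/var/lib/openstack/healthchecks/node_exporter",
        },
    }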
Oct 02 19:52:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 0 B/s wr, 41 op/s
Oct 02 19:52:11 compute-0 ceph-mon[191910]: pgmap v1412: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 0 B/s wr, 41 op/s
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019265440841310152 of space, bias 1.0, pg target 0.5779632252393045 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
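(The pg_autoscaler pass above derives a raw PG target per pool from the fraction of cluster space the pool uses. The logged numbers are internally consistent with target = usage_ratio * bias * 300: 0.0019265440841310152 * 1.0 * 300 = 0.57796... for 'vms', and 5.087256625643029e-07 * 4.0 * 300 = 0.0006104... for 'cephfs.cephfs.meta'. The factor 300 is plausibly mon_target_pg_per_osd, default 100, times the three OSDs on this node, though that is an inference from the log, not something it states. A simplified sketch of the arithmetic.)

    import math

    TARGET_PG_FACTOR = 300  # inferred: mon_target_pg_per_osd (100) * 3 OSDs

    def pg_target(usage_ratio: float, bias: float) -> float:
        """Raw (un-quantized) PG target, matching the journal's numbers."""
        return usage_ratio * bias * TARGET_PG_FACTOR

    def quantize(target: float) -> int:
        """Round up to the next power of two (at least 1).

        The real autoscaler also honors pg_num_min and only changes
        pg_num when the target is far enough from the current value,
        which is why 'vms' stays at 32 above despite a tiny raw target.
        """
        return 1 if target <= 1 else 2 ** math.ceil(math.log2(target))

    # Reproduces the 'vms' line above (usage 0.0019265..., bias 1.0):
    assert abs(pg_target(0.0019265440841310152, 1.0)
               - 0.5779632252393045) < 1e-12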
Oct 02 19:52:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Oct 02 19:52:13 compute-0 nova_compute[355794]: 2025-10-02 19:52:13.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:13 compute-0 nova_compute[355794]: 2025-10-02 19:52:13.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:14 compute-0 ceph-mon[191910]: pgmap v1413: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Oct 02 19:52:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:16 compute-0 ceph-mon[191910]: pgmap v1414: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:18 compute-0 ceph-mon[191910]: pgmap v1415: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:18 compute-0 nova_compute[355794]: 2025-10-02 19:52:18.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:18 compute-0 nova_compute[355794]: 2025-10-02 19:52:18.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:19 compute-0 ovn_controller[88435]: 2025-10-02T19:52:19Z|00049|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Oct 02 19:52:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:52:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032352413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:52:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:52:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032352413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:52:20 compute-0 ceph-mon[191910]: pgmap v1416: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3032352413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:52:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3032352413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
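(The audit burst above is client.openstack, the OpenStack Ceph client at 192.168.122.10, polling cluster capacity with "df" and the 'volumes' pool quota, the usual periodic capacity check a storage driver performs. The same two monitor commands can be issued from Python with the rados bindings; a minimal sketch, assuming a reachable /etc/ceph/ceph.conf and the client.openstack keyring.)

    import json
    import rados

    # Conf path and client name mirror the conventions in this log,
    # but are assumptions for the sketch.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        if ret == 0:
            print(cmd["prefix"], "->", json.loads(out))
        else:
            print(cmd["prefix"], "failed:", errs)

    cluster.shutdown()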
Oct 02 19:52:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:21 compute-0 podman[431533]: 2025-10-02 19:52:21.691332774 +0000 UTC m=+0.124467891 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:52:22 compute-0 ceph-mon[191910]: pgmap v1417: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:23 compute-0 nova_compute[355794]: 2025-10-02 19:52:23.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:23 compute-0 nova_compute[355794]: 2025-10-02 19:52:23.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:24 compute-0 ceph-mon[191910]: pgmap v1418: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:26 compute-0 ceph-mon[191910]: pgmap v1419: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:27 compute-0 ovn_controller[88435]: 2025-10-02T19:52:27Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:8f:b8 192.168.0.24
Oct 02 19:52:27 compute-0 ovn_controller[88435]: 2025-10-02T19:52:27Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:8f:b8 192.168.0.24
Oct 02 19:52:28 compute-0 nova_compute[355794]: 2025-10-02 19:52:28.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:28 compute-0 ceph-mon[191910]: pgmap v1420: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:28 compute-0 podman[431552]: 2025-10-02 19:52:28.736013945 +0000 UTC m=+0.148476619 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:52:28 compute-0 podman[431553]: 2025-10-02 19:52:28.736616672 +0000 UTC m=+0.140165059 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:52:28 compute-0 nova_compute[355794]: 2025-10-02 19:52:28.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 235 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 56 KiB/s wr, 10 op/s
Oct 02 19:52:29 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 19:52:29 compute-0 podman[157186]: time="2025-10-02T19:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:52:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:52:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9054 "" "Go-http-client/1.1"
Oct 02 19:52:30 compute-0 ceph-mon[191910]: pgmap v1421: 321 pgs: 321 active+clean; 235 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 56 KiB/s wr, 10 op/s
Oct 02 19:52:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 256 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: ERROR   19:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:52:31 compute-0 openstack_network_exporter[372736]: 
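(The ERROR burst above is openstack-network-exporter probing for OVS/OVN control sockets: ovn-northd does not run on a compute node, so no PID can be found for it, and the dpif-netdev appctl calls fail because this host uses the kernel datapath rather than a userspace netdev one. The errors are noisy but expected on this role. A control socket is a <daemon>.<pid>.ctl file under the daemon's run directory; a sketch of the discovery step, assuming the conventional run directories.)

    import glob
    import os

    # Conventional run directories; the actual paths are configurable.
    RUNDIRS = ("/var/run/openvswitch", "/var/run/ovn")

    def control_sockets(daemon: str):
        """Find <daemon>.<pid>.ctl files, as ovs-appctl --target does."""
        hits = []
        for rundir in RUNDIRS:
            hits += glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        return hits

    # On this compute node this returns [], hence the
    # "no control socket files found for ovn-northd" errors above.
    print(control_sockets("ovn-northd"))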
Oct 02 19:52:32 compute-0 ceph-mon[191910]: pgmap v1422: 321 pgs: 321 active+clean; 256 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Oct 02 19:52:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:52:32.303 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:52:32.304 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:52:32.305 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 262 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:52:33 compute-0 nova_compute[355794]: 2025-10-02 19:52:33.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:52:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:52:33 compute-0 podman[431594]: 2025-10-02 19:52:33.704642462 +0000 UTC m=+0.117430454 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:52:33 compute-0 podman[431593]: 2025-10-02 19:52:33.742474848 +0000 UTC m=+0.157293254 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct 02 19:52:33 compute-0 nova_compute[355794]: 2025-10-02 19:52:33.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:34 compute-0 ceph-mon[191910]: pgmap v1423: 321 pgs: 321 active+clean; 262 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct 02 19:52:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:35.999 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.031 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.032 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.032 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid b88114e8-b15d-4a78-ac15-3dd7ee30b949 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.033 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid 58f8959a-5f7e-44a5-9dca-65be0506a4c1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.033 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.034 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.034 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.035 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.035 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.036 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.036 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.037 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.134 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.148 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.160 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:36 compute-0 nova_compute[355794]: 2025-10-02 19:52:36.162 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
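(The block above is ComputeManager._sync_power_states fanning out per-instance work: each instance UUID gets a named oslo.concurrency lock, the hypervisor's actual power state is queried and reconciled with the database record, and the lock is released, with the "waited"/"held" durations journaled by lockutils' synchronized wrapper, the "inner" frames in the paths above. A minimal sketch of the locking pattern using the underlying lockutils primitive; the sync function body is a hypothetical stand-in.)

    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(uuid: str):
        # Stand-in for the real work: compare the driver's power state
        # for this instance with the DB record and reconcile them.
        pass

    def sync_power_states(uuids):
        for uuid in uuids:
            # Emits Acquiring/acquired/released DEBUG lines with
            # waited/held timings when lockutils debug logging is on.
            with lockutils.lock(uuid):
                query_driver_power_state_and_sync(uuid)

    sync_power_states(["d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"])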
Oct 02 19:52:36 compute-0 ceph-mon[191910]: pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:36 compute-0 podman[431628]: 2025-10-02 19:52:36.689804302 +0000 UTC m=+0.120969587 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:52:36 compute-0 podman[431629]: 2025-10-02 19:52:36.721912356 +0000 UTC m=+0.135277348 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:52:36 compute-0 podman[431630]: 2025-10-02 19:52:36.757458682 +0000 UTC m=+0.174280346 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 19:52:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:37 compute-0 sudo[431685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:37 compute-0 sudo[431685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:37 compute-0 sudo[431685]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:37 compute-0 sudo[431710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:52:37 compute-0 sudo[431710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:37 compute-0 sudo[431710]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:37 compute-0 sudo[431735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:37 compute-0 sudo[431735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:37 compute-0 sudo[431735]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:37 compute-0 sudo[431760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 19:52:37 compute-0 sudo[431760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:38 compute-0 nova_compute[355794]: 2025-10-02 19:52:38.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:38 compute-0 ceph-mon[191910]: pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:38 compute-0 sudo[431760]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:52:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:52:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:38 compute-0 sudo[431803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:38 compute-0 sudo[431803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:38 compute-0 sudo[431803]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:38 compute-0 sudo[431828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:52:38 compute-0 sudo[431828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:38 compute-0 sudo[431828]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:38 compute-0 sudo[431853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:38 compute-0 nova_compute[355794]: 2025-10-02 19:52:38.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:38 compute-0 sudo[431853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:38 compute-0 sudo[431853]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:38 compute-0 sudo[431878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:52:38 compute-0 sudo[431878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:39 compute-0 sudo[431878]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 697a0e88-3af1-49d1-9c3d-6d0b9d308ea9 does not exist
Oct 02 19:52:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fc62705e-2601-4268-ada7-abbffc7f12a2 does not exist
Oct 02 19:52:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9e46f827-be61-4d25-99ee-ada142cfc985 does not exist
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:52:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:52:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:52:39 compute-0 sudo[431933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:39 compute-0 sudo[431933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:39 compute-0 sudo[431933]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:40 compute-0 sudo[431958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:52:40 compute-0 sudo[431958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:40 compute-0 sudo[431958]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:40 compute-0 sudo[431983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:40 compute-0 sudo[431983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:40 compute-0 sudo[431983]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:40 compute-0 sudo[432008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:52:40 compute-0 sudo[432008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
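(The sudo trail above is cephadm's remote execution pattern: /bin/true probes the sudo rule, "which python3" locates an interpreter, then the staged cephadm binary runs check-host, gather-facts, and finally ceph-volume lvm batch over the three prepared LVs, with --no-systemd because cephadm creates its own systemd units afterwards. Auditing which commands cephadm actually ran is a one-regex pass over this journal; a sketch, assuming journalctl output in the same format as the lines above.)

    import re
    import sys

    # Matches the sudo audit lines in this journal, e.g.
    # "sudo[432008]: ceph-admin : PWD=... ; USER=root ; COMMAND=/bin/python3 ..."
    SUDO_CMD = re.compile(r"sudo\[\d+\]: (\S+) : .*?COMMAND=(.+)$")

    for line in sys.stdin:
        m = SUDO_CMD.search(line)
        if m and "cephadm" in m.group(2):
            print(f"{m.group(1)} ran: {m.group(2)}")

    # Usage (hypothetical filename):
    #   journalctl --since "19:52" | python3 audit_cephadm.py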
Oct 02 19:52:40 compute-0 ceph-mon[191910]: pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:52:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:52:40 compute-0 podman[432070]: 2025-10-02 19:52:40.878744495 +0000 UTC m=+0.085771202 container create 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:52:40 compute-0 podman[432070]: 2025-10-02 19:52:40.84511819 +0000 UTC m=+0.052144937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:40 compute-0 systemd[1]: Started libpod-conmon-6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f.scope.
Oct 02 19:52:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Oct 02 19:52:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:41 compute-0 podman[432070]: 2025-10-02 19:52:41.009021279 +0000 UTC m=+0.216047986 container init 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:52:41 compute-0 podman[432070]: 2025-10-02 19:52:41.031170028 +0000 UTC m=+0.238196705 container start 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:52:41 compute-0 podman[432070]: 2025-10-02 19:52:41.037927058 +0000 UTC m=+0.244953745 container attach 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:52:41 compute-0 stoic_gates[432088]: 167 167
Oct 02 19:52:41 compute-0 systemd[1]: libpod-6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f.scope: Deactivated successfully.
Oct 02 19:52:41 compute-0 podman[432070]: 2025-10-02 19:52:41.040592219 +0000 UTC m=+0.247618896 container died 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 02 19:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9ccba53a9b0cca970661a55456ae2d17467a6b2eb859424bc62a827304f54f8-merged.mount: Deactivated successfully.
Oct 02 19:52:41 compute-0 podman[432084]: 2025-10-02 19:52:41.0921717 +0000 UTC m=+0.135310699 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:52:41 compute-0 podman[432087]: 2025-10-02 19:52:41.09477896 +0000 UTC m=+0.138058363 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
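The two health_status events above embed each exporter's full edpm_ansible deployment spec in the container labels as a Python-style dict literal (`config_data=...`, single quotes, bare `True`). A minimal parsing sketch, assuming the literal has already been cut out of the journal line into a string; `label_text` below is a trimmed, hypothetical copy of the node_exporter spec, not the full label:

```python
# Minimal sketch: recover the deployment spec embedded as the
# config_data label in the health_status event above. The literal is
# Python syntax (single quotes, bare True), so ast.literal_eval is the
# appropriate parser; json.loads would reject it.
import ast

label_text = (
    "{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
    "'restart': 'always', 'recreate': True, 'ports': ['9100:9100'], "
    "'healthcheck': {'test': '/openstack/healthcheck node_exporter', "
    "'mount': '/var/lib/openstack/healthchecks/node_exporter'}}"
)

config = ast.literal_eval(label_text)
print(config["healthcheck"]["test"])  # /openstack/healthcheck node_exporter
print(config["ports"])                # ['9100:9100']
```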
Oct 02 19:52:41 compute-0 podman[432070]: 2025-10-02 19:52:41.101459027 +0000 UTC m=+0.308485694 container remove 6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:52:41 compute-0 systemd[1]: libpod-conmon-6ba0c22ebc086a3e673e2b6ff36bc11223c4eceba823e19f7d5ea400f3c53a5f.scope: Deactivated successfully.
Oct 02 19:52:41 compute-0 podman[432153]: 2025-10-02 19:52:41.374011005 +0000 UTC m=+0.074869692 container create 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 19:52:41 compute-0 podman[432153]: 2025-10-02 19:52:41.346319919 +0000 UTC m=+0.047178596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:41 compute-0 systemd[1]: Started libpod-conmon-63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5.scope.
Oct 02 19:52:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:41 compute-0 podman[432153]: 2025-10-02 19:52:41.626933071 +0000 UTC m=+0.327791768 container init 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:52:41 compute-0 podman[432153]: 2025-10-02 19:52:41.647162689 +0000 UTC m=+0.348021376 container start 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:52:41 compute-0 podman[432153]: 2025-10-02 19:52:41.665975219 +0000 UTC m=+0.366833926 container attach 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:52:42 compute-0 ceph-mon[191910]: pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Oct 02 19:52:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 29 KiB/s wr, 13 op/s
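The mon and mgr repeat the same pgmap summary on each tick (v1427, v1428, and so on through this section). A minimal sketch for pulling the capacity figures out of one such line, assuming the message has already been stripped of its journal prefix:

```python
# Minimal sketch: extract pg count and capacity fields from a pgmap
# summary line like the two above. The regex mirrors the observed
# "pgmap vN: N pgs: ...; X data, Y used, Z / T avail" layout.
import re

line = ("pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, "
        "357 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.4 MiB/s wr, 47 op/s")

m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
    line,
)
print(m.groupdict())
```

The 60 GiB total here lines up with the three ~20 GiB OSD logical volumes reported by `ceph-volume lvm list` further below.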
Oct 02 19:52:43 compute-0 elated_hofstadter[432168]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:52:43 compute-0 elated_hofstadter[432168]: --> relative data size: 1.0
Oct 02 19:52:43 compute-0 elated_hofstadter[432168]: --> All data devices are unavailable
Oct 02 19:52:43 compute-0 systemd[1]: libpod-63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5.scope: Deactivated successfully.
Oct 02 19:52:43 compute-0 podman[432153]: 2025-10-02 19:52:43.082358904 +0000 UTC m=+1.783217591 container died 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:52:43 compute-0 systemd[1]: libpod-63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5.scope: Consumed 1.346s CPU time.
Oct 02 19:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d925b252d560809c61e1783c50a941c0498cfa1d50eed207b5b1d3769754a7d-merged.mount: Deactivated successfully.
Oct 02 19:52:43 compute-0 nova_compute[355794]: 2025-10-02 19:52:43.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:43 compute-0 podman[432153]: 2025-10-02 19:52:43.198200014 +0000 UTC m=+1.899058701 container remove 63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:52:43 compute-0 systemd[1]: libpod-conmon-63bac0dd1ec8b6f6e6ce492b7029f49ec9db9d478f252fe3c0e207be25c793f5.scope: Deactivated successfully.
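The elated_hofstadter run above is cephadm's ceph-volume OSD-prepare pass: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" indicates the three logical volumes are already tagged for existing OSDs, so there is nothing new to deploy. A minimal sketch of that availability idea, assuming (hypothetically) that in-use LVs are recognized by the ceph.* LV tags shown in the `lvm list` output below; `is_available` is an illustrative helper, not ceph-volume's actual implementation:

```python
# Minimal sketch of the idea behind "All data devices are unavailable":
# an LV already carrying ceph.* tags belongs to an existing OSD and
# cannot be re-prepared. Tag strings are trimmed from the log;
# is_available is a hypothetical helper.
def is_available(lv_tags: str) -> bool:
    return "ceph.osd_id=" not in lv_tags

lvs = {
    "/dev/ceph_vg0/ceph_lv0": "ceph.osd_id=0,ceph.type=block",
    "/dev/ceph_vg1/ceph_lv1": "ceph.osd_id=1,ceph.type=block",
    "/dev/ceph_vg2/ceph_lv2": "ceph.osd_id=2,ceph.type=block",
}
print(all(not is_available(t) for t in lvs.values()))  # True -> all unavailable
```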
Oct 02 19:52:43 compute-0 sudo[432008]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:43 compute-0 sudo[432211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:43 compute-0 sudo[432211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:43 compute-0 sudo[432211]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:43 compute-0 sudo[432236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:52:43 compute-0 sudo[432236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:43 compute-0 sudo[432236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:43 compute-0 sudo[432261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:43 compute-0 sudo[432261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:43 compute-0 sudo[432261]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:43 compute-0 sudo[432286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:52:43 compute-0 sudo[432286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:43 compute-0 nova_compute[355794]: 2025-10-02 19:52:43.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.28973078 +0000 UTC m=+0.077689797 container create 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.262295811 +0000 UTC m=+0.050254818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:44 compute-0 systemd[1]: Started libpod-conmon-0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983.scope.
Oct 02 19:52:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.414627572 +0000 UTC m=+0.202586599 container init 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.425003138 +0000 UTC m=+0.212962175 container start 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.4318815 +0000 UTC m=+0.219840537 container attach 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:52:44 compute-0 amazing_nobel[432366]: 167 167
Oct 02 19:52:44 compute-0 systemd[1]: libpod-0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983.scope: Deactivated successfully.
Oct 02 19:52:44 compute-0 ceph-mon[191910]: pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 29 KiB/s wr, 13 op/s
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.438293451 +0000 UTC m=+0.226252458 container died 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e593e37b1bb8318e05fe769422123253e6f416c6d5bfa5bd49901585415eab8-merged.mount: Deactivated successfully.
Oct 02 19:52:44 compute-0 podman[432350]: 2025-10-02 19:52:44.502010265 +0000 UTC m=+0.289969272 container remove 0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:52:44 compute-0 systemd[1]: libpod-conmon-0f58c1c4746bb2cdf0b02f9f553f3f80d7ca5815606f14734017389ff74f9983.scope: Deactivated successfully.
Oct 02 19:52:44 compute-0 podman[432390]: 2025-10-02 19:52:44.799726993 +0000 UTC m=+0.093060266 container create f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:52:44 compute-0 podman[432390]: 2025-10-02 19:52:44.75865412 +0000 UTC m=+0.051987163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:44 compute-0 systemd[1]: Started libpod-conmon-f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c.scope.
Oct 02 19:52:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7b89ee10f1c793bd2b16857ce72e045f298f2cfc636f888d6b776c4b37781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7b89ee10f1c793bd2b16857ce72e045f298f2cfc636f888d6b776c4b37781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7b89ee10f1c793bd2b16857ce72e045f298f2cfc636f888d6b776c4b37781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff7b89ee10f1c793bd2b16857ce72e045f298f2cfc636f888d6b776c4b37781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:44 compute-0 podman[432390]: 2025-10-02 19:52:44.964040432 +0000 UTC m=+0.257373475 container init f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:52:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 9.3 KiB/s wr, 3 op/s
Oct 02 19:52:44 compute-0 podman[432390]: 2025-10-02 19:52:44.992349845 +0000 UTC m=+0.285682868 container start f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 19:52:44 compute-0 podman[432390]: 2025-10-02 19:52:44.997535303 +0000 UTC m=+0.290868326 container attach f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 19:52:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]: {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     "0": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "devices": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "/dev/loop3"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             ],
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_name": "ceph_lv0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_size": "21470642176",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "name": "ceph_lv0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "tags": {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_name": "ceph",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.crush_device_class": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.encrypted": "0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_id": "0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.vdo": "0"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             },
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "vg_name": "ceph_vg0"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         }
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     ],
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     "1": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "devices": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "/dev/loop4"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             ],
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_name": "ceph_lv1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_size": "21470642176",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "name": "ceph_lv1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "tags": {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_name": "ceph",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.crush_device_class": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.encrypted": "0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_id": "1",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.vdo": "0"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             },
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "vg_name": "ceph_vg1"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         }
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     ],
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     "2": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "devices": [
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "/dev/loop5"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             ],
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_name": "ceph_lv2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_size": "21470642176",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "name": "ceph_lv2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "tags": {
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.cluster_name": "ceph",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.crush_device_class": "",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.encrypted": "0",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osd_id": "2",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:                 "ceph.vdo": "0"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             },
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "type": "block",
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:             "vg_name": "ceph_vg2"
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:         }
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]:     ]
Oct 02 19:52:45 compute-0 jolly_bardeen[432407]: }
Oct 02 19:52:45 compute-0 systemd[1]: libpod-f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c.scope: Deactivated successfully.
Oct 02 19:52:45 compute-0 podman[432390]: 2025-10-02 19:52:45.889507972 +0000 UTC m=+1.182841025 container died f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff7b89ee10f1c793bd2b16857ce72e045f298f2cfc636f888d6b776c4b37781-merged.mount: Deactivated successfully.
Oct 02 19:52:45 compute-0 podman[432390]: 2025-10-02 19:52:45.976982818 +0000 UTC m=+1.270315841 container remove f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:52:45 compute-0 systemd[1]: libpod-conmon-f47a25509c1e53aa1d0c122a6f5f4f8eaea9a67e51fd325ac028a28c838acd6c.scope: Deactivated successfully.
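The jolly_bardeen output above is the JSON answer to the `ceph-volume ... lvm list --format json` call issued via sudo at 19:52:43, keyed by OSD id. A minimal reduction sketch over a trimmed copy of that JSON (only the fields used below are kept; values are verbatim from the log):

```python
# Minimal sketch: reduce the `ceph-volume lvm list --format json`
# output above to an osd_id -> device map plus a capacity total.
import json

lvm_list_json = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "lv_size": "21470642176",
         "tags": {"ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "lv_size": "21470642176",
         "tags": {"ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}}],
  "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "lv_size": "21470642176",
         "tags": {"ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508"}}]
}
"""

total = 0
for osd_id, lvs in sorted(json.loads(lvm_list_json).items(),
                          key=lambda kv: int(kv[0])):
    for lv in lvs:
        total += int(lv["lv_size"])
        print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])

# 3 x 21470642176 B = 64411926528 B ~ 59.99 GiB, i.e. the
# "60 GiB / 60 GiB avail" figure in the surrounding pgmap lines.
print(round(total / 2**30, 2), "GiB")
```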
Oct 02 19:52:46 compute-0 sudo[432286]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:46 compute-0 sudo[432430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:46 compute-0 sudo[432430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:46 compute-0 sudo[432430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:46 compute-0 sudo[432455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:52:46 compute-0 sudo[432455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:46 compute-0 sudo[432455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:46 compute-0 ceph-mon[191910]: pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 9.3 KiB/s wr, 3 op/s
Oct 02 19:52:46 compute-0 sudo[432480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:46 compute-0 sudo[432480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:46 compute-0 sudo[432480]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:46 compute-0 sudo[432505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:52:46 compute-0 sudo[432505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.265563684 +0000 UTC m=+0.090669512 container create 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.231528709 +0000 UTC m=+0.056634627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:47 compute-0 systemd[1]: Started libpod-conmon-4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520.scope.
Oct 02 19:52:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.430604013 +0000 UTC m=+0.255709911 container init 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.448434937 +0000 UTC m=+0.273540755 container start 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:52:47 compute-0 clever_saha[432582]: 167 167
Oct 02 19:52:47 compute-0 systemd[1]: libpod-4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520.scope: Deactivated successfully.
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.485911993 +0000 UTC m=+0.311017911 container attach 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.486668163 +0000 UTC m=+0.311774031 container died 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:52:47 compute-0 ceph-mon[191910]: pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Oct 02 19:52:47 compute-0 nova_compute[355794]: 2025-10-02 19:52:47.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:47 compute-0 nova_compute[355794]: 2025-10-02 19:52:47.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-00106bc17ba58c3e3be6e9252040192105bc023bdb6a2e76c85162e320ff4fb0-merged.mount: Deactivated successfully.
Oct 02 19:52:47 compute-0 podman[432566]: 2025-10-02 19:52:47.630838017 +0000 UTC m=+0.455943845 container remove 4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:52:47 compute-0 systemd[1]: libpod-conmon-4941361d9ae2e609c3cd8c047b196543a306cad972f4dc5ae4493d6358bef520.scope: Deactivated successfully.
Oct 02 19:52:47 compute-0 podman[432606]: 2025-10-02 19:52:47.937591075 +0000 UTC m=+0.099469657 container create 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:52:47 compute-0 podman[432606]: 2025-10-02 19:52:47.893921263 +0000 UTC m=+0.055799885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:52:48 compute-0 systemd[1]: Started libpod-conmon-9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23.scope.
Oct 02 19:52:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9bcdbdcf0b61c7d2c3ba67ac0434642969e5ce2a81befc323a651438cadb61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9bcdbdcf0b61c7d2c3ba67ac0434642969e5ce2a81befc323a651438cadb61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9bcdbdcf0b61c7d2c3ba67ac0434642969e5ce2a81befc323a651438cadb61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9bcdbdcf0b61c7d2c3ba67ac0434642969e5ce2a81befc323a651438cadb61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:52:48 compute-0 podman[432606]: 2025-10-02 19:52:48.113757619 +0000 UTC m=+0.275636181 container init 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:52:48 compute-0 podman[432606]: 2025-10-02 19:52:48.132311993 +0000 UTC m=+0.294190535 container start 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:52:48 compute-0 podman[432606]: 2025-10-02 19:52:48.13822511 +0000 UTC m=+0.300103692 container attach 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:52:48 compute-0 nova_compute[355794]: 2025-10-02 19:52:48.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:48 compute-0 nova_compute[355794]: 2025-10-02 19:52:48.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:49 compute-0 strange_sanderson[432622]: {
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_id": 1,
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "type": "bluestore"
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     },
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_id": 2,
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "type": "bluestore"
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     },
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_id": 0,
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:         "type": "bluestore"
Oct 02 19:52:49 compute-0 strange_sanderson[432622]:     }
Oct 02 19:52:49 compute-0 strange_sanderson[432622]: }
Oct 02 19:52:49 compute-0 systemd[1]: libpod-9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23.scope: Deactivated successfully.
Oct 02 19:52:49 compute-0 systemd[1]: libpod-9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23.scope: Consumed 1.165s CPU time.
Oct 02 19:52:49 compute-0 podman[432606]: 2025-10-02 19:52:49.303300592 +0000 UTC m=+1.465179164 container died 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e9bcdbdcf0b61c7d2c3ba67ac0434642969e5ce2a81befc323a651438cadb61-merged.mount: Deactivated successfully.
Oct 02 19:52:49 compute-0 podman[432606]: 2025-10-02 19:52:49.413206814 +0000 UTC m=+1.575085376 container remove 9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:52:49 compute-0 systemd[1]: libpod-conmon-9520a1531ec84ebad1ab4611207734bc54f2154e6b7b76597e1d0d4b74c03f23.scope: Deactivated successfully.
Oct 02 19:52:49 compute-0 sudo[432505]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:52:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:52:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fb121714-75ba-419c-b68f-e5213d18797e does not exist
Oct 02 19:52:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5d5e7919-6497-4b05-987d-d89f1f8b2199 does not exist
Oct 02 19:52:49 compute-0 nova_compute[355794]: 2025-10-02 19:52:49.579 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:49 compute-0 sudo[432669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:52:49 compute-0 sudo[432669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:49 compute-0 sudo[432669]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:49 compute-0 sudo[432694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:52:49 compute-0 sudo[432694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:52:49 compute-0 sudo[432694]: pam_unix(sudo:session): session closed for user root
Oct 02 19:52:50 compute-0 ceph-mon[191910]: pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:52:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:50 compute-0 nova_compute[355794]: 2025-10-02 19:52:50.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:51 compute-0 nova_compute[355794]: 2025-10-02 19:52:51.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:51 compute-0 nova_compute[355794]: 2025-10-02 19:52:51.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:52:51 compute-0 nova_compute[355794]: 2025-10-02 19:52:51.968 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:52:51 compute-0 nova_compute[355794]: 2025-10-02 19:52:51.969 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:52:51 compute-0 nova_compute[355794]: 2025-10-02 19:52:51.970 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:52:52 compute-0 ceph-mon[191910]: pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 19:52:52 compute-0 podman[432719]: 2025-10-02 19:52:52.75015636 +0000 UTC m=+0.165658746 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:52:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:52 compute-0 nova_compute[355794]: 2025-10-02 19:52:52.983 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.008 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.009 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
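The cache update two entries above carries the full network_info structure for the instance: one OVS VIF with a fixed address (192.168.0.207) and an attached floating IP (192.168.122.220). A minimal sketch, assuming only the nesting shown in that logged entry, of walking the structure to list addresses:

    # network_info is a list of VIFs; each VIF nests network -> subnets ->
    # ips, and each fixed IP may carry floating_ips (shape per the log).
    def addresses(network_info: list) -> list[tuple[str, str]]:
        out = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    out.append((ip["address"], "fixed"))
                    for fip in ip.get("floating_ips", []):
                        out.append((fip["address"], "floating"))
        return out

    # Against the logged cache entry this yields
    # [('192.168.0.207', 'fixed'), ('192.168.122.220', 'floating')].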
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.009 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.036 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.037 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.037 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.037 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.037 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:52:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364412802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.568 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
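The capacity probe nova just ran is an ordinary subprocess call; the command line below is verbatim from the log. A minimal sketch of the same round trip, assuming the top-level "stats" keys of `ceph df --format=json` (total_bytes, total_used_bytes, total_avail_bytes):

    import json
    import subprocess

    def ceph_avail_bytes() -> int:
        # Same invocation nova logs via oslo_concurrency.processutils.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["stats"]["total_avail_bytes"]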
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.689 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.691 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.692 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.706 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.707 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.708 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.715 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.716 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.716 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.724 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.725 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.725 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:52:53 compute-0 nova_compute[355794]: 2025-10-02 19:52:53.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:54 compute-0 ceph-mon[191910]: pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1364412802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.287 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.290 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3205MB free_disk=59.85567855834961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.291 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.291 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.385 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.386 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.386 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.386 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.386 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.387 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.483 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:52:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099969259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:52:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:54 compute-0 nova_compute[355794]: 2025-10-02 19:52:54.994 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.008 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.030 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.035 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.036 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
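The inventory reported above fixes the schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. Worked against the logged numbers, that gives 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk:

    # Numbers copied from the logged inventory data.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2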
Oct 02 19:52:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1099969259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:52:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:55 compute-0 nova_compute[355794]: 2025-10-02 19:52:55.604 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:56 compute-0 ceph-mon[191910]: pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:57 compute-0 nova_compute[355794]: 2025-10-02 19:52:57.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:58 compute-0 ceph-mon[191910]: pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:58 compute-0 nova_compute[355794]: 2025-10-02 19:52:58.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:58 compute-0 nova_compute[355794]: 2025-10-02 19:52:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:52:59 compute-0 podman[432783]: 2025-10-02 19:52:59.689949883 +0000 UTC m=+0.108794715 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:52:59 compute-0 podman[432784]: 2025-10-02 19:52:59.7394537 +0000 UTC m=+0.143945139 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:52:59 compute-0 podman[157186]: time="2025-10-02T19:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:52:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:52:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9046 "" "Go-http-client/1.1"
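The two GETs above are the libpod REST API served on the socket named in the exporter config earlier (unix:///run/podman/podman.sock). A minimal stdlib sketch of the same containers/json call, assuming local access to that socket; http.client does not speak UNIX sockets natively, hence the small subclass:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")  # Host header only; unused for routing
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")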
Oct 02 19:53:00 compute-0 ceph-mon[191910]: pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: ERROR   19:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:53:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:53:02 compute-0 ceph-mon[191910]: pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 19:53:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 02 19:53:03 compute-0 nova_compute[355794]: 2025-10-02 19:53:03.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:53:03
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr']
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:03 compute-0 nova_compute[355794]: 2025-10-02 19:53:03.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:53:04 compute-0 ceph-mon[191910]: pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.297 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.298 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
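The warning above is about queueing, not failure: with [1] worker thread, every pollster registered below runs serially, so a polling cycle takes the sum of the pollster runtimes. A minimal sketch of the same effect with concurrent.futures:

    from concurrent.futures import ThreadPoolExecutor

    # One worker: submitted tasks execute strictly one after another,
    # in submission order, just like the single-threaded polling task.
    with ThreadPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(print, f"pollster {i}") for i in range(3)]
        for f in futures:
            f.result()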
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.311 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.317 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'name': 'vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.321 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:53:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:04.323 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/58f8959a-5f7e-44a5-9dca-65be0506a4c1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:53:04 compute-0 podman[432827]: 2025-10-02 19:53:04.714862726 +0000 UTC m=+0.134986281 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Oct 02 19:53:04 compute-0 podman[432828]: 2025-10-02 19:53:04.719039627 +0000 UTC m=+0.131847467 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:53:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.102 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 02 Oct 2025 19:53:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-da4bc0e5-d9d0-4827-bc64-ba195a20a219 x-openstack-request-id: req-da4bc0e5-d9d0-4827-bc64-ba195a20a219 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.102 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "58f8959a-5f7e-44a5-9dca-65be0506a4c1", "name": "vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg", "status": "ACTIVE", "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "user_id": "811fb7ac717e4ba9b9874e5454ee08f4", "metadata": {"metering.server_group": "d2d7e2b0-01e0-44b1-b2c7-fe502b333743"}, "hostId": "0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d", "image": {"id": "ce28338d-119e-49e1-ab67-60da8882593a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce28338d-119e-49e1-ab67-60da8882593a"}]}, "flavor": {"id": "8f0521f8-dc4e-4ca1-bf77-f443ae74db03", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8f0521f8-dc4e-4ca1-bf77-f443ae74db03"}]}, "created": "2025-10-02T19:51:40Z", "updated": "2025-10-02T19:51:51Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.24", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:8f:b8"}, {"version": 4, "addr": "192.168.122.207", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:8f:b8"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/58f8959a-5f7e-44a5-9dca-65be0506a4c1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/58f8959a-5f7e-44a5-9dca-65be0506a4c1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:51:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.103 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/58f8959a-5f7e-44a5-9dca-65be0506a4c1 used request id req-da4bc0e5-d9d0-4827-bc64-ba195a20a219 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
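The three novaclient lines above are a single GET to /v2.1/servers/<id> issued through a keystoneauth1 session. A sketch of the equivalent client call (auth URL, credentials and project are placeholders; only the server id comes from the log, and the agent itself reads its credentials from ceilometer.conf):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Placeholder credentials for illustration only.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default")
    sess = session.Session(auth=auth)
    nova = client.Client("2.1", session=sess, endpoint_type="internal")

    # Same request as req-da4bc0e5-d9d0-4827-bc64-ba195a20a219 above.
    server = nova.servers.get("58f8959a-5f7e-44a5-9dca-65be0506a4c1")
    print(server.name, server.status)  # vn-wkgthwg-... ACTIVE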
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.105 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '58f8959a-5f7e-44a5-9dca-65be0506a4c1', 'name': 'vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.111 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'name': 'vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
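discover_libvirt_polling merges the Nova view of each server with the local libvirt domain; the OS-EXT-SRV-ATTR:instance_name values above are the libvirt domain names on this hypervisor. A read-only sketch of that mapping:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for name in ("instance-00000002", "instance-00000004"):
        dom = conn.lookupByName(name)
        # The libvirt UUID matches the Nova server id seen in the log.
        print(name, dom.UUIDString(), dom.isActive())
    conn.close()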
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:53:05.113150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.193 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.194 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.194 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.267 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.268 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.269 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.338 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.339 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.340 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.409 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.410 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.411 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
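Each instance logs three disk.device.read.requests samples, one per attached device (root disk, ephemeral disk and config drive for this flavor). The counters come from libvirt's per-device block statistics; a sketch, with device names as assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000004")
    for dev in ("vda", "vdb", "hda"):  # assumed names, one per sample
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "read requests:", rd_req)  # cf. 840 / 173 / 124 above
    conn.close()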
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.414 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:53:05.414015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.445 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.447 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.447 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.485 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.486 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.487 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.518 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.519 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.520 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.553 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.554 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.554 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
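The disk.device.usage values line up with the flavor above: 1073741824 bytes (1 GiB) for the 1 GB root and ephemeral disks, plus a small config drive. libvirt exposes these sizes via blockInfo(); whether the meter reports capacity or allocation is an assumption in this sketch:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000004")
    capacity, allocation, physical = dom.blockInfo("vda")  # assumed device
    print(capacity)  # 1073741824 == 1 GiB
    conn.close()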
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.555 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.556 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.556 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.556 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 41701376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:53:05.555543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.557 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.558 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.558 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.559 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.560 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 9610815435 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.560 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 22315613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.560 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.560 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 6424829060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.560 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 35191695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.561 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.561 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 6278316166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.561 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 36317650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.561 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
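The write-latency volumes are cumulative nanoseconds, not per-write averages, which is why they run into the billions after a few minutes of guest uptime. libvirt reports them through blockStatsFlags(); a sketch:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000004")
    stats = dom.blockStatsFlags("vda")  # assumed device name
    # Total ns spent on completed writes, cf. "volume: 6424829060" above.
    print(stats["wr_total_times"])
    conn.close()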
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.562 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:53:05.559313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:53:05.563275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.590 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.620 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.656 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.691 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
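power.state volume 1 is libvirt's numeric domain state: VIR_DOMAIN_RUNNING == 1, consistent with all four instances being ACTIVE. A sketch:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000004")
    state, reason = dom.state()
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1 True
    conn.close()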
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.693 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.693 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.694 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.694 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.694 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.695 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.695 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.696 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 226 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.696 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:53:05.693580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.697 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.697 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.697 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.698 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:53:05.699704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.711 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.715 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 58f8959a-5f7e-44a5-9dca-65be0506a4c1 / tap90a967c2-93 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.716 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.720 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
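The .delta meters subtract the previous poll's counter from the current one, which explains the "No delta meter predecessor" line: instance 58f8959a-... was launched only a minute earlier, so its vNIC has no cached reading yet and the first delta comes out as 0. A sketch of that logic (the cache shape is an assumption, not ceilometer's actual structure):

    import libvirt

    _prev = {}  # (domain name, vNIC) -> last rx_bytes reading

    def rx_delta(dom, dev):
        rx_bytes = dom.interfaceStats(dev)[0]
        key = (dom.name(), dev)
        delta = rx_bytes - _prev.get(key, rx_bytes)  # no predecessor -> 0
        _prev[key] = rx_bytes
        return delta

    conn = libvirt.openReadOnly("qemu:///system")
    print(rx_delta(conn.lookupByName("instance-00000004"),
                   "tap90a967c2-93"))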
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.721 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.722 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.723 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.723 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg>]
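As the two preceding lines show, LibvirtInspector supplies no data for the *.rate meters, so the pollster raises PollsterPermanentError and the manager blacklists those resources for this source instead of retrying every cycle. The plugin-side pattern, reduced to a sketch (illustrative body, not the actual pollster source):

    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # Signalling that these resources can never be polled by this
        # pollster produces the "Prevent pollster ... anymore!" ERROR above.
        raise plugin_base.PollsterPermanentError(resources)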
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.724 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:53:05.721589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:53:05.723090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:53:05.724502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:53:05.726118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:53:05.728069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.730 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.730 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes.delta volume: 2540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:53:05.730093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.732 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.732 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.732 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.734 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.734 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.734 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.735 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.735 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.735 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.735 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.736 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.736 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.736 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.736 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:53:05.731995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:53:05.733945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.738 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.738 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:53:05.738085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes volume: 7370 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.740 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.740 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:53:05.740003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.740 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.741 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.741 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.741 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.741 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.741 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.742 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.742 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.742 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.742 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg>]
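
The ERROR above is the permanent-blacklist path: the libvirt inspector has no instantaneous rate data for OutgoingBytesRatePollster, so the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError for the affected server and the manager excludes that resource from this meter on later cycles. A minimal standalone sketch of the pattern with stand-in names (InspectorStub, run_once and "instance-1" are illustrative, not ceilometer's actual classes):

    class PollsterPermanentError(Exception):
        # Carries the resources that can never yield data for this meter.
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    def poll_meter(inspector, meter, resources):
        # Pollster side: give up permanently when the inspector
        # cannot ever provide this meter ("does not provide data").
        samples = []
        for res in resources:
            try:
                samples.append((res, inspector.inspect(meter, res)))
            except NotImplementedError:
                raise PollsterPermanentError(resources)
        return samples

    def run_once(inspector, meter, resources, blacklist):
        # Manager side: catch the permanent error and stop polling
        # the reported resources for this meter from now on.
        todo = [r for r in resources if r not in blacklist]
        try:
            return poll_meter(inspector, meter, todo)
        except PollsterPermanentError as exc:
            blacklist.update(exc.resources)
            return []

    class InspectorStub:
        def inspect(self, meter, res):
            raise NotImplementedError  # rate meters need data libvirt lacks

    bl = set()
    run_once(InspectorStub(), "network.outgoing.bytes.rate", ["instance-1"], bl)
    assert "instance-1" in bl  # next cycle skips it, as the ERROR line says
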
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:53:05.743712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.745 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.745 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:53:05.744807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.745 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.745 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/memory.usage volume: 48.96875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.747 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.747 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:53:05.746826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.747 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.747 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes volume: 8664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.748 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.749 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:53:05.748764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.749 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.749 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.749 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets volume: 63 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.750 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.751 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.751 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:53:05.750977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.751 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.752 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.752 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.752 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.752 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.752 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.753 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.753 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.753 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.753 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.754 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.754 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:53:05.754864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.755 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.756 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.757 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:53:05.756815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.757 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.757 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.757 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.758 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.759 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 40650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:53:05.758824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.759 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/cpu volume: 34970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.759 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/cpu volume: 36530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.759 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/cpu volume: 297740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
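
The cpu meter polled above is cumulative guest CPU time in nanoseconds (40650000000 ns for d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 in this cycle), so utilization has to be derived from the delta between two polls. A back-of-the-envelope sketch; the previous sample, the polling interval and the vCPU count are assumed for illustration:

    POLL_INTERVAL_S = 300           # assumed interval between cycles
    VCPUS = 1                       # assumed flavor vCPU count

    prev_cpu_ns = 40_350_000_000    # hypothetical previous sample
    curr_cpu_ns = 40_650_000_000    # value logged for d4e04444 above

    # CPU seconds consumed during the interval, divided by the
    # wall-clock CPU seconds available to the guest.
    used_s = (curr_cpu_ns - prev_cpu_ns) / 1e9
    cpu_util_pct = 100.0 * used_s / (POLL_INTERVAL_S * VCPUS)
    print(f"cpu_util ~ {cpu_util_pct:.2f}%")  # ~ 0.10% for these numbers
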
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.760 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.761 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:53:05.760795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.761 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.761 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.761 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 1897675157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.762 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 270926831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.762 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 180472901 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.762 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 1997650221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.762 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 337600166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.762 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 232324009 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.763 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 1764876744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.763 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 323566119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.763 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 193343486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:53:05.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:06 compute-0 ceph-mon[191910]: pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:07 compute-0 podman[432866]: 2025-10-02 19:53:07.70727627 +0000 UTC m=+0.117496406 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:53:07 compute-0 podman[432867]: 2025-10-02 19:53:07.722912335 +0000 UTC m=+0.131033095 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:53:07 compute-0 podman[432868]: 2025-10-02 19:53:07.805223614 +0000 UTC m=+0.200459731 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 19:53:08 compute-0 ceph-mon[191910]: pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:08 compute-0 nova_compute[355794]: 2025-10-02 19:53:08.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:08 compute-0 nova_compute[355794]: 2025-10-02 19:53:08.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:10 compute-0 ceph-mon[191910]: pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:11 compute-0 podman[432930]: 2025-10-02 19:53:11.680073254 +0000 UTC m=+0.088270678 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:53:11 compute-0 podman[432929]: 2025-10-02 19:53:11.700177709 +0000 UTC m=+0.114959208 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7)
Oct 02 19:53:12 compute-0 ceph-mon[191910]: pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022104765945497166 of space, bias 1.0, pg target 0.663142978364915 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:53:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:53:13 compute-0 nova_compute[355794]: 2025-10-02 19:53:13.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:13 compute-0 nova_compute[355794]: 2025-10-02 19:53:13.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:14 compute-0 ceph-mon[191910]: pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct 02 19:53:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 19:53:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:16 compute-0 ceph-mon[191910]: pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct 02 19:53:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:18 compute-0 nova_compute[355794]: 2025-10-02 19:53:18.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:18 compute-0 ceph-mon[191910]: pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:18 compute-0 nova_compute[355794]: 2025-10-02 19:53:18.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:53:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341598858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:53:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:53:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341598858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:53:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:20 compute-0 ceph-mon[191910]: pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3341598858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:53:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3341598858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.296088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800296245, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1237, "num_deletes": 251, "total_data_size": 1859574, "memory_usage": 1893344, "flush_reason": "Manual Compaction"}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800317834, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1819740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28896, "largest_seqno": 30132, "table_properties": {"data_size": 1813837, "index_size": 3234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12520, "raw_average_key_size": 19, "raw_value_size": 1802004, "raw_average_value_size": 2855, "num_data_blocks": 145, "num_entries": 631, "num_filter_entries": 631, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434680, "oldest_key_time": 1759434680, "file_creation_time": 1759434800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21837 microseconds, and 11818 cpu microseconds.
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.317945) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1819740 bytes OK
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.317978) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.320972) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.321004) EVENT_LOG_v1 {"time_micros": 1759434800320994, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.321035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1853952, prev total WAL file size 1853952, number of live WAL files 2.
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.324110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1777KB)], [65(6968KB)]
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800324224, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8955270, "oldest_snapshot_seqno": -1}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4937 keys, 7224971 bytes, temperature: kUnknown
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800371709, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7224971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7193065, "index_size": 18484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124824, "raw_average_key_size": 25, "raw_value_size": 7104740, "raw_average_value_size": 1439, "num_data_blocks": 761, "num_entries": 4937, "num_filter_entries": 4937, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759434800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.372064) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7224971 bytes
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.375266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.2 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.8 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(8.9) write-amplify(4.0) OK, records in: 5451, records dropped: 514 output_compression: NoCompression
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.375297) EVENT_LOG_v1 {"time_micros": 1759434800375281, "job": 36, "event": "compaction_finished", "compaction_time_micros": 47586, "compaction_time_cpu_micros": 31403, "output_level": 6, "num_output_files": 1, "total_output_size": 7224971, "num_input_records": 5451, "num_output_records": 4937, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800376690, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759434800380931, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.323802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.381298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.381306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.381310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.381313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:53:20.381331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:53:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:22 compute-0 ceph-mon[191910]: pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:23 compute-0 nova_compute[355794]: 2025-10-02 19:53:23.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:23 compute-0 podman[432977]: 2025-10-02 19:53:23.751313054 +0000 UTC m=+0.166264472 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:53:23 compute-0 nova_compute[355794]: 2025-10-02 19:53:23.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:24 compute-0 ceph-mon[191910]: pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct 02 19:53:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:26 compute-0 ceph-mon[191910]: pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:28 compute-0 nova_compute[355794]: 2025-10-02 19:53:28.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:28 compute-0 ceph-mon[191910]: pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:28 compute-0 nova_compute[355794]: 2025-10-02 19:53:28.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:29 compute-0 podman[157186]: time="2025-10-02T19:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:53:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:53:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Oct 02 19:53:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:30 compute-0 ceph-mon[191910]: pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:30 compute-0 podman[432997]: 2025-10-02 19:53:30.727511017 +0000 UTC m=+0.137173739 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:53:30 compute-0 podman[432996]: 2025-10-02 19:53:30.744753205 +0000 UTC m=+0.160892729 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:53:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:31 compute-0 openstack_network_exporter[372736]: ERROR   19:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:53:31 compute-0 openstack_network_exporter[372736]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:53:31 compute-0 openstack_network_exporter[372736]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:31 compute-0 openstack_network_exporter[372736]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:31 compute-0 openstack_network_exporter[372736]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:53:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:53:32.305 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:53:32.306 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:53:32.307 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:32 compute-0 ceph-mon[191910]: pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:33 compute-0 nova_compute[355794]: 2025-10-02 19:53:33.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:53:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:53:33 compute-0 nova_compute[355794]: 2025-10-02 19:53:33.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:34 compute-0 ceph-mon[191910]: pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:35 compute-0 podman[433038]: 2025-10-02 19:53:35.700608582 +0000 UTC m=+0.111475805 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:53:35 compute-0 podman[433039]: 2025-10-02 19:53:35.724080207 +0000 UTC m=+0.129785013 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, name=ubi9, version=9.4, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:53:36 compute-0 ceph-mon[191910]: pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:37 compute-0 ceph-mon[191910]: pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:38 compute-0 nova_compute[355794]: 2025-10-02 19:53:38.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:38 compute-0 podman[433077]: 2025-10-02 19:53:38.743247882 +0000 UTC m=+0.154124959 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 19:53:38 compute-0 podman[433078]: 2025-10-02 19:53:38.757343687 +0000 UTC m=+0.163290743 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid)
Oct 02 19:53:38 compute-0 podman[433079]: 2025-10-02 19:53:38.763352947 +0000 UTC m=+0.164982689 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:53:38 compute-0 nova_compute[355794]: 2025-10-02 19:53:38.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:40 compute-0 ceph-mon[191910]: pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:42 compute-0 ceph-mon[191910]: pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:42 compute-0 podman[433136]: 2025-10-02 19:53:42.706886123 +0000 UTC m=+0.129084783 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:53:42 compute-0 podman[433137]: 2025-10-02 19:53:42.750785361 +0000 UTC m=+0.154559371 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:53:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:43 compute-0 nova_compute[355794]: 2025-10-02 19:53:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:43 compute-0 nova_compute[355794]: 2025-10-02 19:53:43.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:44 compute-0 ceph-mon[191910]: pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:46 compute-0 ceph-mon[191910]: pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:48 compute-0 nova_compute[355794]: 2025-10-02 19:53:48.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:48 compute-0 ceph-mon[191910]: pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:48 compute-0 nova_compute[355794]: 2025-10-02 19:53:48.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:48 compute-0 nova_compute[355794]: 2025-10-02 19:53:48.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:53:48 compute-0 nova_compute[355794]: 2025-10-02 19:53:48.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:49 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:53:49 compute-0 sudo[433180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:49 compute-0 sudo[433180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:49 compute-0 sudo[433180]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:50 compute-0 sudo[433205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:53:50 compute-0 sudo[433205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:50 compute-0 sudo[433205]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:50 compute-0 sudo[433230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:50 compute-0 sudo[433230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:50 compute-0 sudo[433230]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:50 compute-0 ceph-mon[191910]: pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:50 compute-0 sudo[433255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:53:50 compute-0 sudo[433255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:50 compute-0 nova_compute[355794]: 2025-10-02 19:53:50.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:50 compute-0 sudo[433255]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:53:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5f41aaa3-dcbd-40bc-a4aa-2a66172e4082 does not exist
Oct 02 19:53:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5d5d5fc7-1a7e-41d7-a088-85b61581f609 does not exist
Oct 02 19:53:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1274401b-ed74-4857-98fb-2909e160cd60 does not exist
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:53:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:53:51 compute-0 sudo[433310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:51 compute-0 sudo[433310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:51 compute-0 sudo[433310]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:53:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:53:51 compute-0 sudo[433335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:53:51 compute-0 sudo[433335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:51 compute-0 sudo[433335]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:51 compute-0 sudo[433360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:51 compute-0 sudo[433360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:51 compute-0 sudo[433360]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:51 compute-0 sudo[433385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:53:51 compute-0 sudo[433385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.189318349 +0000 UTC m=+0.079840844 container create 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.163878162 +0000 UTC m=+0.054400617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:52 compute-0 systemd[1]: Started libpod-conmon-8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9.scope.
Oct 02 19:53:52 compute-0 ceph-mon[191910]: pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.357869831 +0000 UTC m=+0.248392336 container init 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.376962659 +0000 UTC m=+0.267485114 container start 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.384659933 +0000 UTC m=+0.275182488 container attach 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:53:52 compute-0 trusting_pare[433464]: 167 167
Oct 02 19:53:52 compute-0 systemd[1]: libpod-8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9.scope: Deactivated successfully.
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.396894679 +0000 UTC m=+0.287417154 container died 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4535902c768fe258bc2e718216c48ad2b9fd3b1c72789975e68dccddcf3ae752-merged.mount: Deactivated successfully.
Oct 02 19:53:52 compute-0 podman[433448]: 2025-10-02 19:53:52.49696227 +0000 UTC m=+0.387484725 container remove 8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:53:52 compute-0 systemd[1]: libpod-conmon-8aa7da13d77f11259b111a3f5f413209f2b033d0544b8ed8a545ad8a6ee12fb9.scope: Deactivated successfully.
Oct 02 19:53:52 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:53:52 compute-0 podman[433488]: 2025-10-02 19:53:52.775577829 +0000 UTC m=+0.085412793 container create fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:53:52 compute-0 podman[433488]: 2025-10-02 19:53:52.738465432 +0000 UTC m=+0.048300446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:52 compute-0 systemd[1]: Started libpod-conmon-fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a.scope.
Oct 02 19:53:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:52 compute-0 podman[433488]: 2025-10-02 19:53:52.98057274 +0000 UTC m=+0.290407724 container init fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.986 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.987 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.987 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:53:52 compute-0 nova_compute[355794]: 2025-10-02 19:53:52.987 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:53:53 compute-0 podman[433488]: 2025-10-02 19:53:53.013347292 +0000 UTC m=+0.323182246 container start fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:53:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:53 compute-0 podman[433488]: 2025-10-02 19:53:53.080728823 +0000 UTC m=+0.390563807 container attach fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:53:53 compute-0 nova_compute[355794]: 2025-10-02 19:53:53.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:53 compute-0 nova_compute[355794]: 2025-10-02 19:53:53.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.170 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.193 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.193 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.194 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.194 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.225 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.225 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.226 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.226 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.226 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:54 compute-0 elated_nash[433505]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:53:54 compute-0 elated_nash[433505]: --> relative data size: 1.0
Oct 02 19:53:54 compute-0 elated_nash[433505]: --> All data devices are unavailable
Oct 02 19:53:54 compute-0 systemd[1]: libpod-fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a.scope: Deactivated successfully.
Oct 02 19:53:54 compute-0 podman[433488]: 2025-10-02 19:53:54.282806898 +0000 UTC m=+1.592641872 container died fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:53:54 compute-0 systemd[1]: libpod-fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a.scope: Consumed 1.197s CPU time.
Oct 02 19:53:54 compute-0 ceph-mon[191910]: pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e034eb990065fc95ebb690d86b236d9b1cab008141911ec0bdf83648ffc2bbcf-merged.mount: Deactivated successfully.
Oct 02 19:53:54 compute-0 podman[433488]: 2025-10-02 19:53:54.393226785 +0000 UTC m=+1.703061749 container remove fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:53:54 compute-0 systemd[1]: libpod-conmon-fc0ba2bb434e78ba1431d12559d6d199a3202ac956d2fa5f47add4e9c30f180a.scope: Deactivated successfully.
Oct 02 19:53:54 compute-0 sudo[433385]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:54 compute-0 podman[433536]: 2025-10-02 19:53:54.444786796 +0000 UTC m=+0.131146199 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:53:54 compute-0 sudo[433585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:54 compute-0 sudo[433585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:54 compute-0 sudo[433585]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:54 compute-0 sudo[433610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:53:54 compute-0 sudo[433610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:54 compute-0 sudo[433610]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:53:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016255085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:53:54 compute-0 sudo[433635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:54 compute-0 sudo[433635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:54 compute-0 sudo[433635]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.731 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:54 compute-0 sudo[433662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:53:54 compute-0 sudo[433662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.834 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.834 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.834 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.840 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.841 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.841 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.846 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.846 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.847 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.851 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.852 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:54 compute-0 nova_compute[355794]: 2025-10-02 19:53:54.852 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:53:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.284 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.286 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3206MB free_disk=59.85567855834961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.287 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.288 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3016255085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.3309231 +0000 UTC m=+0.076581547 container create aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.305145905 +0000 UTC m=+0.050804372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:55 compute-0 systemd[1]: Started libpod-conmon-aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff.scope.
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.409 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.410 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.410 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.411 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.412 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.412 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:53:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.47382375 +0000 UTC m=+0.219482247 container init aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.485048449 +0000 UTC m=+0.230706906 container start aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.490063612 +0000 UTC m=+0.235722109 container attach aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 19:53:55 compute-0 heuristic_blackwell[433738]: 167 167
Oct 02 19:53:55 compute-0 systemd[1]: libpod-aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff.scope: Deactivated successfully.
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.505266237 +0000 UTC m=+0.250924714 container died aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:53:55 compute-0 nova_compute[355794]: 2025-10-02 19:53:55.548 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-479eefbcc7f224441da4f499ae42f9d6a5444281661ff07fccfbda6a19720269-merged.mount: Deactivated successfully.
Oct 02 19:53:55 compute-0 podman[433725]: 2025-10-02 19:53:55.575073973 +0000 UTC m=+0.320732430 container remove aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:53:55 compute-0 systemd[1]: libpod-conmon-aac3d883b55461ea288cae898e66f5176fc29f3a4f1776f2288dabf026e22dff.scope: Deactivated successfully.
Oct 02 19:53:55 compute-0 podman[433783]: 2025-10-02 19:53:55.851927825 +0000 UTC m=+0.092893411 container create db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 19:53:55 compute-0 podman[433783]: 2025-10-02 19:53:55.807105553 +0000 UTC m=+0.048071179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:55 compute-0 systemd[1]: Started libpod-conmon-db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c.scope.
Oct 02 19:53:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd80cc18a1fa5867258a6ce059f3d344ca6d828962a9d86ab9a5a250e9159c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd80cc18a1fa5867258a6ce059f3d344ca6d828962a9d86ab9a5a250e9159c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd80cc18a1fa5867258a6ce059f3d344ca6d828962a9d86ab9a5a250e9159c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dd80cc18a1fa5867258a6ce059f3d344ca6d828962a9d86ab9a5a250e9159c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:56 compute-0 podman[433783]: 2025-10-02 19:53:56.023415895 +0000 UTC m=+0.264381501 container init db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:53:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:53:56 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749802145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:53:56 compute-0 podman[433783]: 2025-10-02 19:53:56.055875479 +0000 UTC m=+0.296841075 container start db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 19:53:56 compute-0 podman[433783]: 2025-10-02 19:53:56.061500448 +0000 UTC m=+0.302466044 container attach db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:53:56 compute-0 nova_compute[355794]: 2025-10-02 19:53:56.082 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:56 compute-0 nova_compute[355794]: 2025-10-02 19:53:56.094 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:53:56 compute-0 nova_compute[355794]: 2025-10-02 19:53:56.107 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:53:56 compute-0 nova_compute[355794]: 2025-10-02 19:53:56.109 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:53:56 compute-0 nova_compute[355794]: 2025-10-02 19:53:56.109 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:56 compute-0 ceph-mon[191910]: pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:56 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2749802145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:53:56 compute-0 silly_banzai[433799]: {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     "0": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "devices": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "/dev/loop3"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             ],
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_name": "ceph_lv0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_size": "21470642176",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "name": "ceph_lv0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "tags": {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_name": "ceph",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.crush_device_class": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.encrypted": "0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_id": "0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.vdo": "0"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             },
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "vg_name": "ceph_vg0"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         }
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     ],
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     "1": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "devices": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "/dev/loop4"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             ],
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_name": "ceph_lv1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_size": "21470642176",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "name": "ceph_lv1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "tags": {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_name": "ceph",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.crush_device_class": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.encrypted": "0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_id": "1",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.vdo": "0"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             },
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "vg_name": "ceph_vg1"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         }
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     ],
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     "2": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "devices": [
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "/dev/loop5"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             ],
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_name": "ceph_lv2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_size": "21470642176",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "name": "ceph_lv2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "tags": {
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.cluster_name": "ceph",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.crush_device_class": "",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.encrypted": "0",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osd_id": "2",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:                 "ceph.vdo": "0"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             },
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "type": "block",
Oct 02 19:53:56 compute-0 silly_banzai[433799]:             "vg_name": "ceph_vg2"
Oct 02 19:53:56 compute-0 silly_banzai[433799]:         }
Oct 02 19:53:56 compute-0 silly_banzai[433799]:     ]
Oct 02 19:53:56 compute-0 silly_banzai[433799]: }
Oct 02 19:53:56 compute-0 systemd[1]: libpod-db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c.scope: Deactivated successfully.
Oct 02 19:53:56 compute-0 podman[433783]: 2025-10-02 19:53:56.892956188 +0000 UTC m=+1.133921804 container died db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dd80cc18a1fa5867258a6ce059f3d344ca6d828962a9d86ab9a5a250e9159c1-merged.mount: Deactivated successfully.
Oct 02 19:53:56 compute-0 podman[433783]: 2025-10-02 19:53:56.990232225 +0000 UTC m=+1.231197811 container remove db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:53:57 compute-0 systemd[1]: libpod-conmon-db53b1e7a7c74d4d63c2dfee5903badbed9545742a40630816366f07d1db564c.scope: Deactivated successfully.
Oct 02 19:53:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:57 compute-0 sudo[433662]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:57 compute-0 sudo[433821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:57 compute-0 sudo[433821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:57 compute-0 sudo[433821]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:57 compute-0 sudo[433846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:53:57 compute-0 sudo[433846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:57 compute-0 sudo[433846]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:57 compute-0 sudo[433871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:53:57 compute-0 sudo[433871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:57 compute-0 sudo[433871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:53:57 compute-0 nova_compute[355794]: 2025-10-02 19:53:57.490 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:57 compute-0 nova_compute[355794]: 2025-10-02 19:53:57.490 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:57 compute-0 nova_compute[355794]: 2025-10-02 19:53:57.491 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:57 compute-0 sudo[433896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:53:57 compute-0 sudo[433896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:53:57 compute-0 nova_compute[355794]: 2025-10-02 19:53:57.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.125153855 +0000 UTC m=+0.096128778 container create db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.089792384 +0000 UTC m=+0.060767347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:58 compute-0 systemd[1]: Started libpod-conmon-db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad.scope.
Oct 02 19:53:58 compute-0 nova_compute[355794]: 2025-10-02 19:53:58.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.286146386 +0000 UTC m=+0.257121299 container init db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.296119431 +0000 UTC m=+0.267094324 container start db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.30099461 +0000 UTC m=+0.271969493 container attach db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:53:58 compute-0 hungry_wright[433976]: 167 167
Oct 02 19:53:58 compute-0 systemd[1]: libpod-db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad.scope: Deactivated successfully.
Oct 02 19:53:58 compute-0 conmon[433976]: conmon db963150513845891a61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad.scope/container/memory.events
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.308652494 +0000 UTC m=+0.279627377 container died db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da4ba88f0a9abc3de4f8e3d002b57227f4b6bda6948e2ae28ad0cf3b4cc93ac-merged.mount: Deactivated successfully.
Oct 02 19:53:58 compute-0 ceph-mon[191910]: pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:58 compute-0 podman[433960]: 2025-10-02 19:53:58.370745065 +0000 UTC m=+0.341719948 container remove db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 19:53:58 compute-0 systemd[1]: libpod-conmon-db963150513845891a618efe469b6b09442364776e4040758e10412a1ad8d8ad.scope: Deactivated successfully.
Oct 02 19:53:58 compute-0 podman[433999]: 2025-10-02 19:53:58.616221913 +0000 UTC m=+0.081665002 container create 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 19:53:58 compute-0 podman[433999]: 2025-10-02 19:53:58.590202711 +0000 UTC m=+0.055645880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:53:58 compute-0 systemd[1]: Started libpod-conmon-730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd.scope.
Oct 02 19:53:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8a09c4bc26ecf73f0cf453d83198b07111e4eabe1a7f1adc337b44a8af0b0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8a09c4bc26ecf73f0cf453d83198b07111e4eabe1a7f1adc337b44a8af0b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8a09c4bc26ecf73f0cf453d83198b07111e4eabe1a7f1adc337b44a8af0b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8a09c4bc26ecf73f0cf453d83198b07111e4eabe1a7f1adc337b44a8af0b0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:53:58 compute-0 podman[433999]: 2025-10-02 19:53:58.767204798 +0000 UTC m=+0.232647937 container init 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:53:58 compute-0 podman[433999]: 2025-10-02 19:53:58.799849666 +0000 UTC m=+0.265292765 container start 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 19:53:58 compute-0 podman[433999]: 2025-10-02 19:53:58.817233869 +0000 UTC m=+0.282676978 container attach 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:53:58 compute-0 nova_compute[355794]: 2025-10-02 19:53:58.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:53:59 compute-0 podman[157186]: time="2025-10-02T19:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:53:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47839 "" "Go-http-client/1.1"
Oct 02 19:53:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9476 "" "Go-http-client/1.1"
Oct 02 19:53:59 compute-0 determined_lalande[434015]: {
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_id": 1,
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "type": "bluestore"
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     },
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_id": 2,
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "type": "bluestore"
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     },
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_id": 0,
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:53:59 compute-0 determined_lalande[434015]:         "type": "bluestore"
Oct 02 19:53:59 compute-0 determined_lalande[434015]:     }
Oct 02 19:53:59 compute-0 determined_lalande[434015]: }
Oct 02 19:53:59 compute-0 systemd[1]: libpod-730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd.scope: Deactivated successfully.
Oct 02 19:53:59 compute-0 systemd[1]: libpod-730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd.scope: Consumed 1.127s CPU time.
Oct 02 19:53:59 compute-0 podman[433999]: 2025-10-02 19:53:59.930076891 +0000 UTC m=+1.395520020 container died 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf8a09c4bc26ecf73f0cf453d83198b07111e4eabe1a7f1adc337b44a8af0b0d-merged.mount: Deactivated successfully.
Oct 02 19:54:00 compute-0 podman[433999]: 2025-10-02 19:54:00.055713412 +0000 UTC m=+1.521156501 container remove 730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:54:00 compute-0 systemd[1]: libpod-conmon-730337b7d3ae265a6579da0eb602072226566095578ceb64b7e70cec5b5cc2fd.scope: Deactivated successfully.
Oct 02 19:54:00 compute-0 sudo[433896]: pam_unix(sudo:session): session closed for user root
Oct 02 19:54:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:54:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:54:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:54:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:54:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1ac35a66-5307-4b79-afc9-579d2f500d0d does not exist
Oct 02 19:54:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d8f5b492-99a0-4d8d-9490-41b6bc4833e4 does not exist
Oct 02 19:54:00 compute-0 sudo[434062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:54:00 compute-0 sudo[434062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:54:00 compute-0 sudo[434062]: pam_unix(sudo:session): session closed for user root
Oct 02 19:54:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:00 compute-0 sudo[434087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:54:00 compute-0 sudo[434087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:54:00 compute-0 sudo[434087]: pam_unix(sudo:session): session closed for user root
Oct 02 19:54:00 compute-0 ceph-mon[191910]: pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:54:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:54:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:01 compute-0 openstack_network_exporter[372736]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:01 compute-0 openstack_network_exporter[372736]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:01 compute-0 openstack_network_exporter[372736]: ERROR   19:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:54:01 compute-0 openstack_network_exporter[372736]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:54:01 compute-0 openstack_network_exporter[372736]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:54:01 compute-0 podman[434112]: 2025-10-02 19:54:01.703113691 +0000 UTC m=+0.118534974 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:54:01 compute-0 podman[434113]: 2025-10-02 19:54:01.734231478 +0000 UTC m=+0.135060093 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:54:02 compute-0 ceph-mon[191910]: pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:03 compute-0 nova_compute[355794]: 2025-10-02 19:54:03.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:54:03
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta']
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:03 compute-0 nova_compute[355794]: 2025-10-02 19:54:03.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:54:04 compute-0 ceph-mgr[192222]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2078717049
Oct 02 19:54:04 compute-0 ceph-mon[191910]: pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:04 compute-0 nova_compute[355794]: 2025-10-02 19:54:04.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:06 compute-0 ceph-mon[191910]: pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:06 compute-0 podman[434154]: 2025-10-02 19:54:06.701520901 +0000 UTC m=+0.122089178 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:54:06 compute-0 podman[434155]: 2025-10-02 19:54:06.717751412 +0000 UTC m=+0.137310342 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:54:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:07 compute-0 ceph-mon[191910]: pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:08 compute-0 nova_compute[355794]: 2025-10-02 19:54:08.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:08 compute-0 nova_compute[355794]: 2025-10-02 19:54:08.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:09 compute-0 podman[434193]: 2025-10-02 19:54:09.661919234 +0000 UTC m=+0.082695691 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 19:54:09 compute-0 podman[434194]: 2025-10-02 19:54:09.688753937 +0000 UTC m=+0.094854113 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:54:09 compute-0 podman[434195]: 2025-10-02 19:54:09.704607329 +0000 UTC m=+0.117695071 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:54:10 compute-0 ceph-mon[191910]: pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:12 compute-0 ceph-mon[191910]: pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022104765945497166 of space, bias 1.0, pg target 0.663142978364915 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:54:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
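[editor's note] The pg_autoscaler lines above are internally consistent: each "pg target" equals usage_ratio * bias * K with K = 300. The 300 factor is not printed anywhere in the log; reading it as mon_target_pg_per_osd (default 100) times 3 OSDs behind this 60 GiB cluster is an assumption inferred from the numbers. A quick check against the logged values:

```python
# Reproduce the raw (pre-quantization) pg targets from the autoscaler lines above.
# K = 300 is an assumption (e.g. mon_target_pg_per_osd=100 across 3 OSDs).
pools = [
    (".mgr",               7.185749983720779e-06, 1.0),   # -> 0.0021557249951162337
    ("vms",                0.0022104765945497166, 1.0),   # -> 0.663142978364915
    ("images",             0.00025334537995702286, 1.0),  # -> 0.07600361398710685
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),   # -> 0.0006104707950771635
]
for name, ratio, bias in pools:
    print(f"{name}: pg target {ratio * bias * 300}")
```

The quantized values (32, 16, ...) then reflect the autoscaler's power-of-two rounding and its reluctance to shrink pools unless the target is far from the current pg_num.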
Oct 02 19:54:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:13 compute-0 nova_compute[355794]: 2025-10-02 19:54:13.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:13 compute-0 podman[434252]: 2025-10-02 19:54:13.72729504 +0000 UTC m=+0.146479106 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:54:13 compute-0 podman[434251]: 2025-10-02 19:54:13.728907353 +0000 UTC m=+0.142595353 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
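[editor's note] The node_exporter command line in the event above restricts the systemd collector with --collector.systemd.unit-include; the doubled backslash (\\.service) is Python-repr escaping in config_data, so the actual pattern ends in \.service. A sketch of which units it selects, assuming (as exporter-toolkit patterns usually are) that the regex is anchored, so Python's fullmatch() is the closest analogue; the unit names below are illustrative:

```python
import re

# The unit-include filter from the node_exporter command line above.
pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
for unit in ["edpm_iscsid.service", "ovs-vswitchd.service", "virtqemud.service", "sshd.service"]:
    print(unit, "included" if pat.fullmatch(unit) else "excluded")
```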
Oct 02 19:54:13 compute-0 nova_compute[355794]: 2025-10-02 19:54:13.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:14 compute-0 ceph-mon[191910]: pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:16 compute-0 ceph-mon[191910]: pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:18 compute-0 nova_compute[355794]: 2025-10-02 19:54:18.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:18 compute-0 ceph-mon[191910]: pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:18 compute-0 nova_compute[355794]: 2025-10-02 19:54:18.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:54:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045385470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:54:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:54:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045385470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:54:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:20 compute-0 ceph-mon[191910]: pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3045385470' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:54:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3045385470' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:54:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:22 compute-0 ceph-mon[191910]: pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:23 compute-0 nova_compute[355794]: 2025-10-02 19:54:23.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:23 compute-0 nova_compute[355794]: 2025-10-02 19:54:23.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:24 compute-0 ceph-mon[191910]: pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:24 compute-0 podman[434295]: 2025-10-02 19:54:24.695899097 +0000 UTC m=+0.118683727 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:54:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:25 compute-0 ceph-mon[191910]: pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:28 compute-0 ceph-mon[191910]: pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:28 compute-0 nova_compute[355794]: 2025-10-02 19:54:28.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:28 compute-0 nova_compute[355794]: 2025-10-02 19:54:28.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:29 compute-0 podman[157186]: time="2025-10-02T19:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:54:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:54:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9044 "" "Go-http-client/1.1"
Oct 02 19:54:30 compute-0 ceph-mon[191910]: pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:31 compute-0 openstack_network_exporter[372736]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:31 compute-0 openstack_network_exporter[372736]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:31 compute-0 openstack_network_exporter[372736]: ERROR   19:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:54:31 compute-0 openstack_network_exporter[372736]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:54:31 compute-0 openstack_network_exporter[372736]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
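[editor's note] The four exporter ERRORs above are expected on a compute node: ovn-northd runs on the control plane, so no ovn-northd control socket exists here, and the dpif-netdev/* commands only apply to the userspace (DPDK) datapath, while this host uses the kernel datapath (datapath_type "system" in the VIF details logged at 19:54:55 below). The exporter's appctl.go calls map onto these CLI invocations; a sketch, assuming default control-socket locations under /var/run/openvswitch:

```python
import subprocess

# Reproduce the exporter's probes; both are expected to fail on this host.
for args in (["ovs-appctl", "-t", "ovs-vswitchd", "dpif-netdev/pmd-rxq-show"],
             ["ovn-appctl", "-t", "ovn-northd", "status"]):
    r = subprocess.run(args, capture_output=True, text=True)
    print(args[-1], "->", r.returncode, (r.stdout or r.stderr).strip())
```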
Oct 02 19:54:32 compute-0 ceph-mon[191910]: pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:54:32.306 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:54:32.306 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:54:32.307 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
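[editor's note] The Acquiring/acquired/released triplet above, with waited/held timings reported from a frame named "inner", is the signature of oslo.concurrency's lock wrapper. A minimal sketch of the pattern, not the agent's actual code:

```python
from oslo_concurrency import lockutils

# lockutils.synchronized wraps the function in a named in-process lock and,
# with debug logging on, emits the acquire/wait/hold lines seen above.
@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    pass  # inspect monitored child processes while holding the lock

_check_child_processes()
```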
Oct 02 19:54:32 compute-0 podman[434314]: 2025-10-02 19:54:32.695015529 +0000 UTC m=+0.108540307 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:54:32 compute-0 podman[434315]: 2025-10-02 19:54:32.709258228 +0000 UTC m=+0.114055994 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:33 compute-0 nova_compute[355794]: 2025-10-02 19:54:33.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:54:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:54:33 compute-0 nova_compute[355794]: 2025-10-02 19:54:33.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:34 compute-0 ceph-mon[191910]: pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:36 compute-0 ceph-mon[191910]: pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:37 compute-0 podman[434358]: 2025-10-02 19:54:37.726274692 +0000 UTC m=+0.132121984 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, release-0.7.12=, container_name=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 19:54:37 compute-0 podman[434357]: 2025-10-02 19:54:37.738739164 +0000 UTC m=+0.152468085 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:54:38 compute-0 ceph-mon[191910]: pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:38 compute-0 nova_compute[355794]: 2025-10-02 19:54:38.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:38 compute-0 nova_compute[355794]: 2025-10-02 19:54:38.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:40 compute-0 ceph-mon[191910]: pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:40 compute-0 podman[434393]: 2025-10-02 19:54:40.671585515 +0000 UTC m=+0.092045708 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:54:40 compute-0 podman[434392]: 2025-10-02 19:54:40.703566345 +0000 UTC m=+0.120261988 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:54:40 compute-0 podman[434394]: 2025-10-02 19:54:40.709280967 +0000 UTC m=+0.120572946 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 19:54:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:42 compute-0 ceph-mon[191910]: pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:43 compute-0 nova_compute[355794]: 2025-10-02 19:54:43.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:43 compute-0 nova_compute[355794]: 2025-10-02 19:54:43.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:44 compute-0 ceph-mon[191910]: pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:44 compute-0 podman[434456]: 2025-10-02 19:54:44.720972597 +0000 UTC m=+0.137306942 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:54:44 compute-0 podman[434457]: 2025-10-02 19:54:44.73726718 +0000 UTC m=+0.150211935 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:54:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:46 compute-0 ceph-mon[191910]: pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:48 compute-0 nova_compute[355794]: 2025-10-02 19:54:48.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:48 compute-0 ceph-mon[191910]: pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:48 compute-0 nova_compute[355794]: 2025-10-02 19:54:48.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:49 compute-0 nova_compute[355794]: 2025-10-02 19:54:49.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:49 compute-0 nova_compute[355794]: 2025-10-02 19:54:49.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
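[editor's note] The "skipping" line above is the periodic task short-circuiting on its config gate: nova's reclaim_instance_interval defaults to 0, so soft-deleted instances are never reclaimed unless the operator raises it. A sketch of the gating logic with oslo.config (the option name matches the log; the surrounding code is illustrative):

```python
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0,
                               help="<= 0 disables reclaiming of soft-deleted instances")])

if CONF.reclaim_instance_interval <= 0:
    print("CONF.reclaim_instance_interval <= 0, skipping...")  # as logged above
```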
Oct 02 19:54:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:50 compute-0 ceph-mon[191910]: pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:51 compute-0 nova_compute[355794]: 2025-10-02 19:54:51.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:52 compute-0 ceph-mon[191910]: pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.995 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.996 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:54:53 compute-0 nova_compute[355794]: 2025-10-02 19:54:53.996 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:54:54 compute-0 ceph-mon[191910]: pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.516 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.535 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.536 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
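[editor's note] The network_info blob logged during the cache heal above carries one OVN-bound VIF with a fixed IP and an attached floating IP. A sketch that condenses the logged structure by hand (only the fields used here) and pulls both addresses out of it:

```python
# Condensed from the instance's network_info logged at 19:54:55 above.
vif = {
    "id": "c759e48d-48de-4316-a1e4-9c04eb965fd0",
    "address": "fa:16:3e:10:ab:29",
    "network": {"subnets": [{"ips": [{
        "address": "192.168.0.227",
        "floating_ips": [{"address": "192.168.122.174"}],
    }]}]},
}
fixed = [ip["address"] for s in vif["network"]["subnets"] for ip in s["ips"]]
floating = [f["address"] for s in vif["network"]["subnets"]
            for ip in s["ips"] for f in ip.get("floating_ips", [])]
print(fixed, floating)  # ['192.168.0.227'] ['192.168.122.174']
```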
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.537 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.538 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.575 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.576 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.576 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
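The Acquiring/acquired/released triplet above is oslo.concurrency's fair in-process lock. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed (the decorated function body is illustrative, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)
    def clean_compute_node_cache():
        # Work done while holding the "compute_resources" lock; the
        # acquired/released DEBUG lines above bracket exactly this section.
        pass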
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.576 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:54:55 compute-0 nova_compute[355794]: 2025-10-02 19:54:55.577 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:55 compute-0 podman[434502]: 2025-10-02 19:54:55.755018344 +0000 UTC m=+0.162772249 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:54:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:54:56 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288836846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.131 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
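Here the resource tracker shells out to the ceph CLI and parses the JSON reply to learn storage capacity. A minimal sketch of the same probe, assuming the standard "ceph df --format=json" field names and omitting error handling:

    import json
    import subprocess

    # Same command as the CMD line above, run with the same client identity.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('free bytes:', stats['total_avail_bytes'])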
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.313 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.314 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.314 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.325 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.325 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.326 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.335 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.335 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.336 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.348 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.348 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 nova_compute[355794]: 2025-10-02 19:54:56.349 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:54:56 compute-0 ceph-mon[191910]: pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:56 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/288836846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:54:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.071 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.074 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3235MB free_disk=59.85567855834961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.074 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.075 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.271 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.272 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.272 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.273 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.274 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.274 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.313 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.340 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.341 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
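The inventory being written into the ProviderTree encodes schedulable capacity as (total - reserved) * allocation_ratio per resource class, so the numbers above amount to 32 VCPUs, 7167 MB of RAM and 52.2 GB of disk. A worked sketch of that arithmetic (illustrative helper, not nova code):

    # Placement-style capacity: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2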
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.364 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.397 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:54:57 compute-0 nova_compute[355794]: 2025-10-02 19:54:57.505 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:54:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/389665324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
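On the monitor side the same df request arrives as a mon_command carrying a JSON prefix. A minimal sketch of issuing that command directly through the rados Python binding, assuming python3-rados is available and reusing the client name from the audit line above:

    import json
    import rados

    # Connects on enter, shuts down on exit; cluster config and client
    # name match the ones visible in the audit log.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        if ret == 0:
            print(json.loads(outbuf)['stats']['total_avail_bytes'])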
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.008 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.018 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.038 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.041 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.041 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.079 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.080 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.082 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:58 compute-0 ceph-mon[191910]: pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/389665324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:58 compute-0 nova_compute[355794]: 2025-10-02 19:54:58.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:54:59 compute-0 podman[157186]: time="2025-10-02T19:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:54:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:54:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
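Those GET lines are the libpod REST API answering over podman's unix socket (the same socket the exporter config further down mounts as CONTAINER_HOST). A minimal sketch of querying the containers endpoint from Python over that socket, standard library only:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')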
Oct 02 19:55:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:00 compute-0 ceph-mon[191910]: pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:00 compute-0 sudo[434566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:00 compute-0 sudo[434566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:00 compute-0 sudo[434566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:00 compute-0 sudo[434591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:55:00 compute-0 sudo[434591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:00 compute-0 sudo[434591]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:00 compute-0 sudo[434616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:00 compute-0 sudo[434616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:00 compute-0 sudo[434616]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:00 compute-0 sudo[434641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:55:00 compute-0 sudo[434641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:01 compute-0 openstack_network_exporter[372736]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:01 compute-0 openstack_network_exporter[372736]: ERROR   19:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:55:01 compute-0 openstack_network_exporter[372736]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:01 compute-0 openstack_network_exporter[372736]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:55:01 compute-0 openstack_network_exporter[372736]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:55:01 compute-0 sudo[434641]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:01 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3d3acf59-059e-49f6-94fd-a64361367f1f does not exist
Oct 02 19:55:01 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b0781815-1cef-4d16-82e1-807578aa6965 does not exist
Oct 02 19:55:01 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev df9a79af-fc63-4a19-a320-54737345d01b does not exist
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:55:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:55:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:55:01 compute-0 sudo[434696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:01 compute-0 sudo[434696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:01 compute-0 sudo[434696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:01 compute-0 sudo[434721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:55:01 compute-0 sudo[434721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:01 compute-0 sudo[434721]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:02 compute-0 sudo[434746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:02 compute-0 sudo[434746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:02 compute-0 sudo[434746]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:02 compute-0 sudo[434771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:55:02 compute-0 sudo[434771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:02 compute-0 ceph-mon[191910]: pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:55:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.738767885 +0000 UTC m=+0.093677652 container create 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.699151892 +0000 UTC m=+0.054061669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:02 compute-0 systemd[1]: Started libpod-conmon-303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c.scope.
Oct 02 19:55:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.864449568 +0000 UTC m=+0.219359365 container init 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.876346604 +0000 UTC m=+0.231256381 container start 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:55:02 compute-0 cranky_villani[434853]: 167 167
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.884675696 +0000 UTC m=+0.239585463 container attach 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:55:02 compute-0 systemd[1]: libpod-303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c.scope: Deactivated successfully.
Oct 02 19:55:02 compute-0 podman[434835]: 2025-10-02 19:55:02.886656598 +0000 UTC m=+0.241566365 container died 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 19:55:02 compute-0 podman[434852]: 2025-10-02 19:55:02.91829753 +0000 UTC m=+0.111466396 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-37ed452c49e13ff396fe8499d4ff8b106189f8f11a4e30a14eb6b524f8e6ea43-merged.mount: Deactivated successfully.
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:03 compute-0 podman[434835]: 2025-10-02 19:55:03.159971556 +0000 UTC m=+0.514881343 container remove 303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 19:55:03 compute-0 systemd[1]: libpod-conmon-303dddeebcbecb259ae5126a9b902bb7e7ea7a04e72d769e9b7b64a8260d4c5c.scope: Deactivated successfully.
Oct 02 19:55:03 compute-0 podman[434849]: 2025-10-02 19:55:03.274840911 +0000 UTC m=+0.467594785 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:55:03 compute-0 nova_compute[355794]: 2025-10-02 19:55:03.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:03 compute-0 podman[434914]: 2025-10-02 19:55:03.471179292 +0000 UTC m=+0.078136549 container create 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 19:55:03 compute-0 podman[434914]: 2025-10-02 19:55:03.428861367 +0000 UTC m=+0.035818654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:03 compute-0 systemd[1]: Started libpod-conmon-613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c.scope.
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:55:03
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta']
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:55:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:03 compute-0 podman[434914]: 2025-10-02 19:55:03.715756116 +0000 UTC m=+0.322713463 container init 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:55:03 compute-0 podman[434914]: 2025-10-02 19:55:03.731736801 +0000 UTC m=+0.338694088 container start 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:55:03 compute-0 podman[434914]: 2025-10-02 19:55:03.74636597 +0000 UTC m=+0.353323257 container attach 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:55:03 compute-0 nova_compute[355794]: 2025-10-02 19:55:03.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:55:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.298 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.298 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
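The two DEBUG lines above note that the [pollsters] source has more pollsters than worker threads, so polling is serialized through a single thread. A minimal sketch of that executor pattern (illustrative names, not ceilometer code):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for one pollster's collection cycle.
        return f'{name}: polled'

    pollsters = ['disk.read.requests', 'disk.write.requests', 'cpu']
    # 1 worker, 3 pollsters: the excess simply queues, which is what the
    # "bigger than the number of worker threads" message warns about.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)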
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
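The run of "Registering pollster" lines above is the agent loading every compute meter as a stevedore extension and binding each one to the same shared ThreadPoolExecutor (note the identical executor address 0x7f343a7c7fe0 on every line), with empty cache, history, and discovery-cache dicts on this first cycle. A minimal sketch of that loading pattern, with the 'ceilometer.poll.compute' entry-point namespace and the worker count assumed for illustration (the real registration is register_pollster_execution in ceilometer/polling/manager.py):

# Hedged sketch: pollsters are setuptools entry points discovered via
# stevedore and dispatched on one shared thread pool. Namespace and
# worker count here are assumptions, not ceilometer's exact code.
from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

manager = extension.ExtensionManager(
    namespace='ceilometer.poll.compute',   # assumed entry-point group
    invoke_on_load=False,
)
executor = ThreadPoolExecutor(max_workers=4)

def run_pollster(ext):
    # ext.name is the meter name, e.g. 'disk.device.read.requests';
    # the real manager also threads caches and discovery state through.
    print('polling', ext.name)

futures = [executor.submit(run_pollster, ext) for ext in manager.extensions]
for f in futures:
    f.result()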
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.313 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.319 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'name': 'vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.324 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '58f8959a-5f7e-44a5-9dca-65be0506a4c1', 'name': 'vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.329 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'name': 'vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
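The four "instance data" dicts above come from libvirt discovery (discover_libvirt_polling) and are what every pollster below iterates over. A trimmed copy of one payload, showing the fields the later samples rely on; test_0 carries empty metadata, while the three vn-* VNF members share a metering.server_group tag:

# One discovery payload from the log, trimmed for brevity.
instance = {
    'id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949',
    'name': 'vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',
    'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
               'disk': 1, 'ephemeral': 1, 'swap': 0},
    'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003',
    'OS-EXT-STS:vm_state': 'running',
    'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'},
}

resource_id = instance['id']                    # samples key on the Nova UUID
root_gb = instance['flavor']['disk']            # compare disk.root.size below
ephemeral_gb = instance['flavor']['ephemeral']  # compare disk.ephemeral.size
group = instance['metadata'].get('metering.server_group')
print(resource_id, root_gb, ephemeral_gb, group)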
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
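The coordination check above finds no hashrings because this source is not configured for coordinated polling, so this single agent polls everything it discovers. With several agents joined to one partitioned group, a hashring would hand each instance to exactly one poller; a pure-Python stand-in for that idea (rendezvous hashing, not tooz's actual implementation):

# Illustration only: stable assignment of resources to agents so that
# each instance UUID is polled by exactly one group member.
import hashlib

def owner(resource_id, members):
    # Rendezvous hashing: the member with the highest hash wins, and
    # assignments barely move when members join or leave.
    def score(member):
        return hashlib.sha256(f'{member}:{resource_id}'.encode()).hexdigest()
    return max(members, key=score)

agents = ['compute-0', 'compute-1']
print(owner('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', agents))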
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:55:04.330889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
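Note the two PIDs interleaved here: worker 14 emits the per-pollster heartbeat and a separate status process, 12, records it with its own timestamp a moment later. A queue-based illustration of that producer/recorder split; the naming and IPC are assumptions, the real mechanism lives in ceilometer/polling/manager.py:

# Hedged sketch of the pattern visible in the PIDs above: one process
# announces heartbeats, another persists them with a timestamp.
import multiprocessing as mp
from datetime import datetime, timezone

def status_process(q):
    while True:
        name = q.get()
        if name is None:
            break
        ts = datetime.now(timezone.utc).isoformat()
        print(f'Updated heartbeat for {name} ({ts})')

if __name__ == '__main__':
    q = mp.Queue()
    recorder = mp.Process(target=status_process, args=(q,))
    recorder.start()
    for meter in ('disk.device.read.requests', 'disk.device.usage'):
        print(f'Pollster heartbeat update: {meter}')
        q.put(meter)
    q.put(None)
    recorder.join()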
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.393 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.394 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.395 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.460 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.460 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.461 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceph-mon[191910]: pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.530 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.531 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.531 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.579 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.580 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.580 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
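Each instance logs three read-request samples (for example 840/173/109 for d4e04444), one per attached block device, and the values are cumulative since boot. A hedged sketch of the underlying libvirt call, assuming the libvirt-python bindings, a local qemu:///system connection, and a 'vda' device name:

# Hedged sketch: cumulative block I/O counters as the disk.device.*
# pollsters ultimately obtain them. Requires access to the local
# hypervisor socket; the device name is an assumption.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

# blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs); the
# 840 read requests above correspond to rd_req for one device.
rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
print(rd_req, wr_req)
conn.close()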
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:55:04.581903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.605 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.629 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.629 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.630 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.659 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.660 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.661 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.686 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.687 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.688 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
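disk.device.usage reports each device's virtual size: 1073741824 bytes is exactly the flavor's 1 GB root and 1 GB ephemeral disk, and the small third device (485376 or 583680 bytes) is most plausibly the config drive. In libvirt terms this is blockInfo; a minimal sketch under the same connection and device-name assumptions as above:

# Hedged sketch: blockInfo returns (capacity, allocation, physical);
# the disk.device.usage lines above match the capacity field.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

capacity, allocation, physical = dom.blockInfo('vda')
print(capacity)   # expected 1073741824, i.e. 1 GiB, per the log
conn.close()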
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.691 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:55:04.691115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.692 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.694 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.695 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.696 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.696 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.697 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.698 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.699 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.700 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.700 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.701 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.702 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.708 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:55:04.707939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.709 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.710 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.711 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 9610815435 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.712 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 22315613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.713 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.714 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 6692352882 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.715 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 35191695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.715 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.716 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 6278316166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.716 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 36317650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.717 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
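disk.device.write.latency is a cumulative nanosecond counter, so it only becomes a latency once divided by the matching request count. A worked check using the numbers logged for instance d4e04444 (7285327854 ns of accumulated write time against the 231 write requests reported later this cycle):

# Worked example from the log's own values: cumulative write time
# divided by cumulative write requests gives mean latency per request.
total_write_ns = 7_285_327_854   # disk.device.write.latency, first device
write_requests = 231             # disk.device.write.requests, first device

mean_ms = total_write_ns / write_requests / 1_000_000
print(f'{mean_ms:.1f} ms per write')   # roughly 31.5 ms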
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.718 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:55:04.719545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.756 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.781 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.805 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.838 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
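power.state volume 1 for all four guests is the libvirt domain state code, where 1 means VIR_DOMAIN_RUNNING, consistent with the 'running' vm_state in the discovery payloads. A minimal check with the libvirt bindings, same connection assumptions as above:

# Hedged sketch: the power.state sample corresponds to the first element
# of dom.state(), and 1 == libvirt.VIR_DOMAIN_RUNNING.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

state, reason = dom.state()
print(state == libvirt.VIR_DOMAIN_RUNNING)   # True for the guests above
conn.close()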
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.841 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.842 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:55:04.841992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.843 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.843 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.844 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.845 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.845 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.846 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.846 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.847 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.848 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.848 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.849 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.851 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:55:04.852521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.859 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.865 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.871 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.878 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
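The *.delta meters are differences between successive cumulative readings, which is why idle interfaces report 0 here even though their lifetime counters are nonzero. A sketch of that two-reading pattern over libvirt's interfaceStats, with the tap device name and polling interval assumed for illustration:

# Hedged sketch: network.incoming.bytes.delta = current rx_bytes minus
# the value cached from the previous polling cycle.
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
vif = 'tap0'   # assumed; the real agent reads the device from domain XML

# interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
#                         tx_bytes, tx_packets, tx_errs, tx_drop).
prev_rx = dom.interfaceStats(vif)[0]
time.sleep(30)   # stand-in for one polling interval
delta = dom.interfaceStats(vif)[0] - prev_rx
print('network.incoming.bytes.delta =', delta)
conn.close()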
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.879 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.880 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.881 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.882 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:55:04.880134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.882 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.883 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
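Unlike the device meters, disk.ephemeral.size and disk.root.size emit no _stats_to_sample lines: they appear to be derived from the discovered flavor rather than from libvirt counters. A trivial illustration using the m1.small flavor from the discovery payloads above:

# Illustration only: flavor-derived sizes, reported in GB, that line up
# with the 1073741824-byte devices seen under disk.device.usage.
GB = 1024 ** 3
flavor = {'disk': 1, 'ephemeral': 1, 'swap': 0}   # m1.small, from the log

disk_root_size = flavor['disk']
disk_ephemeral_size = flavor['ephemeral']
print(disk_root_size * GB, disk_ephemeral_size * GB)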
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.885 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.886 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.886 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.886 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.887 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.887 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.889 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.890 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.890 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.891 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.892 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:55:04.883451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:55:04.885993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:55:04.889897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.895 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:55:04.894334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.895 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.896 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.897 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
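A ".delta" meter is simply the difference between two successive readings of its cumulative counterpart, which is polled separately later in this same cycle (network.outgoing.bytes, below, reads 7440 for instance 4cdbea11-...). A small arithmetic illustration; the previous reading of 7370 is made up so the delta matches the 70 logged above, and this is not ceilometer code:

```python
# A ".delta" meter relates to its cumulative counterpart as a difference
# of two successive readings. 7440 is 4cdbea11's network.outgoing.bytes
# from this cycle; 7370 is a hypothetical previous reading.
previous = {"4cdbea11-17d2-4466-a5f5-9a3d25e25d8a": 7370}   # made-up t0
current  = {"4cdbea11-17d2-4466-a5f5-9a3d25e25d8a": 7440}   # cumulative at t1

for instance, now in current.items():
    delta = now - previous.get(instance, now)
    print(instance, "network.outgoing.bytes.delta =", delta)  # -> 70
```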
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.900 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.900 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.901 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.901 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
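The _stats_to_sample DEBUG lines all share one shape, "<instance-uuid>/<meter> volume: <n>", which makes a captured journal easy to spot-check. A small parser, assuming the oslo.log format shown above (the file name in the usage comment is hypothetical):

```python
import re

# Extracts (instance_uuid, meter, volume) from the "_stats_to_sample"
# DEBUG lines above. Assumes the oslo.log format of this journal; adjust
# the pattern if your logging_context_format_string differs.
SAMPLE_RE = re.compile(
    r"DEBUG ceilometer\.compute\.pollsters \[-\] "
    r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
)

def iter_samples(lines):
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            yield m["uuid"], m["meter"], float(m["volume"])

# Usage (hypothetical file name):
# with open("compute-0-journal.log") as f:
#     for uuid, meter, volume in iter_samples(f):
#         print(uuid, meter, volume)
```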
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:55:04.899888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.903 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.904 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:55:04.903744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.904 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.904 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.905 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.905 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.905 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.905 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.906 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.906 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.906 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.907 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
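Note that each instance produced three disk.device.read.bytes samples, one per attached block device; the DEBUG lines do not print the device name, but in ceilometer the resource_id of a per-device meter also encodes the device, which is why one guest legitimately emits several samples per poll. A grouping sketch using the parsed tuples from the snippet above, with the values logged for instance d4e04444-...:

```python
from collections import defaultdict

# Group per-device readings by instance. Device names are absent from the
# DEBUG lines, so values are kept in poll order only.
def group_by_instance(samples):
    grouped = defaultdict(list)
    for uuid, meter, volume in samples:
        if meter == "disk.device.read.bytes":
            grouped[uuid].append(volume)
    return grouped

# Values for d4e04444-... as logged above: three block devices per guest.
rows = [("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", "disk.device.read.bytes", v)
        for v in (23308800, 3227648, 274786)]
print(group_by_instance(rows))
```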
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.908 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.909 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.909 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.bytes volume: 7440 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
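Every cycle logs "The current hashrings are the following [None]" because no polling source on this node requires coordination. When several agents do share a source, ceilometer partitions the resource set with a hash ring (via tooz) so each resource is polled by exactly one agent. A deliberately simplified modulo partitioning, not tooz's HashRing API and not a true consistent ring, just to show the idea; the agent names are hypothetical:

```python
import hashlib

# What a configured hashring buys: each agent claims the resources that
# hash to it, so a shared polling source is split without central locking.
# Simplified modulo scheme for illustration only.
AGENTS = ["compute-0", "compute-1"]  # hypothetical members

def owner(resource_id: str) -> str:
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return AGENTS[int(digest, 16) % len(AGENTS)]

for uuid in ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77",
             "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a"):
    print(uuid, "->", owner(uuid))  # only the owning agent polls it
```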
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.910 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:55:04.908318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.911 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:55:04.910897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.911 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.911 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.912 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.912 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.912 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.912 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.913 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.913 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.913 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.913 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.914 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
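The "Skip pollster ..., no new resources found this cycle" line is the short-circuit path: a pollster whose discovery yields nothing for this cycle is skipped before any sampling. One plausible reading of the bookkeeping, heavily hedged since the upstream logic (especially for the deprecated *.rate meters) may differ:

```python
# Hedged sketch: discovery results cached per polling task, pollsters
# with an empty resource list skipped outright.
def run_task(pollsters, discover):
    cache = {}
    for pollster in pollsters:
        method = pollster["discovery"]            # e.g. "local_instances"
        if method not in cache:
            cache[method] = discover(method)
        if not cache[method]:
            print(f"Skip pollster {pollster['name']}, "
                  "no new resources found this cycle")
            continue
        # ... sample cache[method] as in the cycles above ...
```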
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.915 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.916 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.916 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.916 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/memory.usage volume: 48.96875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
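The memory.usage volumes (~49 for each of these small guests) are megabytes of memory in use as seen by the hypervisor. For libvirt-backed instances the value is derived from the domain's balloon statistics, roughly available minus unused; a rough sketch of that core idea using the python libvirt bindings (ceilometer's inspector has more fallbacks, and the UUID below is just one from this log):

```python
import libvirt  # python3-libvirt

# Where memory.usage comes from on a libvirt host, approximately:
# (available - unused) from the balloon driver, converted KiB -> MiB.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77")
stats = dom.memoryStats()                      # values are in KiB
if "available" in stats and "unused" in stats:
    usage_mib = (stats["available"] - stats["unused"]) / 1024.0
else:
    usage_mib = stats.get("rss", 0) / 1024.0   # fallback when balloon stats are absent
print("memory.usage ~", usage_mib, "MiB")
```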
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.917 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.918 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.918 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.918 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.incoming.bytes volume: 8664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:55:04.915644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.920 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.920 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.920 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.921 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:55:04.917817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:55:04.920201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.922 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.923 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.923 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.923 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.924 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.924 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.924 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.925 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.925 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.925 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.926 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.926 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
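Across the allocation and capacity cycles, the first two devices of every instance read exactly 1073741824 bytes (1 GiB), with a third device of only a few hundred KB (485376 or 583680 bytes, plausibly a config drive). On libvirt these numbers map onto the (capacity, allocation, physical) triple returned by virDomainGetBlockInfo; a hedged sketch with hypothetical device names, since the DEBUG lines do not show them:

```python
import libvirt

# capacity / allocation of a guest disk as libvirt reports them.
# dom.blockInfo(dev) returns [capacity, allocation, physical] in bytes;
# 1073741824 bytes == 1 GiB, matching the log above.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("b88114e8-b15d-4a78-ac15-3dd7ee30b949")
for dev in ("vda", "vdb"):                     # hypothetical device names
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, "capacity:", capacity, "allocation:", allocation)
```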
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:55:04.922692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.928 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.928 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.928 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.929 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.930 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.930 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.931 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.931 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:55:04.928030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:55:04.930660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.933 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 42620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.933 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/cpu volume: 36940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.934 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/cpu volume: 38530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.934 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/cpu volume: 299690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:55:04.933181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
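The cpu meter is cumulative guest CPU time in nanoseconds (299690000000 ns is about 299.7 s of CPU for instance 4cdbea11-...), so a utilisation percentage has to be derived from two samples. A small worked example; the earlier reading and the vCPU count are hypothetical, the later reading is from the log:

```python
# Deriving a CPU-utilisation percentage from two cumulative "cpu" samples
# (nanoseconds of guest CPU time over a wall-clock interval).
NS_PER_S = 1e9

def cpu_util(cpu_ns_t0, cpu_ns_t1, seconds, vcpus=1):
    return 100.0 * (cpu_ns_t1 - cpu_ns_t0) / (seconds * vcpus * NS_PER_S)

t0 = 299_090_000_000          # hypothetical reading one minute earlier
t1 = 299_690_000_000          # from the log: 4cdbea11-.../cpu volume
print(cpu_util(t0, t1, seconds=60))   # -> 1.0, i.e. ~1% busy that minute
```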
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.936 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.937 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.937 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 1897675157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.937 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 270926831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.937 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 180472901 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.938 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 1997650221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.938 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 337600166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.938 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 232324009 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.939 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 1764876744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.939 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 323566119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.939 14 DEBUG ceilometer.compute.pollsters [-] 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a/disk.device.read.latency volume: 193343486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.940 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:55:04.936313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.941 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.941 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.941 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.941 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.942 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.942 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.942 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.942 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.942 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:55:04.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:05 compute-0 magical_blackburn[434929]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:55:05 compute-0 magical_blackburn[434929]: --> relative data size: 1.0
Oct 02 19:55:05 compute-0 magical_blackburn[434929]: --> All data devices are unavailable
Oct 02 19:55:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:05 compute-0 systemd[1]: libpod-613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c.scope: Deactivated successfully.
Oct 02 19:55:05 compute-0 systemd[1]: libpod-613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c.scope: Consumed 1.233s CPU time.
Oct 02 19:55:05 compute-0 conmon[434929]: conmon 613ae1d2bf67c098f8d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c.scope/container/memory.events
Oct 02 19:55:05 compute-0 podman[434914]: 2025-10-02 19:55:05.087771291 +0000 UTC m=+1.694728548 container died 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:55:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bdfa78e9e5dd6cfb09041a408fef03fc3014d4ab25fa03235aff6ab60f3aeac-merged.mount: Deactivated successfully.
Oct 02 19:55:05 compute-0 podman[434914]: 2025-10-02 19:55:05.526993571 +0000 UTC m=+2.133950848 container remove 613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 19:55:05 compute-0 systemd[1]: libpod-conmon-613ae1d2bf67c098f8d3e2adf591642836cd986b78b7eab125f4212b0ba5118c.scope: Deactivated successfully.
Oct 02 19:55:05 compute-0 sudo[434771]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:05 compute-0 ceph-mon[191910]: pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:05 compute-0 sudo[434971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:05 compute-0 sudo[434971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:05 compute-0 sudo[434971]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:05 compute-0 sudo[434996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:55:05 compute-0 sudo[434996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:05 compute-0 sudo[434996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:06 compute-0 sudo[435021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:06 compute-0 sudo[435021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:06 compute-0 sudo[435021]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:06 compute-0 sudo[435046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:55:06 compute-0 sudo[435046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:07 compute-0 podman[435111]: 2025-10-02 19:55:06.91236254 +0000 UTC m=+0.056446052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:07 compute-0 podman[435111]: 2025-10-02 19:55:07.344025829 +0000 UTC m=+0.488109341 container create 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:55:07 compute-0 systemd[1]: Started libpod-conmon-9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c.scope.
Oct 02 19:55:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:07 compute-0 podman[435111]: 2025-10-02 19:55:07.593121683 +0000 UTC m=+0.737205245 container init 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:55:07 compute-0 podman[435111]: 2025-10-02 19:55:07.613165926 +0000 UTC m=+0.757249448 container start 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 19:55:07 compute-0 podman[435111]: 2025-10-02 19:55:07.62573653 +0000 UTC m=+0.769820012 container attach 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:55:07 compute-0 sharp_raman[435127]: 167 167
Oct 02 19:55:07 compute-0 systemd[1]: libpod-9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c.scope: Deactivated successfully.
Oct 02 19:55:07 compute-0 conmon[435127]: conmon 9d5f2586a52144eba334 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c.scope/container/memory.events
Oct 02 19:55:07 compute-0 podman[435132]: 2025-10-02 19:55:07.714208142 +0000 UTC m=+0.053024191 container died 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 19:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fda47b3ebd2e6798b0237675c0114462db392594e480135d155cddbff26af23-merged.mount: Deactivated successfully.
Oct 02 19:55:07 compute-0 podman[435132]: 2025-10-02 19:55:07.825886382 +0000 UTC m=+0.164702361 container remove 9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:55:07 compute-0 systemd[1]: libpod-conmon-9d5f2586a52144eba334001f15717bcd396e0df9ac7974df417665dee2463d1c.scope: Deactivated successfully.
Oct 02 19:55:07 compute-0 podman[435147]: 2025-10-02 19:55:07.95441766 +0000 UTC m=+0.130856311 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, version=9.4, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=)
Oct 02 19:55:07 compute-0 podman[435146]: 2025-10-02 19:55:07.97771458 +0000 UTC m=+0.159358349 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct 02 19:55:08 compute-0 podman[435193]: 2025-10-02 19:55:08.184659873 +0000 UTC m=+0.134887978 container create ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:55:08 compute-0 podman[435193]: 2025-10-02 19:55:08.125601042 +0000 UTC m=+0.075829207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:08 compute-0 systemd[1]: Started libpod-conmon-ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b.scope.
Oct 02 19:55:08 compute-0 nova_compute[355794]: 2025-10-02 19:55:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e12532240695eed43856c0d26e274747d95e3f3983da324930b7cc02fd08a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e12532240695eed43856c0d26e274747d95e3f3983da324930b7cc02fd08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e12532240695eed43856c0d26e274747d95e3f3983da324930b7cc02fd08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e12532240695eed43856c0d26e274747d95e3f3983da324930b7cc02fd08a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:08 compute-0 ceph-mon[191910]: pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:08 compute-0 podman[435193]: 2025-10-02 19:55:08.362301207 +0000 UTC m=+0.312529282 container init ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:55:08 compute-0 podman[435193]: 2025-10-02 19:55:08.393107276 +0000 UTC m=+0.343335351 container start ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 19:55:08 compute-0 podman[435193]: 2025-10-02 19:55:08.404006106 +0000 UTC m=+0.354234211 container attach ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:55:08 compute-0 nova_compute[355794]: 2025-10-02 19:55:08.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:09 compute-0 suspicious_jones[435210]: {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     "0": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "devices": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "/dev/loop3"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             ],
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_name": "ceph_lv0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_size": "21470642176",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "name": "ceph_lv0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "tags": {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_name": "ceph",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.crush_device_class": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.encrypted": "0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_id": "0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.vdo": "0"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             },
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "vg_name": "ceph_vg0"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         }
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     ],
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     "1": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "devices": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "/dev/loop4"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             ],
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_name": "ceph_lv1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_size": "21470642176",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "name": "ceph_lv1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "tags": {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_name": "ceph",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.crush_device_class": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.encrypted": "0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_id": "1",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.vdo": "0"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             },
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "vg_name": "ceph_vg1"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         }
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     ],
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     "2": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "devices": [
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "/dev/loop5"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             ],
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_name": "ceph_lv2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_size": "21470642176",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "name": "ceph_lv2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "tags": {
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.cluster_name": "ceph",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.crush_device_class": "",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.encrypted": "0",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osd_id": "2",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:                 "ceph.vdo": "0"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             },
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "type": "block",
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:             "vg_name": "ceph_vg2"
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:         }
Oct 02 19:55:09 compute-0 suspicious_jones[435210]:     ]
Oct 02 19:55:09 compute-0 suspicious_jones[435210]: }
Oct 02 19:55:09 compute-0 systemd[1]: libpod-ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b.scope: Deactivated successfully.
Oct 02 19:55:09 compute-0 podman[435219]: 2025-10-02 19:55:09.451195662 +0000 UTC m=+0.062297048 container died ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-272e12532240695eed43856c0d26e274747d95e3f3983da324930b7cc02fd08a-merged.mount: Deactivated successfully.
Oct 02 19:55:09 compute-0 podman[435219]: 2025-10-02 19:55:09.564445843 +0000 UTC m=+0.175547209 container remove ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jones, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 19:55:09 compute-0 systemd[1]: libpod-conmon-ce0e62f6ecb50299e50a5a0018c8465ac084b29166eadeccf179c4ffbcf06c7b.scope: Deactivated successfully.
Oct 02 19:55:09 compute-0 sudo[435046]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:09 compute-0 sudo[435234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:09 compute-0 sudo[435234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:09 compute-0 sudo[435234]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:09 compute-0 sudo[435259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:55:09 compute-0 sudo[435259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:09 compute-0 sudo[435259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:10 compute-0 sudo[435284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:10 compute-0 sudo[435284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:10 compute-0 sudo[435284]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:10 compute-0 sudo[435309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:55:10 compute-0 sudo[435309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:10 compute-0 ceph-mon[191910]: pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:10 compute-0 podman[435370]: 2025-10-02 19:55:10.809931564 +0000 UTC m=+0.098923662 container create a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:55:10 compute-0 podman[435370]: 2025-10-02 19:55:10.7759661 +0000 UTC m=+0.064958288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:10 compute-0 systemd[1]: Started libpod-conmon-a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482.scope.
Oct 02 19:55:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:10 compute-0 podman[435370]: 2025-10-02 19:55:10.977057998 +0000 UTC m=+0.266050136 container init a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 19:55:10 compute-0 podman[435370]: 2025-10-02 19:55:10.989212651 +0000 UTC m=+0.278204739 container start a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:55:10 compute-0 eloquent_leakey[435409]: 167 167
Oct 02 19:55:10 compute-0 systemd[1]: libpod-a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482.scope: Deactivated successfully.
Oct 02 19:55:11 compute-0 podman[435370]: 2025-10-02 19:55:10.999364681 +0000 UTC m=+0.288356769 container attach a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:55:11 compute-0 podman[435370]: 2025-10-02 19:55:11.001035775 +0000 UTC m=+0.290027893 container died a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 19:55:11 compute-0 podman[435384]: 2025-10-02 19:55:11.03466061 +0000 UTC m=+0.149032624 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true)
Oct 02 19:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c9e2b4051a4c4023029dad0b1c34fde6cb040e8bca0940c72c98b945d8ca028-merged.mount: Deactivated successfully.
Oct 02 19:55:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:11 compute-0 podman[435370]: 2025-10-02 19:55:11.073955394 +0000 UTC m=+0.362947492 container remove a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:55:11 compute-0 podman[435383]: 2025-10-02 19:55:11.083483068 +0000 UTC m=+0.198589252 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:55:11 compute-0 podman[435385]: 2025-10-02 19:55:11.097071439 +0000 UTC m=+0.206415250 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:55:11 compute-0 systemd[1]: libpod-conmon-a4b952eb24a0e3eb9bf19b114d6892a3160791c91c79d921f8286a8409025482.scope: Deactivated successfully.
Oct 02 19:55:11 compute-0 podman[435468]: 2025-10-02 19:55:11.358615084 +0000 UTC m=+0.112445531 container create 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:55:11 compute-0 podman[435468]: 2025-10-02 19:55:11.30467963 +0000 UTC m=+0.058510047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:55:11 compute-0 systemd[1]: Started libpod-conmon-28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316.scope.
Oct 02 19:55:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223664c728c54111b471bdd11baaae3f9018130356c4fb1bca2c6164cacea518/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223664c728c54111b471bdd11baaae3f9018130356c4fb1bca2c6164cacea518/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223664c728c54111b471bdd11baaae3f9018130356c4fb1bca2c6164cacea518/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223664c728c54111b471bdd11baaae3f9018130356c4fb1bca2c6164cacea518/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:55:11 compute-0 podman[435468]: 2025-10-02 19:55:11.631229344 +0000 UTC m=+0.385059831 container init 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:55:11 compute-0 podman[435468]: 2025-10-02 19:55:11.649695435 +0000 UTC m=+0.403525872 container start 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:55:11 compute-0 podman[435468]: 2025-10-02 19:55:11.684503031 +0000 UTC m=+0.438333518 container attach 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:55:12 compute-0 ceph-mon[191910]: pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:12 compute-0 festive_dewdney[435484]: {
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_id": 1,
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "type": "bluestore"
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     },
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_id": 2,
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "type": "bluestore"
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     },
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_id": 0,
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:         "type": "bluestore"
Oct 02 19:55:12 compute-0 festive_dewdney[435484]:     }
Oct 02 19:55:12 compute-0 festive_dewdney[435484]: }
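[Annotation] The JSON block that festive_dewdney just printed is a ceph-volume-style inventory of the three BlueStore OSDs on this host, keyed by osd_uuid; the config-key set commands a few lines below show cephadm persisting this under mgr/cephadm/host.compute-0.devices.*. Pulling the osd_id-to-device mapping out of it is a plain dictionary walk; a minimal sketch, abridged to one OSD taken verbatim from the output above:

    import json

    # One entry from the inventory logged above.
    raw = '''{
      "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
        "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
        "type": "bluestore"
      }
    }'''
    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")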
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022104765945497166 of space, bias 1.0, pg target 0.663142978364915 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:55:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
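[Annotation] Each pg_autoscaler pair above logs a pool's share of raw capacity and the fractional PG target derived from it. The targets reproduce as share x bias x budget with a budget of 300, consistent with this host's 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference from the numbers, not a value read from this cluster's config); the fraction is then quantized to a power of two subject to per-pool minimums, which is presumably why 'cephfs.cephfs.meta' lands on 16, the usual pg_num_min for CephFS metadata pools. A back-of-the-envelope check against the logged values:

    # Reproduce the logged pg targets: share of space x bias x PG budget.
    # budget = 3 OSDs x mon_target_pg_per_osd (default 100) -- an assumption
    # that happens to match every autoscaler line above.
    budget = 3 * 100
    pools = [('.mgr', 7.185749983720779e-06, 1.0),
             ('vms', 0.0022104765945497166, 1.0),
             ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]
    for name, share, bias in pools:
        print(name, share * bias * budget)
    # ~0.00216, ~0.663, ~0.00061 -- the pg targets logged above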
Oct 02 19:55:12 compute-0 systemd[1]: libpod-28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316.scope: Deactivated successfully.
Oct 02 19:55:12 compute-0 systemd[1]: libpod-28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316.scope: Consumed 1.229s CPU time.
Oct 02 19:55:12 compute-0 podman[435517]: 2025-10-02 19:55:12.970274692 +0000 UTC m=+0.054030038 container died 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-223664c728c54111b471bdd11baaae3f9018130356c4fb1bca2c6164cacea518-merged.mount: Deactivated successfully.
Oct 02 19:55:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:13 compute-0 podman[435517]: 2025-10-02 19:55:13.20299704 +0000 UTC m=+0.286752376 container remove 28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:55:13 compute-0 systemd[1]: libpod-conmon-28c9d4678c088c548c5348aa2bbef3a26a936b518d5b293f11595e9a51ec3316.scope: Deactivated successfully.
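[Annotation] That was the complete life of a short-lived cephadm helper container: image pull and create at 19:55:11.3, init, start and attach within half a second, died and remove a bit over a second later, with systemd deactivating the matching libpod and conmon scopes. The same sequence can be watched as it happens; a sketch using podman's event stream (JSON field names follow podman's event format and may differ slightly between versions):

    import json
    import subprocess

    # Tail container lifecycle events (create/init/start/attach/died/remove),
    # mirroring the festive_dewdney sequence logged above.
    proc = subprocess.Popen(['podman', 'events', '--format', 'json'],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get('Time'), ev.get('Type'), ev.get('Status'), ev.get('Name'))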
Oct 02 19:55:13 compute-0 sudo[435309]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:55:13 compute-0 nova_compute[355794]: 2025-10-02 19:55:13.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:55:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 95010c3f-1230-4d27-b88f-4d4599df0687 does not exist
Oct 02 19:55:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9e00ec9d-8532-4884-a10d-ff3ad524f93a does not exist
Oct 02 19:55:13 compute-0 sudo[435532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:55:13 compute-0 sudo[435532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:13 compute-0 sudo[435532]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:13 compute-0 sudo[435557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:55:13 compute-0 sudo[435557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:55:13 compute-0 sudo[435557]: pam_unix(sudo:session): session closed for user root
Oct 02 19:55:13 compute-0 nova_compute[355794]: 2025-10-02 19:55:13.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:14 compute-0 ceph-mon[191910]: pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:55:14 compute-0 podman[435583]: 2025-10-02 19:55:14.905997398 +0000 UTC m=+0.123173477 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:55:14 compute-0 podman[435582]: 2025-10-02 19:55:14.914987627 +0000 UTC m=+0.130708347 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
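[Annotation] Both health_status entries come from podman's healthcheck timers running the test command each container mounts at /openstack/healthcheck; health_failing_streak=0 means no consecutive failures, so both containers stay "healthy". The node_exporter being probed publishes metrics on host port 9100, with TLS configured through node_exporter.yaml, so the plain-HTTP sketch below assumes an unsecured endpoint and may need HTTPS on this deployment:

    import urllib.request

    # Pull a few raw Prometheus metrics from the exporter on host port 9100.
    # Plain HTTP is an assumption: the config above points --web.config.file
    # at node_exporter.yaml, which can enforce TLS.
    with urllib.request.urlopen('http://localhost:9100/metrics', timeout=5) as r:
        for line in r.read().decode().splitlines()[:10]:
            print(line)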
Oct 02 19:55:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:16 compute-0 ceph-mon[191910]: pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:18 compute-0 nova_compute[355794]: 2025-10-02 19:55:18.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:18 compute-0 ceph-mon[191910]: pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:18 compute-0 nova_compute[355794]: 2025-10-02 19:55:18.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:19 compute-0 ceph-mon[191910]: pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:55:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533374728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:55:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:55:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533374728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:55:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2533374728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:55:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2533374728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
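[Annotation] The audit entries above are a capacity poll from the client.openstack user at 192.168.122.10: a "df" followed by "osd pool get-quota" on the volumes pool, the pattern of Cinder's periodic usage report. The identical payload can be sent with librados; a sketch, where the conffile path is the conventional default and the client name is taken from the entity= field in the log:

    import json

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # Same JSON payload as the mon_command dispatched above.
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])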
Oct 02 19:55:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:21 compute-0 ceph-mon[191910]: pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:23 compute-0 nova_compute[355794]: 2025-10-02 19:55:23.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:23 compute-0 nova_compute[355794]: 2025-10-02 19:55:23.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:24 compute-0 ceph-mon[191910]: pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:26 compute-0 ceph-mon[191910]: pgmap v1509: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:26 compute-0 podman[435626]: 2025-10-02 19:55:26.730125075 +0000 UTC m=+0.136460438 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 02 19:55:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:28 compute-0 ceph-mon[191910]: pgmap v1510: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:28 compute-0 nova_compute[355794]: 2025-10-02 19:55:28.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:28 compute-0 nova_compute[355794]: 2025-10-02 19:55:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:29 compute-0 podman[157186]: time="2025-10-02T19:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:55:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:55:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9053 "" "Go-http-client/1.1"
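[Annotation] podman[157186] here is the libpod REST API service answering a container list and a stats query from a local Go client (note the Go-http-client user agent), which is how the metrics stack samples container state on this node. The same endpoint is reachable over the unix socket; a sketch, with the rootful socket path being the conventional default rather than something this log confirms:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (the libpod API service)."""

        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    # Same query as the access-log line above.
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr['Names'], ctr['State'])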
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.961 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.962 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.963 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.963 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.964 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.967 2 INFO nova.compute.manager [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Terminating instance
Oct 02 19:55:29 compute-0 nova_compute[355794]: 2025-10-02 19:55:29.969 2 DEBUG nova.compute.manager [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
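[Annotation] The lockutils lines above show nova serializing the delete: one lock on the instance UUID held for the whole do_terminate_instance call, plus a brief "<uuid>-events" lock while queued external events are cleared. The "Acquiring ... acquired ... released" trio is plain oslo.concurrency; in miniature (lock name copied from the log purely for illustration):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('4cdbea11-17d2-4466-a5f5-9a3d25e25d8a')
    def do_terminate_instance():
        pass  # shut down the domain, unplug VIFs, free resources

    do_terminate_instance()  # emits the acquire/release debug lines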
Oct 02 19:55:30 compute-0 kernel: tapc759e48d-48 (unregistering): left promiscuous mode
Oct 02 19:55:30 compute-0 NetworkManager[44968]: <info>  [1759434930.1828] device (tapc759e48d-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 ovn_controller[88435]: 2025-10-02T19:55:30Z|00050|binding|INFO|Releasing lport c759e48d-48de-4316-a1e4-9c04eb965fd0 from this chassis (sb_readonly=0)
Oct 02 19:55:30 compute-0 ovn_controller[88435]: 2025-10-02T19:55:30Z|00051|binding|INFO|Setting lport c759e48d-48de-4316-a1e4-9c04eb965fd0 down in Southbound
Oct 02 19:55:30 compute-0 ovn_controller[88435]: 2025-10-02T19:55:30Z|00052|binding|INFO|Removing iface tapc759e48d-48 ovn-installed in OVS
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.220 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:ab:29 192.168.0.227'], port_security=['fa:16:3e:10:ab:29 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-cyepyxtlaijo-cznrkcgobntv-port-7t6l3urie5h2', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '4cdbea11-17d2-4466-a5f5-9a3d25e25d8a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-cyepyxtlaijo-cznrkcgobntv-port-7t6l3urie5h2', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=c759e48d-48de-4316-a1e4-9c04eb965fd0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.222 285790 INFO neutron.agent.ovn.metadata.agent [-] Port c759e48d-48de-4316-a1e4-9c04eb965fd0 in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 unbound from our chassis
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.224 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 ceph-mon[191910]: pgmap v1511: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.258 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4589ba97-69df-4c14-9beb-91dba0537884]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 02 19:55:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 18.147s CPU time.
Oct 02 19:55:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:30 compute-0 systemd-machined[137646]: Machine qemu-2-instance-00000002 terminated.
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.312 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[51d8a699-b1eb-44a3-babb-76aec3c1dc84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.316 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[2af26558-02f9-4321-981d-3975ac317102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.359 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[d504bc23-23ad-409f-b032-3a37841b5c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.382 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[56262a36-0793-45ab-b0ad-d43f0448a5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 39769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 435656, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
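[Annotation] That privsep reply is a pyroute2 RTM_NEWLINK dump taken inside the ovnmeta-6e3c6c60-... namespace for the veth tap6e3c6c60-21; the matching RTM_NEWADDR reply carrying 169.254.169.254 and 192.168.0.2 follows a few lines below. Outside of privsep the same data is one call away, given root and the namespace name from the log:

    from pyroute2 import NetNS

    # Inspect the metadata namespace the agent is (re)provisioning above.
    with NetNS('ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'))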
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.418 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4c5f70-237a-466a-89a3-c6729717ce71]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435657, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435657, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.420 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.438 2 INFO nova.virt.libvirt.driver [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Instance destroyed successfully.
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.438 2 DEBUG nova.objects.instance [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'resources' on Instance uuid 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.442 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.443 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.443 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.443 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.464 2 DEBUG nova.virt.libvirt.vif [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:45:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-cyepyxtlaijo-cznrkcgobntv-vnf-te2v6j4ustz5',id=2,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:45:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-03sz3rjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:45:58Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc3y5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:55:30 compute-0 nova_compute[355794]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODA0Mjk5NTIwODM1MzMxMzcwNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgwNDI5OTUyMDgzNTMzMTM3MDc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MDQyOTk1MjA4MzUzMzEzNzA3PT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=4cdbea11-17d2-4466-a5f5-9a3d25e25d8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.465 2 DEBUG nova.network.os_vif_util [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.466 2 DEBUG nova.network.os_vif_util [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.466 2 DEBUG os_vif [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc759e48d-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.478 2 INFO os_vif [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:ab:29,bridge_name='br-int',has_traffic_filtering=True,id=c759e48d-48de-4316-a1e4-9c04eb965fd0,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc759e48d-48')
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.529 2 DEBUG nova.compute.manager [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-vif-unplugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.530 2 DEBUG oslo_concurrency.lockutils [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.530 2 DEBUG oslo_concurrency.lockutils [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.530 2 DEBUG oslo_concurrency.lockutils [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.530 2 DEBUG nova.compute.manager [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] No waiting events found dispatching network-vif-unplugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.531 2 DEBUG nova.compute.manager [req-d9c04201-dd43-4d73-bbf1-9fcaa3f865e5 req-9d635fd9-fd61-4f90-aa36-f3d183ccb11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-vif-unplugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.706 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:30.708 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:55:30 compute-0 nova_compute[355794]: 2025-10-02 19:55:30.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:30 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:55:30.464 2 DEBUG nova.virt.libvirt.vif [None req-6e66ae24-ce55-4f [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:55:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:31 compute-0 openstack_network_exporter[372736]: ERROR   19:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:55:31 compute-0 openstack_network_exporter[372736]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:31 compute-0 openstack_network_exporter[372736]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:31 compute-0 openstack_network_exporter[372736]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:55:31 compute-0 openstack_network_exporter[372736]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:55:32 compute-0 ceph-mon[191910]: pgmap v1512: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:32.307 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:32.308 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:32.309 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.558 2 INFO nova.virt.libvirt.driver [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Deleting instance files /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_del
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.559 2 INFO nova.virt.libvirt.driver [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Deletion of /var/lib/nova/instances/4cdbea11-17d2-4466-a5f5-9a3d25e25d8a_del complete
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.696 2 DEBUG nova.compute.manager [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.697 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.697 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.698 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.698 2 DEBUG nova.compute.manager [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] No waiting events found dispatching network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.699 2 WARNING nova.compute.manager [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received unexpected event network-vif-plugged-c759e48d-48de-4316-a1e4-9c04eb965fd0 for instance with vm_state active and task_state deleting.
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.699 2 DEBUG nova.compute.manager [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Received event network-changed-c759e48d-48de-4316-a1e4-9c04eb965fd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.700 2 DEBUG nova.compute.manager [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Refreshing instance network info cache due to event network-changed-c759e48d-48de-4316-a1e4-9c04eb965fd0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.701 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.701 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.702 2 DEBUG nova.network.neutron [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Refreshing network info cache for port c759e48d-48de-4316-a1e4-9c04eb965fd0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.707 2 DEBUG nova.virt.libvirt.host [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.707 2 INFO nova.virt.libvirt.host [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] UEFI support detected
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.710 2 INFO nova.compute.manager [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Took 2.74 seconds to destroy the instance on the hypervisor.
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.710 2 DEBUG oslo.service.loopingcall [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.711 2 DEBUG nova.compute.manager [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:55:32 compute-0 nova_compute[355794]: 2025-10-02 19:55:32.711 2 DEBUG nova.network.neutron [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 242 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 4 op/s
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:55:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:55:33 compute-0 podman[435688]: 2025-10-02 19:55:33.718551666 +0000 UTC m=+0.129011790 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute)
Oct 02 19:55:33 compute-0 podman[435687]: 2025-10-02 19:55:33.721898895 +0000 UTC m=+0.134367742 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:55:33 compute-0 nova_compute[355794]: 2025-10-02 19:55:33.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:34 compute-0 ceph-mon[191910]: pgmap v1513: 321 pgs: 321 active+clean; 242 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 4 op/s
Oct 02 19:55:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 209 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 29 op/s
Oct 02 19:55:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.541 2 DEBUG nova.network.neutron [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.595 2 DEBUG nova.network.neutron [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updated VIF entry in instance network info cache for port c759e48d-48de-4316-a1e4-9c04eb965fd0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.596 2 DEBUG nova.network.neutron [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Updating instance_info_cache with network_info: [{"id": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "address": "fa:16:3e:10:ab:29", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc759e48d-48", "ovs_interfaceid": "c759e48d-48de-4316-a1e4-9c04eb965fd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.601 2 INFO nova.compute.manager [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Took 2.89 seconds to deallocate network for instance.
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.670 2 DEBUG oslo_concurrency.lockutils [req-4512497e-ef01-4414-84dc-5edc7bbe7c1f req-988a872e-f29a-476e-b22d-5fa08ecccc47 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.677 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.678 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:35 compute-0 nova_compute[355794]: 2025-10-02 19:55:35.805 2 DEBUG oslo_concurrency.processutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:55:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032883784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:36 compute-0 ceph-mon[191910]: pgmap v1514: 321 pgs: 321 active+clean; 209 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 29 op/s
Oct 02 19:55:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3032883784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.349 2 DEBUG oslo_concurrency.processutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.362 2 DEBUG nova.compute.provider_tree [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.401 2 DEBUG nova.scheduler.client.report [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.430 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.461 2 INFO nova.scheduler.client.report [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Deleted allocations for instance 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a
Oct 02 19:55:36 compute-0 nova_compute[355794]: 2025-10-02 19:55:36.553 2 DEBUG oslo_concurrency.lockutils [None req-6e66ae24-ce55-4f6e-85ea-f385695227a0 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4cdbea11-17d2-4466-a5f5-9a3d25e25d8a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:55:37.714 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:38 compute-0 ceph-mon[191910]: pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:38 compute-0 podman[435751]: 2025-10-02 19:55:38.72388732 +0000 UTC m=+0.137621019 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:55:38 compute-0 podman[435752]: 2025-10-02 19:55:38.766172724 +0000 UTC m=+0.169902257 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, version=9.4, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Oct 02 19:55:38 compute-0 nova_compute[355794]: 2025-10-02 19:55:38.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:40 compute-0 ceph-mon[191910]: pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:40 compute-0 nova_compute[355794]: 2025-10-02 19:55:40.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:41 compute-0 podman[435790]: 2025-10-02 19:55:41.713150691 +0000 UTC m=+0.125282991 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 19:55:41 compute-0 podman[435789]: 2025-10-02 19:55:41.724715228 +0000 UTC m=+0.145725214 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 19:55:41 compute-0 podman[435791]: 2025-10-02 19:55:41.814988217 +0000 UTC m=+0.211465251 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 19:55:42 compute-0 ceph-mon[191910]: pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:43 compute-0 nova_compute[355794]: 2025-10-02 19:55:43.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:44 compute-0 ceph-mon[191910]: pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:55:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Oct 02 19:55:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:45 compute-0 nova_compute[355794]: 2025-10-02 19:55:45.433 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434930.432443, 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:55:45 compute-0 nova_compute[355794]: 2025-10-02 19:55:45.434 2 INFO nova.compute.manager [-] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] VM Stopped (Lifecycle Event)
Oct 02 19:55:45 compute-0 nova_compute[355794]: 2025-10-02 19:55:45.459 2 DEBUG nova.compute.manager [None req-bf36f7a8-ed51-4d00-82aa-ad6476f9888a - - - - - -] [instance: 4cdbea11-17d2-4466-a5f5-9a3d25e25d8a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:55:45 compute-0 nova_compute[355794]: 2025-10-02 19:55:45.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:45 compute-0 podman[435852]: 2025-10-02 19:55:45.713819542 +0000 UTC m=+0.128501396 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:55:45 compute-0 podman[435851]: 2025-10-02 19:55:45.725604305 +0000 UTC m=+0.143992338 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:55:46 compute-0 ceph-mon[191910]: pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Oct 02 19:55:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 767 B/s wr, 10 op/s
Oct 02 19:55:48 compute-0 ceph-mon[191910]: pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 767 B/s wr, 10 op/s
Oct 02 19:55:48 compute-0 nova_compute[355794]: 2025-10-02 19:55:48.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:49 compute-0 nova_compute[355794]: 2025-10-02 19:55:49.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:49 compute-0 nova_compute[355794]: 2025-10-02 19:55:49.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:55:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:50 compute-0 ceph-mon[191910]: pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:50 compute-0 nova_compute[355794]: 2025-10-02 19:55:50.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:52 compute-0 ceph-mon[191910]: pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:53 compute-0 nova_compute[355794]: 2025-10-02 19:55:53.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:53 compute-0 nova_compute[355794]: 2025-10-02 19:55:53.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:54 compute-0 ceph-mon[191910]: pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:54 compute-0 nova_compute[355794]: 2025-10-02 19:55:54.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:54 compute-0 nova_compute[355794]: 2025-10-02 19:55:54.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:55:55 compute-0 nova_compute[355794]: 2025-10-02 19:55:55.042 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:55:55 compute-0 nova_compute[355794]: 2025-10-02 19:55:55.043 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:55:55 compute-0 nova_compute[355794]: 2025-10-02 19:55:55.045 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:55:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:55:55 compute-0 nova_compute[355794]: 2025-10-02 19:55:55.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.352 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.375 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.376 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.376 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.377 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.410 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.411 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.411 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.411 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.412 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:56 compute-0 ceph-mon[191910]: pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:55:56 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297629323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:56 compute-0 nova_compute[355794]: 2025-10-02 19:55:56.986 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
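update_available_resource shells out to ceph df because the ephemeral disks live in the RBD vms pool, so free disk is cluster capacity rather than local filesystem space; each call costs about 0.57s here. A sketch that runs the same command and pulls the totals out of the JSON (needs a reachable cluster and the client.openstack keyring, so illustrative only):

    import json, subprocess

    # Same invocation as the resource tracker's, copied from the log.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)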
Oct 02 19:55:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:57 compute-0 podman[435916]: 2025-10-02 19:55:57.228123974 +0000 UTC m=+0.140969488 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.359 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.360 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.360 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.370 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.371 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.371 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.380 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.380 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 nova_compute[355794]: 2025-10-02 19:55:57.380 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:55:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3297629323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.003 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
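The warning above means this host's topology (multiple sockets per NUMA node, typical of a QEMU guest like this one) rules out the "socket" PCI NUMA affinity policy. That policy is normally requested through a flavor or image property; a hedged example of setting it (flavor name hypothetical), which this host would then refuse to honor per the warning:

    import subprocess

    # "my-flavor" is a hypothetical flavor name; the property is nova's
    # standard extra spec for PCI NUMA affinity.
    subprocess.run(
        ["openstack", "flavor", "set",
         "--property", "hw:pci_numa_affinity_policy=socket", "my-flavor"],
        check=True)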
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.004 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3427MB free_disk=59.88888168334961GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
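The resource view above carries the full libvirt PCI inventory as inline JSON. A quick sketch for summarizing it, with two entries copied from the log (vendor 8086 covers the Intel chipset functions, 1af4 the virtio devices) and the rest elided:

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:01.2", "vendor_id": "8086", "product_id": "7020"},
        {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
        # ... remaining 9 devices from the log elided
    ]
    print(Counter(d["vendor_id"] for d in pci_devices))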
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.004 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.005 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.098 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.099 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.099 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.099 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.100 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.181 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:58 compute-0 ceph-mon[191910]: pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:55:58 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454467656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.753 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.770 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.792 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.795 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.796 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
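The placement inventory a few lines up becomes schedulable capacity as capacity = (total - reserved) * allocation_ratio, which is why 8 physical vCPUs admit 32 vCPU allocations here. Worked through with the logged numbers:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "=>", cap)  # VCPU => 32.0, MEMORY_MB => 7167.0, DISK_GB => 52.2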
Oct 02 19:55:58 compute-0 nova_compute[355794]: 2025-10-02 19:55:58.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:55:59 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/454467656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:55:59 compute-0 podman[157186]: time="2025-10-02T19:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:55:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:55:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
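The two podman[157186] lines are the podman API service answering libpod REST calls over its unix socket (the exporter config later in this log points CONTAINER_HOST at /run/podman/podman.sock): first a listing of all containers, then a stats query. A sketch replaying the first query with curl:

    import json, subprocess

    # Endpoint copied from the access log above; socket path from the
    # podman_exporter config that appears later in this log.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True, text=True).stdout
    print(len(json.loads(out)), "containers")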
Oct 02 19:55:59 compute-0 nova_compute[355794]: 2025-10-02 19:55:59.995 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:59 compute-0 nova_compute[355794]: 2025-10-02 19:55:59.995 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:59 compute-0 nova_compute[355794]: 2025-10-02 19:55:59.996 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:00 compute-0 nova_compute[355794]: 2025-10-02 19:56:00.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:00 compute-0 nova_compute[355794]: 2025-10-02 19:56:00.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:00 compute-0 ceph-mon[191910]: pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:01 compute-0 openstack_network_exporter[372736]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:01 compute-0 openstack_network_exporter[372736]: ERROR   19:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:56:01 compute-0 openstack_network_exporter[372736]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:01 compute-0 openstack_network_exporter[372736]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:56:01 compute-0 openstack_network_exporter[372736]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
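These exporter errors are expected on a compute node: ovn-northd does not run here and this ovsdb-server lookup finds nothing, so there is no <daemon>.<pid>.ctl control socket for the appctl-style calls to target, and the dpif-netdev queries fail because the datapath is the kernel one (datapath_type "system" in the port binding earlier), not a userspace one. A sketch of the underlying check, with run-directory paths assumed from the usual packaging:

    import glob

    # appctl targets daemons via <daemon>.<pid>.ctl files in the run dirs
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control socket files")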
Oct 02 19:56:01 compute-0 ceph-mon[191910]: pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:56:03
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta']
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:03 compute-0 nova_compute[355794]: 2025-10-02 19:56:03.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:04 compute-0 ceph-mon[191910]: pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:56:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:56:04 compute-0 nova_compute[355794]: 2025-10-02 19:56:04.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:04 compute-0 podman[435959]: 2025-10-02 19:56:04.685163941 +0000 UTC m=+0.107151309 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:56:04 compute-0 podman[435960]: 2025-10-02 19:56:04.720657404 +0000 UTC m=+0.134941278 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:56:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:05 compute-0 nova_compute[355794]: 2025-10-02 19:56:05.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:06 compute-0 ceph-mon[191910]: pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:07 compute-0 ovn_controller[88435]: 2025-10-02T19:56:07Z|00053|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 02 19:56:08 compute-0 ceph-mon[191910]: pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:08 compute-0 nova_compute[355794]: 2025-10-02 19:56:08.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:09 compute-0 podman[436000]: 2025-10-02 19:56:09.707049605 +0000 UTC m=+0.122377923 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:56:09 compute-0 podman[436001]: 2025-10-02 19:56:09.732772789 +0000 UTC m=+0.140830294 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc.)
Oct 02 19:56:10 compute-0 ceph-mon[191910]: pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:10 compute-0 nova_compute[355794]: 2025-10-02 19:56:10.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:12 compute-0 ceph-mon[191910]: pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:12 compute-0 podman[436038]: 2025-10-02 19:56:12.737031657 +0000 UTC m=+0.148427066 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:56:12 compute-0 podman[436037]: 2025-10-02 19:56:12.74916267 +0000 UTC m=+0.168560541 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:56:12 compute-0 podman[436039]: 2025-10-02 19:56:12.770024634 +0000 UTC m=+0.174020006 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016569830736797551 of space, bias 1.0, pg target 0.49709492210392653 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:56:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
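Each pg_autoscaler pair above follows raw_target = capacity_ratio * bias * PG budget, after which the module quantizes to a power of two and skips changes too small to act on. A PG budget of 300, i.e. the default mon_target_pg_per_osd=100 times this cluster's 3 OSDs (an assumption, but it reproduces every target printed above), checks out:

    PG_BUDGET = 100 * 3   # assumed: mon_target_pg_per_osd=100, 3 OSDs
    pools = {             # (capacity ratio, bias) copied from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0016569830736797551, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        # matches the logged values 0.002155..., 0.497094..., 0.000610...
        print(name, "pg target", ratio * bias * PG_BUDGET)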
Oct 02 19:56:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:13 compute-0 sudo[436100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:13 compute-0 sudo[436100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:13 compute-0 sudo[436100]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:13 compute-0 nova_compute[355794]: 2025-10-02 19:56:13.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:14 compute-0 sudo[436125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:56:14 compute-0 sudo[436125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:14 compute-0 sudo[436125]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:14 compute-0 sudo[436150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:14 compute-0 sudo[436150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:14 compute-0 sudo[436150]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:14 compute-0 ceph-mon[191910]: pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:14 compute-0 sudo[436175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 19:56:14 compute-0 sudo[436175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:15 compute-0 podman[436270]: 2025-10-02 19:56:15.290946146 +0000 UTC m=+0.153927712 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 19:56:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:15 compute-0 podman[436270]: 2025-10-02 19:56:15.42051598 +0000 UTC m=+0.283497546 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:56:15 compute-0 nova_compute[355794]: 2025-10-02 19:56:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:15 compute-0 podman[436341]: 2025-10-02 19:56:15.979895127 +0000 UTC m=+0.115847740 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:56:15 compute-0 podman[436340]: 2025-10-02 19:56:15.993745226 +0000 UTC m=+0.131471236 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Oct 02 19:56:16 compute-0 ceph-mon[191910]: pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:16 compute-0 sudo[436175]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:56:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:56:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:16 compute-0 sudo[436468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:16 compute-0 sudo[436468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:16 compute-0 sudo[436468]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:16 compute-0 sudo[436493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:56:16 compute-0 sudo[436493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:16 compute-0 sudo[436493]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:17 compute-0 sudo[436518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:17 compute-0 sudo[436518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:17 compute-0 sudo[436518]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:17 compute-0 sudo[436543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:56:17 compute-0 sudo[436543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:17 compute-0 ceph-mon[191910]: pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:17 compute-0 sudo[436543]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:56:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
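config generate-minimal-conf is cephadm fetching a stripped ceph.conf (essentially the fsid plus mon addresses) from the mon to distribute to managed hosts. The same command works from any admin shell; a sketch (live cluster and admin keyring required, so illustrative only):

    import subprocess

    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          check=True, capture_output=True, text=True).stdout
    print(conf)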
Oct 02 19:56:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:56:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:56:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d16fe434-d5c6-4719-95dc-0f32710dd582 does not exist
Oct 02 19:56:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2fd43b46-d545-45b2-b8b9-9f4709d6384c does not exist
Oct 02 19:56:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a7d321cc-0235-4bc6-8020-e999e6bc8847 does not exist
Oct 02 19:56:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:56:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:56:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:56:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:56:18 compute-0 sudo[436599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:18 compute-0 sudo[436599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:18 compute-0 sudo[436599]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:18 compute-0 sudo[436624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:56:18 compute-0 sudo[436624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:18 compute-0 sudo[436624]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:18 compute-0 sudo[436649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:18 compute-0 sudo[436649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:18 compute-0 sudo[436649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:18 compute-0 sudo[436674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:56:18 compute-0 sudo[436674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
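
Annotation: the long sudo COMMAND above is the cephadm mgr module re-running ceph-volume inside a one-shot container to build OSDs on the three prepared LVs (lvm batch --no-auto ... --yes --no-systemd). A minimal sketch, assuming a cephadm binary on PATH and relying on the real --report flag of ceph-volume lvm batch, that previews the same batch call without creating anything:

    import subprocess

    # Same fsid and LV paths as the logged command; --report makes
    # ceph-volume print its plan instead of creating OSDs.
    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--",
           "lvm", "batch", "--no-auto", *LVS, "--report", "--format", "json"]
    print(subprocess.check_output(cmd, text=True))
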
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:56:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
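
Annotation: each handle_command/audit pair above is a mon command arriving as JSON over the wire. The same format can be exercised from Python with the librados bindings; a sketch, assuming python3-rados, a readable /etc/ceph/ceph.conf, and a client.admin keyring:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # Same JSON shape the audit channel records for the mgr's dispatch.
        cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                          "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() if outbuf else errs)
    finally:
        cluster.shutdown()
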
Oct 02 19:56:18 compute-0 nova_compute[355794]: 2025-10-02 19:56:18.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.046680858 +0000 UTC m=+0.077687346 container create d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.010911917 +0000 UTC m=+0.041918435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:19 compute-0 systemd[1]: Started libpod-conmon-d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1.scope.
Oct 02 19:56:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.196711015 +0000 UTC m=+0.227717563 container init d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.215116445 +0000 UTC m=+0.246122903 container start d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.222438489 +0000 UTC m=+0.253445047 container attach d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:56:19 compute-0 distracted_wiles[436755]: 167 167
Oct 02 19:56:19 compute-0 systemd[1]: libpod-d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1.scope: Deactivated successfully.
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.229278421 +0000 UTC m=+0.260284919 container died d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d2e3ded7db3e6bf32fa1f56180f5667091a0f02b52400c6c25cb43594c99d9-merged.mount: Deactivated successfully.
Oct 02 19:56:19 compute-0 podman[436738]: 2025-10-02 19:56:19.304065009 +0000 UTC m=+0.335071477 container remove d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:56:19 compute-0 systemd[1]: libpod-conmon-d33eade837181957cd3886ca053565e51a0658a44087043ec5fe8b86039629f1.scope: Deactivated successfully.
Oct 02 19:56:19 compute-0 podman[436779]: 2025-10-02 19:56:19.579544371 +0000 UTC m=+0.093667531 container create 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:56:19 compute-0 podman[436779]: 2025-10-02 19:56:19.5445228 +0000 UTC m=+0.058645980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:19 compute-0 systemd[1]: Started libpod-conmon-90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6.scope.
Oct 02 19:56:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:19 compute-0 podman[436779]: 2025-10-02 19:56:19.750131635 +0000 UTC m=+0.264254835 container init 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:56:19 compute-0 ceph-mon[191910]: pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:19 compute-0 podman[436779]: 2025-10-02 19:56:19.774664967 +0000 UTC m=+0.288788107 container start 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 19:56:19 compute-0 podman[436779]: 2025-10-02 19:56:19.797594786 +0000 UTC m=+0.311717996 container attach 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:56:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:56:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331190820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:56:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:56:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331190820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:56:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:20 compute-0 nova_compute[355794]: 2025-10-02 19:56:20.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1331190820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:56:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1331190820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
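
Annotation: the df and osd pool get-quota dispatches from client.openstack are the OpenStack volume-stats polling loop sizing the volumes pool. A sketch, assuming the ceph CLI plus a reachable client.openstack keyring, that pulls the same pool numbers:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "--name", "client.openstack", "df", "--format", "json"])
    for pool in json.loads(out)["pools"]:
        stats = pool["stats"]
        # bytes_used / max_avail are the fields the stats poller cares about.
        print(pool["name"], stats["bytes_used"], stats["max_avail"])
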
Oct 02 19:56:21 compute-0 adoring_herschel[436795]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:56:21 compute-0 adoring_herschel[436795]: --> relative data size: 1.0
Oct 02 19:56:21 compute-0 adoring_herschel[436795]: --> All data devices are unavailable
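
Annotation: "All data devices are unavailable" is expected here rather than an error: the three LVs already carry osd.0-2 (see the lvm list output further below), so the batch call has nothing new to create. A sketch, assuming ceph-volume inventory is available on the host (or via cephadm), that surfaces the per-device rejection reasons behind such a verdict:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        if not dev.get("available", True):
            print(dev["path"], dev.get("rejected_reasons", []))
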
Oct 02 19:56:21 compute-0 systemd[1]: libpod-90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6.scope: Deactivated successfully.
Oct 02 19:56:21 compute-0 podman[436779]: 2025-10-02 19:56:21.098863891 +0000 UTC m=+1.612987041 container died 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 19:56:21 compute-0 systemd[1]: libpod-90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6.scope: Consumed 1.246s CPU time.
Oct 02 19:56:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2f377562fd51b531853ae37da2a00d19f7b039e5fcd8fb34773ff3a13050d3-merged.mount: Deactivated successfully.
Oct 02 19:56:21 compute-0 podman[436779]: 2025-10-02 19:56:21.274358436 +0000 UTC m=+1.788481596 container remove 90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:56:21 compute-0 systemd[1]: libpod-conmon-90021651cb396679522887a1b07ff0de5fbad1de18844c92377eddfa1d4616a6.scope: Deactivated successfully.
Oct 02 19:56:21 compute-0 sudo[436674]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:21 compute-0 sudo[436835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:21 compute-0 sudo[436835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:21 compute-0 sudo[436835]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:21 compute-0 sudo[436860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:56:21 compute-0 sudo[436860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:21 compute-0 sudo[436860]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:21 compute-0 sudo[436885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:21 compute-0 sudo[436885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:21 compute-0 sudo[436885]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:21 compute-0 ceph-mon[191910]: pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:21 compute-0 sudo[436910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:56:21 compute-0 sudo[436910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.417756486 +0000 UTC m=+0.080956683 container create d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.386102175 +0000 UTC m=+0.049302432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:22 compute-0 systemd[1]: Started libpod-conmon-d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69.scope.
Oct 02 19:56:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.5808012 +0000 UTC m=+0.244001437 container init d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.60264636 +0000 UTC m=+0.265846547 container start d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.609631776 +0000 UTC m=+0.272832033 container attach d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:56:22 compute-0 hopeful_liskov[436991]: 167 167
Oct 02 19:56:22 compute-0 systemd[1]: libpod-d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69.scope: Deactivated successfully.
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.617617758 +0000 UTC m=+0.280817955 container died d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 19:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-09774a94d33cb01c3d34b32afb0627cda1004f496090d7345855a13cb1998aa0-merged.mount: Deactivated successfully.
Oct 02 19:56:22 compute-0 podman[436975]: 2025-10-02 19:56:22.704456866 +0000 UTC m=+0.367657053 container remove d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 19:56:22 compute-0 systemd[1]: libpod-conmon-d875bfdd0f39f967afd5805890fdab9dbb35f524fcfe66ea254db8b4da5b2f69.scope: Deactivated successfully.
Oct 02 19:56:22 compute-0 podman[437013]: 2025-10-02 19:56:22.985928247 +0000 UTC m=+0.092603592 container create f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 19:56:23 compute-0 podman[437013]: 2025-10-02 19:56:22.963890282 +0000 UTC m=+0.070565617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:23 compute-0 systemd[1]: Started libpod-conmon-f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962.scope.
Oct 02 19:56:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371cf2a53d86bbd3b747afc9ada229d606d45d995de52a91944568a50c5c62e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371cf2a53d86bbd3b747afc9ada229d606d45d995de52a91944568a50c5c62e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371cf2a53d86bbd3b747afc9ada229d606d45d995de52a91944568a50c5c62e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/371cf2a53d86bbd3b747afc9ada229d606d45d995de52a91944568a50c5c62e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:23 compute-0 podman[437013]: 2025-10-02 19:56:23.159885071 +0000 UTC m=+0.266560476 container init f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:56:23 compute-0 podman[437013]: 2025-10-02 19:56:23.184789563 +0000 UTC m=+0.291464908 container start f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 19:56:23 compute-0 podman[437013]: 2025-10-02 19:56:23.191804379 +0000 UTC m=+0.298479734 container attach f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 19:56:23 compute-0 nova_compute[355794]: 2025-10-02 19:56:23.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:24 compute-0 confident_mayer[437029]: {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     "0": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "devices": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "/dev/loop3"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             ],
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_name": "ceph_lv0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_size": "21470642176",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "name": "ceph_lv0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "tags": {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_name": "ceph",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.crush_device_class": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.encrypted": "0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_id": "0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.vdo": "0"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             },
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "vg_name": "ceph_vg0"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         }
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     ],
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     "1": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "devices": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "/dev/loop4"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             ],
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_name": "ceph_lv1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_size": "21470642176",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "name": "ceph_lv1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "tags": {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_name": "ceph",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.crush_device_class": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.encrypted": "0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_id": "1",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.vdo": "0"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             },
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "vg_name": "ceph_vg1"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         }
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     ],
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     "2": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "devices": [
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "/dev/loop5"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             ],
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_name": "ceph_lv2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_size": "21470642176",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "name": "ceph_lv2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "tags": {
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.cluster_name": "ceph",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.crush_device_class": "",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.encrypted": "0",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osd_id": "2",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:                 "ceph.vdo": "0"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             },
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "type": "block",
Oct 02 19:56:24 compute-0 confident_mayer[437029]:             "vg_name": "ceph_vg2"
Oct 02 19:56:24 compute-0 confident_mayer[437029]:         }
Oct 02 19:56:24 compute-0 confident_mayer[437029]:     ]
Oct 02 19:56:24 compute-0 confident_mayer[437029]: }
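
Annotation: the JSON block above is cephadm's host inventory refresh, ceph-volume lvm list --format json, keyed by OSD id. A sketch, assuming ceph-volume on the host, that re-runs the listing and flattens it to one line per OSD:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, lvs in sorted(json.loads(out).items(),
                              key=lambda kv: int(kv[0])):
        for lv in lvs:
            # lv_path / devices / tags match the fields shown in the log.
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])}, "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")
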
Oct 02 19:56:24 compute-0 systemd[1]: libpod-f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962.scope: Deactivated successfully.
Oct 02 19:56:24 compute-0 podman[437013]: 2025-10-02 19:56:24.070453483 +0000 UTC m=+1.177128808 container died f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:56:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-371cf2a53d86bbd3b747afc9ada229d606d45d995de52a91944568a50c5c62e2-merged.mount: Deactivated successfully.
Oct 02 19:56:24 compute-0 podman[437013]: 2025-10-02 19:56:24.154524927 +0000 UTC m=+1.261200242 container remove f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:56:24 compute-0 systemd[1]: libpod-conmon-f21beee1135f09e8e8e8d70f2f3b51fbe9d46acd4fbb2c7393f1a8628b8ac962.scope: Deactivated successfully.
Oct 02 19:56:24 compute-0 ceph-mon[191910]: pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:24 compute-0 sudo[436910]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:24 compute-0 sudo[437052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:24 compute-0 sudo[437052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:24 compute-0 sudo[437052]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:24 compute-0 sudo[437077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:56:24 compute-0 sudo[437077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:24 compute-0 sudo[437077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:24 compute-0 sudo[437102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:24 compute-0 sudo[437102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:24 compute-0 sudo[437102]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:24 compute-0 sudo[437127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:56:24 compute-0 sudo[437127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.358799074 +0000 UTC m=+0.127438058 container create 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.290099528 +0000 UTC m=+0.058738542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:25 compute-0 systemd[1]: Started libpod-conmon-05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729.scope.
Oct 02 19:56:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:25 compute-0 nova_compute[355794]: 2025-10-02 19:56:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.567544673 +0000 UTC m=+0.336183697 container init 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.584859573 +0000 UTC m=+0.353498557 container start 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.594969472 +0000 UTC m=+0.363608466 container attach 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:56:25 compute-0 funny_hodgkin[437204]: 167 167
Oct 02 19:56:25 compute-0 systemd[1]: libpod-05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729.scope: Deactivated successfully.
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.598584398 +0000 UTC m=+0.367223382 container died 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 19:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e50d6de604be3d33429e5dab869e07ee01e8513cae0004fc341804236ec0e4e-merged.mount: Deactivated successfully.
Oct 02 19:56:25 compute-0 podman[437188]: 2025-10-02 19:56:25.767326073 +0000 UTC m=+0.535965057 container remove 05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 19:56:25 compute-0 systemd[1]: libpod-conmon-05024f44b70bb23f355abcab02a0c3535359d83fb32dfa3c65d495b81cad7729.scope: Deactivated successfully.
Oct 02 19:56:26 compute-0 podman[437228]: 2025-10-02 19:56:26.069242557 +0000 UTC m=+0.098009876 container create a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:56:26 compute-0 podman[437228]: 2025-10-02 19:56:26.030172399 +0000 UTC m=+0.058939778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:56:26 compute-0 systemd[1]: Started libpod-conmon-a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f.scope.
Oct 02 19:56:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd84ad7ffb984d5bdf75783dc95baf83bc9c449703ff0dd1530c408dde16b765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:26 compute-0 ceph-mon[191910]: pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd84ad7ffb984d5bdf75783dc95baf83bc9c449703ff0dd1530c408dde16b765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd84ad7ffb984d5bdf75783dc95baf83bc9c449703ff0dd1530c408dde16b765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd84ad7ffb984d5bdf75783dc95baf83bc9c449703ff0dd1530c408dde16b765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:56:26 compute-0 podman[437228]: 2025-10-02 19:56:26.267198069 +0000 UTC m=+0.295965448 container init a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:56:26 compute-0 podman[437228]: 2025-10-02 19:56:26.28606144 +0000 UTC m=+0.314828769 container start a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 19:56:26 compute-0 podman[437228]: 2025-10-02 19:56:26.292328957 +0000 UTC m=+0.321096336 container attach a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:56:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:27 compute-0 frosty_perlman[437243]: {
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_id": 1,
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "type": "bluestore"
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     },
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_id": 2,
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "type": "bluestore"
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     },
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_id": 0,
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:         "type": "bluestore"
Oct 02 19:56:27 compute-0 frosty_perlman[437243]:     }
Oct 02 19:56:27 compute-0 frosty_perlman[437243]: }
Oct 02 19:56:27 compute-0 systemd[1]: libpod-a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f.scope: Deactivated successfully.
Oct 02 19:56:27 compute-0 podman[437228]: 2025-10-02 19:56:27.373994386 +0000 UTC m=+1.402761735 container died a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:56:27 compute-0 systemd[1]: libpod-a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f.scope: Consumed 1.089s CPU time.
Oct 02 19:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd84ad7ffb984d5bdf75783dc95baf83bc9c449703ff0dd1530c408dde16b765-merged.mount: Deactivated successfully.
Oct 02 19:56:27 compute-0 podman[437228]: 2025-10-02 19:56:27.479772307 +0000 UTC m=+1.508539636 container remove a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:56:27 compute-0 systemd[1]: libpod-conmon-a3fb4ad659415caf0001e567b940e6510a5691a872620eaf3d70d08d9c614b2f.scope: Deactivated successfully.
Oct 02 19:56:27 compute-0 sudo[437127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:56:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:56:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9fa556fa-6f78-4eb6-a51a-a62d90f7a3f3 does not exist
Oct 02 19:56:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e793993b-baaa-40fa-b626-13f8dc999e89 does not exist
Oct 02 19:56:27 compute-0 podman[437277]: 2025-10-02 19:56:27.554423891 +0000 UTC m=+0.143797183 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, container_name=multipathd)
Oct 02 19:56:27 compute-0 sudo[437306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:56:27 compute-0 sudo[437306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:27 compute-0 sudo[437306]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:27 compute-0 sudo[437331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:56:27 compute-0 sudo[437331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:56:27 compute-0 sudo[437331]: pam_unix(sudo:session): session closed for user root
Oct 02 19:56:28 compute-0 ceph-mon[191910]: pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:56:28 compute-0 nova_compute[355794]: 2025-10-02 19:56:28.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:29 compute-0 podman[157186]: time="2025-10-02T19:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:56:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:56:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9049 "" "Go-http-client/1.1"
Oct 02 19:56:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:30 compute-0 nova_compute[355794]: 2025-10-02 19:56:30.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:30 compute-0 ceph-mon[191910]: pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: ERROR   19:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:56:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:56:31 compute-0 ceph-mon[191910]: pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:56:32.308 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:56:32.309 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:56:32.310 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:56:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:56:33 compute-0 nova_compute[355794]: 2025-10-02 19:56:33.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:34 compute-0 ceph-mon[191910]: pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:35 compute-0 nova_compute[355794]: 2025-10-02 19:56:35.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:35 compute-0 podman[437356]: 2025-10-02 19:56:35.70503683 +0000 UTC m=+0.121553801 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:56:35 compute-0 podman[437357]: 2025-10-02 19:56:35.705230416 +0000 UTC m=+0.115658215 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 02 19:56:36 compute-0 ceph-mon[191910]: pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:38 compute-0 ceph-mon[191910]: pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:38 compute-0 nova_compute[355794]: 2025-10-02 19:56:38.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:40 compute-0 ceph-mon[191910]: pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:40 compute-0 nova_compute[355794]: 2025-10-02 19:56:40.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:40 compute-0 podman[437397]: 2025-10-02 19:56:40.737694551 +0000 UTC m=+0.149698190 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:56:40 compute-0 podman[437398]: 2025-10-02 19:56:40.765323766 +0000 UTC m=+0.173611536 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, version=9.4, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543)
Oct 02 19:56:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:42 compute-0 ceph-mon[191910]: pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:43 compute-0 podman[437435]: 2025-10-02 19:56:43.71714612 +0000 UTC m=+0.129582165 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:56:43 compute-0 podman[437434]: 2025-10-02 19:56:43.7310684 +0000 UTC m=+0.150761858 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 19:56:43 compute-0 podman[437436]: 2025-10-02 19:56:43.79087818 +0000 UTC m=+0.191070020 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 19:56:44 compute-0 nova_compute[355794]: 2025-10-02 19:56:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:44 compute-0 ceph-mon[191910]: pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:45 compute-0 nova_compute[355794]: 2025-10-02 19:56:45.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:46 compute-0 ceph-mon[191910]: pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:46 compute-0 podman[437494]: 2025-10-02 19:56:46.748216971 +0000 UTC m=+0.154669972 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:56:46 compute-0 podman[437493]: 2025-10-02 19:56:46.755136745 +0000 UTC m=+0.168673044 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7)
Oct 02 19:56:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:48 compute-0 ceph-mon[191910]: pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:49 compute-0 nova_compute[355794]: 2025-10-02 19:56:49.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:49 compute-0 nova_compute[355794]: 2025-10-02 19:56:49.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:49 compute-0 nova_compute[355794]: 2025-10-02 19:56:49.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:56:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:50 compute-0 ceph-mon[191910]: pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:50 compute-0 nova_compute[355794]: 2025-10-02 19:56:50.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:50 compute-0 nova_compute[355794]: 2025-10-02 19:56:50.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:50 compute-0 nova_compute[355794]: 2025-10-02 19:56:50.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:56:50 compute-0 nova_compute[355794]: 2025-10-02 19:56:50.594 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:56:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:52 compute-0 ceph-mon[191910]: pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:54 compute-0 nova_compute[355794]: 2025-10-02 19:56:54.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:54 compute-0 ceph-mon[191910]: pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:54 compute-0 nova_compute[355794]: 2025-10-02 19:56:54.595 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:54 compute-0 nova_compute[355794]: 2025-10-02 19:56:54.596 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:56:55 compute-0 nova_compute[355794]: 2025-10-02 19:56:55.087 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:56:55 compute-0 nova_compute[355794]: 2025-10-02 19:56:55.088 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:56:55 compute-0 nova_compute[355794]: 2025-10-02 19:56:55.089 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:56:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:56:55 compute-0 nova_compute[355794]: 2025-10-02 19:56:55.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.409 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updating instance_info_cache with network_info: [{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.437 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.438 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.438 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:56 compute-0 ceph-mon[191910]: pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.609 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.610 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.610 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.611 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:56:56 compute-0 nova_compute[355794]: 2025-10-02 19:56:56.612 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:56:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496241637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.124 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:56:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.263 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.264 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.264 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.273 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.274 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.274 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.283 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:56:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3496241637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.867 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.869 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3412MB free_disk=59.88888168334961GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.869 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:57 compute-0 nova_compute[355794]: 2025-10-02 19:56:57.870 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.092 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.092 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance b88114e8-b15d-4a78-ac15-3dd7ee30b949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.092 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.093 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.093 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.266 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:58 compute-0 ceph-mon[191910]: pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:58 compute-0 podman[437578]: 2025-10-02 19:56:58.715803742 +0000 UTC m=+0.132581035 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:56:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:56:58 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003962257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.803 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
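Nova sizes its Ceph-backed storage by shelling out to the exact command captured above. A standalone sketch of the same call, reusing the flags from the log line; the JSON key names ('stats', 'total_bytes', 'total_avail_bytes') follow the usual `ceph df --format=json` layout and should be treated as assumptions if your Ceph release differs:

    import json
    import subprocess

    CMD = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def ceph_capacity_bytes() -> tuple[int, int]:
        """Run `ceph df` and return (total_bytes, avail_bytes) for the cluster."""
        raw = subprocess.check_output(CMD, text=True)
        stats = json.loads(raw)["stats"]
        return stats["total_bytes"], stats["total_avail_bytes"]

    if __name__ == "__main__":
        total, avail = ceph_capacity_bytes()
        print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")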
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.816 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.851 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
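Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical VCPUs with allocation_ratio 4.0 back up to 32 guest vCPUs. A small sketch applying that formula to the inventory reported above:

    # Inventory exactly as reported for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0.
    INVENTORY = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    def capacity(inv: dict) -> float:
        # Placement's effective capacity: (total - reserved) * allocation_ratio.
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    for rc, inv in INVENTORY.items():
        print(f"{rc}: {capacity(inv):.1f} schedulable units")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 52.2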
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.855 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:56:58 compute-0 nova_compute[355794]: 2025-10-02 19:56:58.856 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
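The "compute_resources" lock bracketing this update is oslo.concurrency's named-lock helper; while it was held (0.986 s here), every other resource-tracker path had to wait. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the function body is a stand-in, not nova's actual code:

    import time
    from oslo_concurrency import lockutils

    # Named, process-local lock; nova serializes resource-tracker updates the
    # same way so usage accounting and placement writes never interleave.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        time.sleep(0.1)  # stand-in for recomputing usage and syncing placement

    if __name__ == "__main__":
        update_available_resource()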
Oct 02 19:56:59 compute-0 nova_compute[355794]: 2025-10-02 19:56:59.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:56:59 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1003962257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:56:59 compute-0 podman[157186]: time="2025-10-02T19:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:56:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:56:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9069 "" "Go-http-client/1.1"
Oct 02 19:56:59 compute-0 nova_compute[355794]: 2025-10-02 19:56:59.852 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:59 compute-0 nova_compute[355794]: 2025-10-02 19:56:59.852 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:59 compute-0 nova_compute[355794]: 2025-10-02 19:56:59.852 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:00 compute-0 ceph-mon[191910]: pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:00 compute-0 nova_compute[355794]: 2025-10-02 19:57:00.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:00 compute-0 nova_compute[355794]: 2025-10-02 19:57:00.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:00 compute-0 nova_compute[355794]: 2025-10-02 19:57:00.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:01 compute-0 openstack_network_exporter[372736]: ERROR   19:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:57:01 compute-0 openstack_network_exporter[372736]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:01 compute-0 openstack_network_exporter[372736]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:01 compute-0 openstack_network_exporter[372736]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:57:01 compute-0 openstack_network_exporter[372736]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
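All four exporter ERRORs above share one root cause: openstack_network_exporter talks to ovsdb-server, ovs-vswitchd, and ovn-northd over their UNIX control sockets, and it found none. A hedged sketch of the same pre-flight check; the /var/run/openvswitch location and <daemon>.<pid>.ctl naming are conventional defaults and may differ on other builds (ovn-northd, in particular, often lives under a separate OVN run directory):

    import glob

    # Conventional control-socket location for OVS daemons; an assumption
    # here, since packaging can relocate it.
    RUN_DIR = "/var/run/openvswitch"

    def control_sockets(daemon: str) -> list[str]:
        """Return candidate appctl sockets for a daemon, e.g. ovs-vswitchd."""
        return glob.glob(f"{RUN_DIR}/{daemon}.*.ctl")

    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        socks = control_sockets(daemon)
        print(daemon, "->", socks or "no control socket files found")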
Oct 02 19:57:01 compute-0 nova_compute[355794]: 2025-10-02 19:57:01.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:02 compute-0 ceph-mon[191910]: pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:57:03
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'vms', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log']
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:03 compute-0 ceph-mon[191910]: pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:04 compute-0 nova_compute[355794]: 2025-10-02 19:57:04.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:57:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.298 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.299 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
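Each "Registering pollster ... to be executed via executor" line above hands one stevedore extension to a shared ThreadPoolExecutor; since this polling task runs with a single worker thread (the "[1] threads" line earlier), the pollsters execute strictly one after another. A toy reduction of that dispatch pattern, with plain functions standing in for stevedore extensions:

    from concurrent.futures import ThreadPoolExecutor

    # Stand-ins for registered pollsters; names mirror the metrics polled below.
    def poll(metric: str) -> str:
        return f"polled {metric}"

    METRICS = [
        "disk.device.read.requests",
        "disk.device.usage",
        "disk.device.write.bytes",
    ]

    # One worker reproduces the serialized behaviour ceilometer logs here.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, m) for m in METRICS]
        for f in futures:
            print(f.result())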
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.312 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'name': 'vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.325 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '58f8959a-5f7e-44a5-9dca-65be0506a4c1', 'name': 'vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {'metering.server_group': 'd2d7e2b0-01e0-44b1-b2c7-fe502b333743'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
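The discovered flavor (m1.small: vcpus 1, ram 512, disk 1, ephemeral 1, swap 0) lines up with the placement allocations at the top of this section: VCPU 1 and MEMORY_MB 512 come straight from the flavor, and DISK_GB 2 is root disk plus ephemeral. A quick sketch of that mapping; the swap conversion reflects that flavors specify swap in MB while placement counts GB:

    FLAVOR = {"vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1, "swap": 0}

    def placement_allocations(flavor: dict) -> dict:
        """Derive the placement resource request implied by a flavor."""
        return {
            "VCPU": flavor["vcpus"],
            "MEMORY_MB": flavor["ram"],
            # Root + ephemeral + swap (swap is in MB, placement counts GB).
            "DISK_GB": flavor["disk"] + flavor["ephemeral"] + flavor["swap"] // 1024,
        }

    print(placement_allocations(FLAVOR))
    # {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 2} -- matches the tracker's view.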
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:57:04.327253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.400 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.401 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.402 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.477 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.478 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.479 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.553 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.555 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.555 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.558 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:57:04.558743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.597 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.597 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.629 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.630 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.630 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.659 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.659 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.660 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
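The repeated 1073741824-byte samples above are exactly 1 GiB, matching the flavor's 1 GB root and ephemeral disks; each instance's much smaller third sample belongs to a third disk device the log does not name (plausibly a config drive). The arithmetic, for reference:

    samples = [1073741824, 1073741824, 485376]  # bytes, from instance d4e04444...

    for volume in samples:
        print(f"{volume} B = {volume / 2**30:.6f} GiB")
    # 1073741824 B = 1.000000 GiB  (the flavor's 1 GB root / ephemeral disks)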
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.662 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.662 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.662 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.663 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:57:04.662511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.664 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.665 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.665 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.666 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.666 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.666 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.667 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.670 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.670 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.671 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.671 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 9610815435 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.672 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 22315613 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.673 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.673 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 6692352882 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:57:04.669844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.674 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 35191695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.675 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.676 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:57:04.677325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.715 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.756 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.795 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
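power.state volume 1 for all three instances agrees with their OS-EXT-STS:vm_state of 'running'. The value matches libvirt's virDomainState numbering, where 1 is VIR_DOMAIN_RUNNING; reading ceilometer's sample as a direct pass-through of that enum is an assumption here, but the numbering below is libvirt's documented one:

    # libvirt virDomainState values (see libvirt.h); 1 == VIR_DOMAIN_RUNNING.
    LIBVIRT_DOMAIN_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        2: "BLOCKED",
        3: "PAUSED",
        4: "SHUTDOWN",
        5: "SHUTOFF",
        6: "CRASHED",
        7: "PMSUSPENDED",
    }

    print(LIBVIRT_DOMAIN_STATE[1])  # RUNNING, as sampled for all three instances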
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.797 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.798 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.798 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.798 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.799 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.800 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.800 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.801 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.801 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.802 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:57:04.798063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.803 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.805 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:57:04.805850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.815 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.822 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.828 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.830 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.831 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:57:04.831028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
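[Annotation] The skip message above (manager.py:321) is the short-circuit path: a pollster whose discovery has already run this cycle and yielded nothing it has not handled is skipped rather than polled again. A hedged sketch of that bookkeeping; the cache layout is an assumption, not ceilometer's actual structure.

    # Assumed bookkeeping behind the "Skip pollster ..." line: track which
    # (pollster, resource) pairs were already handled this cycle, and skip
    # a pollster that has nothing new left to poll.
    def maybe_poll(name, discovered, handled):
        new = [r for r in discovered if (name, r) not in handled]
        if not new:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        handled.update((name, r) for r in new)
        print(f"Polling pollster {name} on {len(new)} resources")


    handled = {("network.incoming.bytes.rate",
                "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77")}
    maybe_poll("network.incoming.bytes.rate",
               ["d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"], handled)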
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.834 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:57:04.836582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.841 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.841 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:57:04.841125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.842 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.843 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.845 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.847 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.848 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:57:04.847204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.849 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
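[Annotation] Every cycle above hits the same pair of coordination messages with group name [None] and hashrings [None], i.e. the no-op path. When a polling source does configure a coordination group, agents conventionally split the resource set with a hash ring so each instance is polled by exactly one agent. A small illustrative ring (md5-based; not ceilometer's tooz-backed implementation):

    import hashlib


    def ring_owner(resource_id, members):
        # Choose the member whose hash is the next one at or after the
        # resource's hash, wrapping around the top of the ring.
        def h(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        target = h(resource_id)
        return min(members, key=lambda m: (h(m) - target) % 2**128)


    agents = ["agent-0", "agent-1", "agent-2"]
    print(ring_owner("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", agents))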
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.850 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.851 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.851 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:57:04.851324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.852 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.853 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.855 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:57:04.855475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.856 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.857 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.858 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.860 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:57:04.859310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.860 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.861 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.861 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.862 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.862 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.863 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.863 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.864 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
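[Annotation] Unlike the network meters, disk.device.read.bytes logs three volumes per instance UUID: the disk.device.* family emits one sample per attached block device, and each of these instances appears to have three. A sketch of that flattening; the device names and sample fields are assumptions, not ceilometer's actual sample schema.

    # Assumed reshaping of per-device stats into one sample per
    # (instance, device) pair, matching the repeated UUIDs above.
    per_instance = {
        "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77":
            {"vda": 23308800, "vdb": 3227648, "vdc": 274786},
    }

    samples = [
        {"resource_id": f"{uuid}-{dev}",
         "meter": "disk.device.read.bytes",
         "volume": vol}
        for uuid, devs in per_instance.items()
        for dev, vol in devs.items()
    ]
    for s in samples:
        print(s)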
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.866 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.867 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.867 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:57:04.867341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.868 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.868 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.870 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.871 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.871 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:57:04.871336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.872 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.873 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.873 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.874 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.874 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.875 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.876 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.876 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.878 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.879 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.880 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.881 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:57:04.880493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.881 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.882 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/memory.usage volume: 49.015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
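[Annotation] Two details worth noting in the memory.usage cycle: the volumes (48.88, 48.94, 49.02) are consistent with ceilometer reporting memory.usage in MB, and the heartbeat lines come from two different workers, with 14 emitting the heartbeat and 12 logging the _update_status confirmation. That split suggests a producer/consumer hand-off. A minimal threaded sketch of the pattern; the queue, names, and use of threads are assumptions (the agent itself runs separate processes):

    import queue
    import threading
    from datetime import datetime, timezone

    heartbeats = queue.Queue()
    status = {}


    def worker(meter):
        # Pollster side: announce that this meter was just polled.
        heartbeats.put((meter, datetime.now(timezone.utc)))


    def status_keeper():
        # Status side: drain announcements and record the latest stamp.
        while True:
            meter, ts = heartbeats.get()
            if meter is None:
                break
            status[meter] = ts
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")


    t = threading.Thread(target=status_keeper)
    t.start()
    worker("memory.usage")
    heartbeats.put((None, None))  # sentinel to stop the keeper
    t.join()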
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.885 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.885 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:57:04.885334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.886 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2562 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.886 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.incoming.bytes volume: 1954 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.887 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.889 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.889 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.890 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.890 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.891 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:57:04.889745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.892 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:57:04.892543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.893 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.893 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.894 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.894 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.895 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.895 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.895 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.896 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.898 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:57:04.897986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.898 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.899 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.900 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:57:04.900619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.901 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.901 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.902 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:57:04.903247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.903 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 44660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.904 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/cpu volume: 38980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.904 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/cpu volume: 40710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
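[Annotation] The cpu meter is cumulative CPU time, which ceilometer documents in nanoseconds; that makes the raw volumes above easier to read, e.g. the first instance has consumed roughly 44.7 seconds of CPU since it started. A quick sanity check on the three readings:

    # cpu volumes above read as cumulative nanoseconds of CPU time.
    for uuid, ns in {
        "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77": 44_660_000_000,
        "b88114e8-b15d-4a78-ac15-3dd7ee30b949": 38_980_000_000,
        "58f8959a-5f7e-44a5-9dca-65be0506a4c1": 40_710_000_000,
    }.items():
        print(uuid, ns / 1e9, "s")  # ~44.66 s, ~38.98 s, ~40.71 s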
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.905 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.906 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.906 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:57:04.906266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.907 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.907 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.907 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 1897675157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.908 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 270926831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.908 14 DEBUG ceilometer.compute.pollsters [-] b88114e8-b15d-4a78-ac15-3dd7ee30b949/disk.device.read.latency volume: 180472901 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.908 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 1997650221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.909 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 337600166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.909 14 DEBUG ceilometer.compute.pollsters [-] 58f8959a-5f7e-44a5-9dca-65be0506a4c1/disk.device.read.latency volume: 232324009 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
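[editor's note] The cycle above is the shape of every compute poll: read a cumulative counter for an instance (guest CPU time in nanoseconds, per-device read latency) and wrap it in a sample at _stats_to_sample. A minimal sketch of that step; the Sample record and cpu_sample helper are illustrative, not ceilometer's actual classes:

    # Illustrative only: the cumulative-counter-to-sample step logged above.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        name: str         # e.g. "cpu" or "disk.device.read.latency"
        type: str         # "cumulative": the counter only ever grows
        unit: str
        volume: int
        resource_id: str  # the instance UUID seen in the log lines

    def cpu_sample(instance_id: str, cpu_time_ns: int) -> Sample:
        # volume 40710000000 above is cumulative guest CPU time in nanoseconds
        return Sample("cpu", "cumulative", "ns", cpu_time_ns, instance_id)

    print(cpu_sample("58f8959a-5f7e-44a5-9dca-65be0506a4c1", 40_710_000_000))
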
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.912 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.913 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.914 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.915 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:57:04.916 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:05 compute-0 nova_compute[355794]: 2025-10-02 19:57:05.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:06 compute-0 ceph-mon[191910]: pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:06 compute-0 podman[437601]: 2025-10-02 19:57:06.695160991 +0000 UTC m=+0.117888795 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:57:06 compute-0 podman[437602]: 2025-10-02 19:57:06.711351881 +0000 UTC m=+0.117725460 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
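[editor's note] These health_status records are written each time podman's healthcheck timer runs the configured test command ('/openstack/healthcheck compute' here) inside the container. The last recorded state can be read back on demand; a small sketch against podman's inspect output, with the container name taken from the log:

    # Query the health state podman recorded for a container.
    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "ceilometer_agent_compute"],
        capture_output=True, check=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    print(state.get("Health", {}).get("Status"))  # e.g. "healthy", as logged above
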
Oct 02 19:57:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:08 compute-0 ceph-mon[191910]: pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:08 compute-0 nova_compute[355794]: 2025-10-02 19:57:08.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:08 compute-0 nova_compute[355794]: 2025-10-02 19:57:08.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
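[editor's note] _cleanup_incomplete_migrations is one of the ComputeManager jobs driven by oslo.service's periodic task machinery, which is where the run_periodic_tasks frame above comes from. A hedged sketch of how such a task is declared (requires oslo.service; the Manager class and the 300 s spacing are assumptions, not nova's real values):

    # Sketch of an oslo.service periodic task, modeled on the log lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=300)  # spacing is an assumed value
        def _cleanup_incomplete_migrations(self, context):
            print("Cleaning up deleted instances with incomplete migration")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # the service loop normally calls this
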
Oct 02 19:57:09 compute-0 nova_compute[355794]: 2025-10-02 19:57:09.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:10 compute-0 ceph-mon[191910]: pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.358003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030358067, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3433948, "memory_usage": 3490400, "flush_reason": "Manual Compaction"}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030382731, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3368162, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30133, "largest_seqno": 32175, "table_properties": {"data_size": 3358772, "index_size": 5948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18502, "raw_average_key_size": 20, "raw_value_size": 3340266, "raw_average_value_size": 3618, "num_data_blocks": 264, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759434801, "oldest_key_time": 1759434801, "file_creation_time": 1759435030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 24814 microseconds, and 15192 cpu microseconds.
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.382817) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3368162 bytes OK
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.382843) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.386334) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.386360) EVENT_LOG_v1 {"time_micros": 1759435030386352, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.386445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3425420, prev total WAL file size 3425420, number of live WAL files 2.
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.388707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3289KB)], [68(7055KB)]
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030388781, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10593133, "oldest_snapshot_seqno": -1}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5346 keys, 8887686 bytes, temperature: kUnknown
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030482739, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8887686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8851405, "index_size": 21835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 133899, "raw_average_key_size": 25, "raw_value_size": 8754218, "raw_average_value_size": 1637, "num_data_blocks": 901, "num_entries": 5346, "num_filter_entries": 5346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.483114) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8887686 bytes
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.486249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.6 rd, 94.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5860, records dropped: 514 output_compression: NoCompression
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.486286) EVENT_LOG_v1 {"time_micros": 1759435030486270, "job": 38, "event": "compaction_finished", "compaction_time_micros": 94093, "compaction_time_cpu_micros": 48047, "output_level": 6, "num_output_files": 1, "total_output_size": 8887686, "num_input_records": 5860, "num_output_records": 5346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030487685, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435030490812, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.388557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.491124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.491134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.491139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.491143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:57:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:57:10.491146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
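[editor's note] The JOB 38 summary reports its own amplification and throughput figures, and they can be reproduced from the byte counts in the EVENT_LOG entries above: the L0 input is table #70 from the flush, "input_data_size" adds the existing L6 file #68, and the output is table #71.

    # Reproducing rocksdb's JOB 38 numbers from its own event log.
    l0_in    = 3_368_162       # table #70 (flushed L0 file)
    total_in = 10_593_133      # "input_data_size" = #70 + #68
    out      = 8_887_686       # table #71 written to L6
    secs     = 94_093 / 1e6    # "compaction_time_micros"

    print(round((total_in + out) / l0_in, 1))  # 5.8   read-write-amplify
    print(round(out / l0_in, 1))               # 2.6   write-amplify
    print(round(total_in / secs / 1e6, 1))     # 112.6 MB/sec rd
    print(round(out / secs / 1e6, 1))          # 94.5  MB/sec wr
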
Oct 02 19:57:10 compute-0 nova_compute[355794]: 2025-10-02 19:57:10.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:11 compute-0 podman[437646]: 2025-10-02 19:57:11.721857811 +0000 UTC m=+0.129703069 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.buildah.version=1.29.0, name=ubi9)
Oct 02 19:57:11 compute-0 podman[437645]: 2025-10-02 19:57:11.736221143 +0000 UTC m=+0.158283798 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:57:12 compute-0 ceph-mon[191910]: pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016569830736797551 of space, bias 1.0, pg target 0.49709492210392653 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:57:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
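[editor's note] The autoscaler's "pg target" values follow directly from the logged usage ratios: pg_target = usage_ratio * bias * (num_osds * mon_target_pg_per_osd), assuming this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100; the result is then quantized to a power of two, as the "quantized to" column shows.

    # Reproducing the pg_autoscaler targets from the ratios logged above.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.0016569830736797551,  1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 3 * 100)  # matches each "pg target" above
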
Oct 02 19:57:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:13 compute-0 ceph-mon[191910]: pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:14 compute-0 nova_compute[355794]: 2025-10-02 19:57:14.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:14 compute-0 podman[437682]: 2025-10-02 19:57:14.729214141 +0000 UTC m=+0.141196874 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:57:14 compute-0 podman[437683]: 2025-10-02 19:57:14.748106493 +0000 UTC m=+0.147576443 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:57:14 compute-0 podman[437684]: 2025-10-02 19:57:14.80477785 +0000 UTC m=+0.198768524 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:57:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:15 compute-0 nova_compute[355794]: 2025-10-02 19:57:15.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:16 compute-0 ceph-mon[191910]: pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:17 compute-0 podman[437746]: 2025-10-02 19:57:17.718183804 +0000 UTC m=+0.128119936 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:57:17 compute-0 podman[437745]: 2025-10-02 19:57:17.719179071 +0000 UTC m=+0.131908907 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:57:18 compute-0 ceph-mon[191910]: pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:19 compute-0 nova_compute[355794]: 2025-10-02 19:57:19.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:19 compute-0 ceph-mon[191910]: pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:57:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610766297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:57:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:57:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610766297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:57:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:20 compute-0 nova_compute[355794]: 2025-10-02 19:57:20.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2610766297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:57:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2610766297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
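[editor's note] The audited df and get-quota commands come from a librados client (entity client.openstack, presumably the OpenStack services polling pool capacity). For reference, the same mon_command can be issued with python-rados; the conffile path and keyring availability are assumptions for this host:

    # Issuing the {"prefix":"df","format":"json"} command seen in the audit log.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_bytes"])  # 60 GiB cluster, per pgmap above
    cluster.shutdown()
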
Oct 02 19:57:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:21 compute-0 ceph-mon[191910]: pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:24 compute-0 nova_compute[355794]: 2025-10-02 19:57:24.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:24 compute-0 ceph-mon[191910]: pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:25 compute-0 nova_compute[355794]: 2025-10-02 19:57:25.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:26 compute-0 ceph-mon[191910]: pgmap v1569: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:27 compute-0 sudo[437788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:27 compute-0 sudo[437788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:27 compute-0 sudo[437788]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:28 compute-0 sudo[437813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:57:28 compute-0 sudo[437813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:28 compute-0 sudo[437813]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:28 compute-0 sudo[437838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:28 compute-0 sudo[437838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:28 compute-0 sudo[437838]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:28 compute-0 ceph-mon[191910]: pgmap v1570: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:28 compute-0 sudo[437863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:57:28 compute-0 sudo[437863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:28 compute-0 sudo[437863]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:29 compute-0 nova_compute[355794]: 2025-10-02 19:57:29.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c80f45e4-f6a4-40b1-81a1-e965914ee814 does not exist
Oct 02 19:57:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5f6982c0-a512-4617-bd6d-164bd97f616e does not exist
Oct 02 19:57:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 90704bf2-955f-4312-a9b3-dd4e6d6f83d9 does not exist
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:57:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:29 compute-0 sudo[437918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:29 compute-0 sudo[437918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:29 compute-0 sudo[437918]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:57:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:57:29 compute-0 podman[437942]: 2025-10-02 19:57:29.312759389 +0000 UTC m=+0.085155394 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:57:29 compute-0 sudo[437949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:57:29 compute-0 sudo[437949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:29 compute-0 sudo[437949]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:29 compute-0 sudo[437986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:29 compute-0 sudo[437986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:29 compute-0 sudo[437986]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:29 compute-0 sudo[438011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:57:29 compute-0 sudo[438011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:29 compute-0 podman[157186]: time="2025-10-02T19:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:57:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:57:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.105866719 +0000 UTC m=+0.098648903 container create a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.063685528 +0000 UTC m=+0.056467802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:30 compute-0 systemd[1]: Started libpod-conmon-a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca.scope.
Oct 02 19:57:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.257542491 +0000 UTC m=+0.250324715 container init a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.268360528 +0000 UTC m=+0.261142722 container start a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.27557492 +0000 UTC m=+0.268357124 container attach a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:57:30 compute-0 tender_bouman[438091]: 167 167
Oct 02 19:57:30 compute-0 systemd[1]: libpod-a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca.scope: Deactivated successfully.
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.280623684 +0000 UTC m=+0.273405888 container died a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:57:30 compute-0 ceph-mon[191910]: pgmap v1571: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-900354d8b5b894ef57ab8fc8c59bbae1493e78a103ef6b1f4bcf6140751cd47b-merged.mount: Deactivated successfully.
Oct 02 19:57:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:30 compute-0 podman[438075]: 2025-10-02 19:57:30.357624891 +0000 UTC m=+0.350407095 container remove a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 19:57:30 compute-0 systemd[1]: libpod-conmon-a632a87d43614de9723e50f68265e2b4729ed7e385b53b8955b81148801ddaca.scope: Deactivated successfully.
Oct 02 19:57:30 compute-0 nova_compute[355794]: 2025-10-02 19:57:30.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:30 compute-0 podman[438117]: 2025-10-02 19:57:30.612097604 +0000 UTC m=+0.069406916 container create ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:57:30 compute-0 podman[438117]: 2025-10-02 19:57:30.577615808 +0000 UTC m=+0.034925150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:30 compute-0 systemd[1]: Started libpod-conmon-ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650.scope.
Oct 02 19:57:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:30 compute-0 podman[438117]: 2025-10-02 19:57:30.786304264 +0000 UTC m=+0.243613656 container init ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:57:30 compute-0 podman[438117]: 2025-10-02 19:57:30.803421419 +0000 UTC m=+0.260730741 container start ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:57:30 compute-0 podman[438117]: 2025-10-02 19:57:30.808174916 +0000 UTC m=+0.265484318 container attach ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.002 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.002 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.005 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.006 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.006 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.007 2 INFO nova.compute.manager [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Terminating instance
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.009 2 DEBUG nova.compute.manager [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:57:31 compute-0 kernel: tap55da210c-64 (unregistering): left promiscuous mode
Oct 02 19:57:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:31 compute-0 NetworkManager[44968]: <info>  [1759435051.1678] device (tap55da210c-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:57:31 compute-0 ovn_controller[88435]: 2025-10-02T19:57:31Z|00054|binding|INFO|Releasing lport 55da210c-644a-4f1e-8f20-ee3303b72db2 from this chassis (sb_readonly=0)
Oct 02 19:57:31 compute-0 ovn_controller[88435]: 2025-10-02T19:57:31Z|00055|binding|INFO|Setting lport 55da210c-644a-4f1e-8f20-ee3303b72db2 down in Southbound
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 ovn_controller[88435]: 2025-10-02T19:57:31Z|00056|binding|INFO|Removing iface tap55da210c-64 ovn-installed in OVS
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 02 19:57:31 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 47.470s CPU time.
Oct 02 19:57:31 compute-0 systemd-machined[137646]: Machine qemu-3-instance-00000003 terminated.
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.292 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:df:6b 192.168.0.207'], port_security=['fa:16:3e:c5:df:6b 192.168.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-uxhkceofcvut-4we5flt73ruq-port-h6neckvltb4i', 'neutron:cidrs': '192.168.0.207/24', 'neutron:device_id': 'b88114e8-b15d-4a78-ac15-3dd7ee30b949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-uxhkceofcvut-4we5flt73ruq-port-h6neckvltb4i', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=55da210c-644a-4f1e-8f20-ee3303b72db2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.303 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 55da210c-644a-4f1e-8f20-ee3303b72db2 in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 unbound from our chassis
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.309 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.325 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5a69f9-e117-4290-809e-73aa370c2d06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.360 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[6adff5e7-9ba8-4053-9bad-a334d0a8960d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.364 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d99fe6-6b27-44c9-ada0-30598d9a1021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.401 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[170789db-efe7-4c92-88af-5ef8e2b3bb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: ERROR   19:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:57:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.476 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[7041aedd-3f41-4ed6-8b30-55cfbce7a8ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 832, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 37506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438150, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.480 2 INFO nova.virt.libvirt.driver [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Instance destroyed successfully.
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.481 2 DEBUG nova.objects.instance [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'resources' on Instance uuid b88114e8-b15d-4a78-ac15-3dd7ee30b949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.495 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[114ac3fd-7d93-4420-a7e6-13feae8b15ac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438160, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438160, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.497 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.510 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.510 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.511 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:31.511 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.515 2 DEBUG nova.virt.libvirt.vif [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:49:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-uxhkceofcvut-4we5flt73ruq-vnf-oj5mzexrwavf',id=3,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:49:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-vgn5al0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:49:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc3y5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:57:31 compute-0 nova_compute[355794]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjYxODAyMzY4NzU5NDQ0OTU4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY2MTgwMjM2ODc1OTQ0NDk1ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NjE4MDIzNjg3NTk0NDQ5NTgwPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=b88114e8-b15d-4a78-ac15-3dd7ee30b949,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.517 2 DEBUG nova.network.os_vif_util [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.518 2 DEBUG nova.network.os_vif_util [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.518 2 DEBUG os_vif [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55da210c-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:31 compute-0 nova_compute[355794]: 2025-10-02 19:57:31.528 2 INFO os_vif [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:df:6b,bridge_name='br-int',has_traffic_filtering=True,id=55da210c-644a-4f1e-8f20-ee3303b72db2,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap55da210c-64')
Oct 02 19:57:31 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:57:31.515 2 DEBUG nova.virt.libvirt.vif [None req-41fa08f4-9822-44 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:57:32 compute-0 intelligent_neumann[438133]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:57:32 compute-0 intelligent_neumann[438133]: --> relative data size: 1.0
Oct 02 19:57:32 compute-0 intelligent_neumann[438133]: --> All data devices are unavailable
Oct 02 19:57:32 compute-0 systemd[1]: libpod-ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650.scope: Deactivated successfully.
Oct 02 19:57:32 compute-0 systemd[1]: libpod-ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650.scope: Consumed 1.175s CPU time.
Oct 02 19:57:32 compute-0 podman[438117]: 2025-10-02 19:57:32.097484994 +0000 UTC m=+1.554794336 container died ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69da28c1558fc37a1d0c9554d3b55e609f284dbddf62281e1bcdf62347e1fb1-merged.mount: Deactivated successfully.
Oct 02 19:57:32 compute-0 podman[438117]: 2025-10-02 19:57:32.224629893 +0000 UTC m=+1.681939235 container remove ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:57:32 compute-0 systemd[1]: libpod-conmon-ef0c450f63a2364fc34798fa647c6d756c469da049a88bc720776b7ca30ed650.scope: Deactivated successfully.
Oct 02 19:57:32 compute-0 sudo[438011]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:32.310 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:32.311 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:32.311 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:32 compute-0 ceph-mon[191910]: pgmap v1572: 321 pgs: 321 active+clean; 201 MiB data, 326 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:32 compute-0 sudo[438217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:32 compute-0 sudo[438217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:32 compute-0 sudo[438217]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:32 compute-0 sudo[438242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:57:32 compute-0 sudo[438242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:32 compute-0 sudo[438242]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.498 2 DEBUG nova.compute.manager [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-vif-unplugged-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.500 2 DEBUG oslo_concurrency.lockutils [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.501 2 DEBUG oslo_concurrency.lockutils [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.502 2 DEBUG oslo_concurrency.lockutils [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.503 2 DEBUG nova.compute.manager [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] No waiting events found dispatching network-vif-unplugged-55da210c-644a-4f1e-8f20-ee3303b72db2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:57:32 compute-0 nova_compute[355794]: 2025-10-02 19:57:32.504 2 DEBUG nova.compute.manager [req-7091b5bb-77a5-432f-bc29-eb09cd2ad208 req-eb400f0a-5a1c-48e9-bc36-8bea252348f4 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-vif-unplugged-55da210c-644a-4f1e-8f20-ee3303b72db2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:57:32 compute-0 sudo[438267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:32 compute-0 sudo[438267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:32 compute-0 sudo[438267]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:32 compute-0 sudo[438292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:57:32 compute-0 sudo[438292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.129 2 INFO nova.virt.libvirt.driver [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Deleting instance files /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949_del
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.131 2 INFO nova.virt.libvirt.driver [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Deletion of /var/lib/nova/instances/b88114e8-b15d-4a78-ac15-3dd7ee30b949_del complete
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 192 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.203 2 INFO nova.compute.manager [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Took 2.19 seconds to destroy the instance on the hypervisor.
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.204 2 DEBUG oslo.service.loopingcall [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.205 2 DEBUG nova.compute.manager [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.205 2 DEBUG nova.network.neutron [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.294056566 +0000 UTC m=+0.087957468 container create 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.257882675 +0000 UTC m=+0.051783647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:33 compute-0 systemd[1]: Started libpod-conmon-96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303.scope.
Oct 02 19:57:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:33.392 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:57:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:33.394 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.411005725 +0000 UTC m=+0.204906657 container init 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.422516681 +0000 UTC m=+0.216417563 container start 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.426 2 DEBUG nova.compute.manager [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-changed-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.426 2 DEBUG nova.compute.manager [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Refreshing instance network info cache due to event network-changed-55da210c-644a-4f1e-8f20-ee3303b72db2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.427037061 +0000 UTC m=+0.220937943 container attach 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.427 2 DEBUG oslo_concurrency.lockutils [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.427 2 DEBUG oslo_concurrency.lockutils [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:57:33 compute-0 nova_compute[355794]: 2025-10-02 19:57:33.427 2 DEBUG nova.network.neutron [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Refreshing network info cache for port 55da210c-644a-4f1e-8f20-ee3303b72db2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:57:33 compute-0 blissful_heyrovsky[438370]: 167 167
Oct 02 19:57:33 compute-0 systemd[1]: libpod-96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303.scope: Deactivated successfully.
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.434257713 +0000 UTC m=+0.228158585 container died 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-54dcbfe8952f49d071fdf664b96c21f14355317ba22755bc561b6fc6da10c06f-merged.mount: Deactivated successfully.
Oct 02 19:57:33 compute-0 podman[438354]: 2025-10-02 19:57:33.498695805 +0000 UTC m=+0.292596697 container remove 96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:57:33 compute-0 systemd[1]: libpod-conmon-96c8c9f8f77d9e3aa70442465168b9ff65a5c0c855526d16744b1d382dc4d303.scope: Deactivated successfully.
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:57:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:57:33 compute-0 podman[438392]: 2025-10-02 19:57:33.819742809 +0000 UTC m=+0.101660553 container create 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:57:33 compute-0 podman[438392]: 2025-10-02 19:57:33.77504076 +0000 UTC m=+0.056958564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:33 compute-0 systemd[1]: Started libpod-conmon-9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b.scope.
Oct 02 19:57:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7fbf88f1ec95ef08ddfa61969e65f56438c1cfbf2d83727c10f3cbd89fc0a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7fbf88f1ec95ef08ddfa61969e65f56438c1cfbf2d83727c10f3cbd89fc0a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7fbf88f1ec95ef08ddfa61969e65f56438c1cfbf2d83727c10f3cbd89fc0a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7fbf88f1ec95ef08ddfa61969e65f56438c1cfbf2d83727c10f3cbd89fc0a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:34 compute-0 podman[438392]: 2025-10-02 19:57:34.003838512 +0000 UTC m=+0.285756286 container init 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:57:34 compute-0 podman[438392]: 2025-10-02 19:57:34.026587236 +0000 UTC m=+0.308504950 container start 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:34 compute-0 podman[438392]: 2025-10-02 19:57:34.032341289 +0000 UTC m=+0.314259033 container attach 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:57:34 compute-0 ceph-mon[191910]: pgmap v1573: 321 pgs: 321 active+clean; 192 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.839 2 DEBUG nova.compute.manager [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.840 2 DEBUG oslo_concurrency.lockutils [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.842 2 DEBUG oslo_concurrency.lockutils [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.843 2 DEBUG oslo_concurrency.lockutils [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.843 2 DEBUG nova.compute.manager [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] No waiting events found dispatching network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.844 2 WARNING nova.compute.manager [req-3944bee0-0d9b-47cd-8c3c-46bfe1d2cfd8 req-b6dedb13-fa98-435a-b82e-172a73176b41 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Received unexpected event network-vif-plugged-55da210c-644a-4f1e-8f20-ee3303b72db2 for instance with vm_state active and task_state deleting.
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.849 2 DEBUG nova.network.neutron [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updated VIF entry in instance network info cache for port 55da210c-644a-4f1e-8f20-ee3303b72db2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.850 2 DEBUG nova.network.neutron [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [{"id": "55da210c-644a-4f1e-8f20-ee3303b72db2", "address": "fa:16:3e:c5:df:6b", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55da210c-64", "ovs_interfaceid": "55da210c-644a-4f1e-8f20-ee3303b72db2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:57:34 compute-0 nova_compute[355794]: 2025-10-02 19:57:34.881 2 DEBUG oslo_concurrency.lockutils [req-1ac04db3-4c39-4ea6-9148-4fef25de720a req-590723d9-0f4c-403b-a4ac-c2891c2422fa 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-b88114e8-b15d-4a78-ac15-3dd7ee30b949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:57:35 compute-0 nifty_kilby[438408]: {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     "0": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "devices": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "/dev/loop3"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             ],
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_name": "ceph_lv0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_size": "21470642176",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "name": "ceph_lv0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "tags": {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_name": "ceph",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.crush_device_class": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.encrypted": "0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_id": "0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.vdo": "0"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             },
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "vg_name": "ceph_vg0"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         }
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     ],
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     "1": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "devices": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "/dev/loop4"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             ],
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_name": "ceph_lv1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_size": "21470642176",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "name": "ceph_lv1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "tags": {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_name": "ceph",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.crush_device_class": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.encrypted": "0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_id": "1",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.vdo": "0"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             },
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "vg_name": "ceph_vg1"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         }
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     ],
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     "2": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "devices": [
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "/dev/loop5"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             ],
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_name": "ceph_lv2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_size": "21470642176",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "name": "ceph_lv2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "tags": {
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.cluster_name": "ceph",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.crush_device_class": "",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.encrypted": "0",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osd_id": "2",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:                 "ceph.vdo": "0"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             },
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "type": "block",
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:             "vg_name": "ceph_vg2"
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:         }
Oct 02 19:57:35 compute-0 nifty_kilby[438408]:     ]
Oct 02 19:57:35 compute-0 nifty_kilby[438408]: }
Oct 02 19:57:35 compute-0 systemd[1]: libpod-9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b.scope: Deactivated successfully.
Oct 02 19:57:35 compute-0 podman[438392]: 2025-10-02 19:57:35.074964181 +0000 UTC m=+1.356881915 container died 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 19:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc7fbf88f1ec95ef08ddfa61969e65f56438c1cfbf2d83727c10f3cbd89fc0a7-merged.mount: Deactivated successfully.
Oct 02 19:57:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 160 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 682 B/s wr, 33 op/s
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.186 2 DEBUG nova.network.neutron [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:57:35 compute-0 podman[438392]: 2025-10-02 19:57:35.191716434 +0000 UTC m=+1.473634188 container remove 9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:57:35 compute-0 systemd[1]: libpod-conmon-9958bbb8de4da41b78b153458f243615f17916a8f603b959bcff27eb695c3d7b.scope: Deactivated successfully.
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.207 2 INFO nova.compute.manager [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Took 2.00 seconds to deallocate network for instance.
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.245 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.246 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:35 compute-0 sudo[438292]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.354 2 DEBUG oslo_concurrency.processutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:35 compute-0 sudo[438427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:35 compute-0 sudo[438427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:35 compute-0 sudo[438427]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:35 compute-0 sudo[438453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:57:35 compute-0 sudo[438453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:35 compute-0 sudo[438453]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:35 compute-0 sudo[438497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:35 compute-0 sudo[438497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:35 compute-0 sudo[438497]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:35 compute-0 sudo[438522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:57:35 compute-0 sudo[438522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:57:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808777034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.870 2 DEBUG oslo_concurrency.processutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.881 2 DEBUG nova.compute.provider_tree [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.918 2 DEBUG nova.scheduler.client.report [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:57:35 compute-0 nova_compute[355794]: 2025-10-02 19:57:35.946 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:36 compute-0 nova_compute[355794]: 2025-10-02 19:57:36.027 2 INFO nova.scheduler.client.report [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Deleted allocations for instance b88114e8-b15d-4a78-ac15-3dd7ee30b949
Oct 02 19:57:36 compute-0 nova_compute[355794]: 2025-10-02 19:57:36.121 2 DEBUG oslo_concurrency.lockutils [None req-41fa08f4-9822-4426-998a-3979b465cfd5 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "b88114e8-b15d-4a78-ac15-3dd7ee30b949" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:36 compute-0 ceph-mon[191910]: pgmap v1574: 321 pgs: 321 active+clean; 160 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 682 B/s wr, 33 op/s
Oct 02 19:57:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3808777034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.392623022 +0000 UTC m=+0.083701046 container create d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.363030045 +0000 UTC m=+0.054108049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:36 compute-0 systemd[1]: Started libpod-conmon-d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1.scope.
Oct 02 19:57:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:36 compute-0 nova_compute[355794]: 2025-10-02 19:57:36.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.547831757 +0000 UTC m=+0.238909791 container init d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.567431398 +0000 UTC m=+0.258509422 container start d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.574258949 +0000 UTC m=+0.265336963 container attach d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:57:36 compute-0 agitated_cerf[438600]: 167 167
Oct 02 19:57:36 compute-0 systemd[1]: libpod-d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1.scope: Deactivated successfully.
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.581927743 +0000 UTC m=+0.273005767 container died d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3bfd07dba1cac82fbc4b3662cdbc6673d01082241b67625ba758045305c4210-merged.mount: Deactivated successfully.
Oct 02 19:57:36 compute-0 podman[438584]: 2025-10-02 19:57:36.655087258 +0000 UTC m=+0.346165242 container remove d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 19:57:36 compute-0 systemd[1]: libpod-conmon-d806006eb44e656663be453f10c8304f1c8b552e3adc1ab10d2908f1d89ddaa1.scope: Deactivated successfully.
Oct 02 19:57:36 compute-0 podman[438622]: 2025-10-02 19:57:36.885638095 +0000 UTC m=+0.066611321 container create 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 19:57:36 compute-0 systemd[1]: Started libpod-conmon-468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0.scope.
Oct 02 19:57:36 compute-0 podman[438622]: 2025-10-02 19:57:36.856302576 +0000 UTC m=+0.037275802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:57:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b24e1b40f7b5064d084b2f3bcfe836e1018b0cf1b40d1a8a987c98c0cd32f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b24e1b40f7b5064d084b2f3bcfe836e1018b0cf1b40d1a8a987c98c0cd32f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b24e1b40f7b5064d084b2f3bcfe836e1018b0cf1b40d1a8a987c98c0cd32f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b24e1b40f7b5064d084b2f3bcfe836e1018b0cf1b40d1a8a987c98c0cd32f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:57:36 compute-0 podman[438622]: 2025-10-02 19:57:36.99757069 +0000 UTC m=+0.178543896 container init 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:57:37 compute-0 podman[438622]: 2025-10-02 19:57:37.014020968 +0000 UTC m=+0.194994174 container start 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 19:57:37 compute-0 podman[438622]: 2025-10-02 19:57:37.020453819 +0000 UTC m=+0.201427015 container attach 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 19:57:37 compute-0 podman[438639]: 2025-10-02 19:57:37.052063119 +0000 UTC m=+0.101508349 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:57:37 compute-0 podman[438636]: 2025-10-02 19:57:37.068905256 +0000 UTC m=+0.123952035 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:57:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 19:57:38 compute-0 nervous_einstein[438640]: {
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_id": 1,
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "type": "bluestore"
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     },
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_id": 2,
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "type": "bluestore"
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     },
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_id": 0,
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:         "type": "bluestore"
Oct 02 19:57:38 compute-0 nervous_einstein[438640]:     }
Oct 02 19:57:38 compute-0 nervous_einstein[438640]: }
Oct 02 19:57:38 compute-0 systemd[1]: libpod-468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0.scope: Deactivated successfully.
Oct 02 19:57:38 compute-0 systemd[1]: libpod-468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0.scope: Consumed 1.233s CPU time.
Oct 02 19:57:38 compute-0 podman[438713]: 2025-10-02 19:57:38.35577702 +0000 UTC m=+0.067293310 container died 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:57:38 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:38.397 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:38 compute-0 ceph-mon[191910]: pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 19:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b24e1b40f7b5064d084b2f3bcfe836e1018b0cf1b40d1a8a987c98c0cd32f8-merged.mount: Deactivated successfully.
Oct 02 19:57:38 compute-0 podman[438713]: 2025-10-02 19:57:38.745619231 +0000 UTC m=+0.457135511 container remove 468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 19:57:38 compute-0 systemd[1]: libpod-conmon-468ca3d3e332754f7daa5889175a4b3552c7a8062ebe4bcc85e52873c6f9afa0.scope: Deactivated successfully.
Oct 02 19:57:38 compute-0 sudo[438522]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:57:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:57:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 33434a2a-ad95-4a04-911f-6e4a19b2ddaa does not exist
Oct 02 19:57:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e2f39581-a34c-4946-a139-00bc6fc836d4 does not exist
Oct 02 19:57:39 compute-0 nova_compute[355794]: 2025-10-02 19:57:39.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:39 compute-0 sudo[438728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:57:39 compute-0 sudo[438728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:39 compute-0 sudo[438728]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:39 compute-0 sudo[438753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:57:39 compute-0 sudo[438753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:57:39 compute-0 sudo[438753]: pam_unix(sudo:session): session closed for user root
Oct 02 19:57:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:57:39 compute-0 ceph-mon[191910]: pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:41 compute-0 nova_compute[355794]: 2025-10-02 19:57:41.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:42 compute-0 ceph-mon[191910]: pgmap v1577: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:42 compute-0 podman[438779]: 2025-10-02 19:57:42.709186566 +0000 UTC m=+0.123886513 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:57:42 compute-0 podman[438780]: 2025-10-02 19:57:42.713542532 +0000 UTC m=+0.122062605 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Oct 02 19:57:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:44 compute-0 nova_compute[355794]: 2025-10-02 19:57:44.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:44 compute-0 ceph-mon[191910]: pgmap v1578: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:57:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 19:57:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:45 compute-0 podman[438818]: 2025-10-02 19:57:45.710636559 +0000 UTC m=+0.126195755 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:57:45 compute-0 podman[438817]: 2025-10-02 19:57:45.734095483 +0000 UTC m=+0.157354744 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 19:57:45 compute-0 podman[438819]: 2025-10-02 19:57:45.761966174 +0000 UTC m=+0.176848882 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:57:46 compute-0 ceph-mon[191910]: pgmap v1579: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 19:57:46 compute-0 nova_compute[355794]: 2025-10-02 19:57:46.467 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435051.466649, b88114e8-b15d-4a78-ac15-3dd7ee30b949 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:57:46 compute-0 nova_compute[355794]: 2025-10-02 19:57:46.468 2 INFO nova.compute.manager [-] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] VM Stopped (Lifecycle Event)
Oct 02 19:57:46 compute-0 nova_compute[355794]: 2025-10-02 19:57:46.498 2 DEBUG nova.compute.manager [None req-d24028e1-c477-49fa-bd0a-d3d7ab1ac364 - - - - - -] [instance: b88114e8-b15d-4a78-ac15-3dd7ee30b949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:57:46 compute-0 nova_compute[355794]: 2025-10-02 19:57:46.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 02 19:57:48 compute-0 ceph-mon[191910]: pgmap v1580: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 02 19:57:48 compute-0 podman[438877]: 2025-10-02 19:57:48.717743812 +0000 UTC m=+0.128010183 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:57:48 compute-0 podman[438876]: 2025-10-02 19:57:48.724975694 +0000 UTC m=+0.148302232 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible)
Oct 02 19:57:49 compute-0 nova_compute[355794]: 2025-10-02 19:57:49.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 19:57:49 compute-0 nova_compute[355794]: 2025-10-02 19:57:49.592 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:49 compute-0 nova_compute[355794]: 2025-10-02 19:57:49.593 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:57:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:50 compute-0 ceph-mon[191910]: pgmap v1581: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 341 B/s wr, 3 op/s
Oct 02 19:57:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:51 compute-0 nova_compute[355794]: 2025-10-02 19:57:51.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:52 compute-0 ceph-mon[191910]: pgmap v1582: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:54 compute-0 ceph-mon[191910]: pgmap v1583: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.919 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.920 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.921 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.922 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.959 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.960 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.960 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.961 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.961 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.962 2 INFO nova.compute.manager [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Terminating instance
Oct 02 19:57:54 compute-0 nova_compute[355794]: 2025-10-02 19:57:54.964 2 DEBUG nova.compute.manager [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:57:55 compute-0 kernel: tap90a967c2-93 (unregistering): left promiscuous mode
Oct 02 19:57:55 compute-0 NetworkManager[44968]: <info>  [1759435075.1461] device (tap90a967c2-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:57:55 compute-0 ovn_controller[88435]: 2025-10-02T19:57:55Z|00057|binding|INFO|Releasing lport 90a967c2-93a2-4057-add0-3bebfcb9615a from this chassis (sb_readonly=0)
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 ovn_controller[88435]: 2025-10-02T19:57:55Z|00058|binding|INFO|Setting lport 90a967c2-93a2-4057-add0-3bebfcb9615a down in Southbound
Oct 02 19:57:55 compute-0 ovn_controller[88435]: 2025-10-02T19:57:55Z|00059|binding|INFO|Removing iface tap90a967c2-93 ovn-installed in OVS
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.178 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8f:b8 192.168.0.24'], port_security=['fa:16:3e:5d:8f:b8 192.168.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-rlri6wkgthwg-l57hn7k2t5oc-62etg35sdh32-port-ioebu7t2mzpe', 'neutron:cidrs': '192.168.0.24/24', 'neutron:device_id': '58f8959a-5f7e-44a5-9dca-65be0506a4c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-rlri6wkgthwg-l57hn7k2t5oc-62etg35sdh32-port-ioebu7t2mzpe', 'neutron:project_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '457fbfbc-bc7f-4eb1-986f-4ac88ca280aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a344602b-6814-4d50-9f7f-4fc26c3ad853, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=90a967c2-93a2-4057-add0-3bebfcb9615a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.180 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 90a967c2-93a2-4057-add0-3bebfcb9615a in datapath 6e3c6c60-2fbc-4181-942a-00056fc667f2 unbound from our chassis
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.183 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e3c6c60-2fbc-4181-942a-00056fc667f2
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.219 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d9377abe-18e9-4e1a-9bf6-4e2393a647a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 02 19:57:55 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 34.660s CPU time.
Oct 02 19:57:55 compute-0 systemd-machined[137646]: Machine qemu-4-instance-00000004 terminated.
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.275 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[e192c15b-65aa-47a0-a51b-50f9fd008cec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.280 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[52b4d97b-6e04-4bff-8f6e-b8ee94592888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.329 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[82a561be-075e-4eed-9e82-50cfa6e41315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.360 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[293ffc05-8bc7-4d8b-91ef-db0f18ca0f72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e3c6c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:df:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 832, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 832, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545739, 'reachable_time': 37506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438930, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.386 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[dd374f35-024f-4273-b9f8-222f139a4bac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545752, 'tstamp': 545752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438931, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap6e3c6c60-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545756, 'tstamp': 545756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438931, 'error': None, 'target': 'ovnmeta-6e3c6c60-2fbc-4181-942a-00056fc667f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.388 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3c6c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.402 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3c6c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.402 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.403 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e3c6c60-20, col_values=(('external_ids', {'iface-id': '4a8af5bc-5352-4506-b6d3-43b5d33802a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:57:55 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:57:55.404 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.426 2 INFO nova.virt.libvirt.driver [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Instance destroyed successfully.
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.427 2 DEBUG nova.objects.instance [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'resources' on Instance uuid 58f8959a-5f7e-44a5-9dca-65be0506a4c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.444 2 DEBUG nova.virt.libvirt.vif [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-wkgthwg-l57hn7k2t5oc-62etg35sdh32-vnf-wjcrezdtgbmg',id=4,image_ref='ce28338d-119e-49e1-ab67-60da8882593a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:51:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d2d7e2b0-01e0-44b1-b2c7-fe502b333743'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c35486f37b94d43a7bf2f2fa09c70b9',ramdisk_id='',reservation_id='r-0clyp0yi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ce28338d-119e-49e1-ab67-60da8882593a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:51:51Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc3k1weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:57:55 compute-0 nova_compute[355794]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc1OTMwMTgwOTIzNzM5NzI5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NTkzMDE4MDkyMzczOTcyOTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzU5MzAxODA5MjM3Mzk3MjkxPT0tLQo=',user_id='811fb7ac717e4ba9b9874e5454ee08f4',uuid=58f8959a-5f7e-44a5-9dca-65be0506a4c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.445 2 DEBUG nova.network.os_vif_util [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converting VIF {"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.446 2 DEBUG nova.network.os_vif_util [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.446 2 DEBUG os_vif [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90a967c2-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
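The DelPortCommand above is ovsdbapp's transactional API at work. A minimal sketch of issuing the same delete directly, assuming the conventional local ovsdb-server socket at unix:/run/openvswitch/db.sock:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # equivalent of DelPortCommand(port=tap90a967c2-93, bridge=br-int, if_exists=True)
    api.del_port('tap90a967c2-93', bridge='br-int',
                 if_exists=True).execute(check_error=True)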
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.461 2 INFO os_vif [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8f:b8,bridge_name='br-int',has_traffic_filtering=True,id=90a967c2-93a2-4057-add0-3bebfcb9615a,network=Network(6e3c6c60-2fbc-4181-942a-00056fc667f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap90a967c2-93')
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.761 2 DEBUG nova.compute.manager [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-vif-unplugged-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.761 2 DEBUG oslo_concurrency.lockutils [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.761 2 DEBUG oslo_concurrency.lockutils [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.762 2 DEBUG oslo_concurrency.lockutils [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.762 2 DEBUG nova.compute.manager [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] No waiting events found dispatching network-vif-unplugged-90a967c2-93a2-4057-add0-3bebfcb9615a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:57:55 compute-0 nova_compute[355794]: 2025-10-02 19:57:55.763 2 DEBUG nova.compute.manager [req-59758665-15d2-4977-9cc9-9c485f189708 req-f39c4ea7-7ade-448a-bea9-6e2553f29397 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-vif-unplugged-90a967c2-93a2-4057-add0-3bebfcb9615a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
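The Acquiring/acquired/released triplet around that event is oslo.concurrency's named lock: nova serializes event handling per instance by locking "<uuid>-events". A sketch of the same pattern:

    from oslo_concurrency import lockutils

    instance_uuid = '58f8959a-5f7e-44a5-9dca-65be0506a4c1'

    # one in-process lock per instance, as in InstanceEvents._pop_event
    with lockutils.lock(instance_uuid + '-events'):
        # pop or record network-vif-unplugged-... here
        pass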
Oct 02 19:57:55 compute-0 rsyslogd[187702]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:57:55.444 2 DEBUG nova.virt.libvirt.vif [None req-45115917-3128-4f [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:57:56 compute-0 ceph-mon[191910]: pgmap v1584: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.750 2 INFO nova.virt.libvirt.driver [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Deleting instance files /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1_del
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.752 2 INFO nova.virt.libvirt.driver [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Deletion of /var/lib/nova/instances/58f8959a-5f7e-44a5-9dca-65be0506a4c1_del complete
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.814 2 INFO nova.compute.manager [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Took 1.85 seconds to destroy the instance on the hypervisor.
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.814 2 DEBUG oslo.service.loopingcall [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.815 2 DEBUG nova.compute.manager [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:57:56 compute-0 nova_compute[355794]: 2025-10-02 19:57:56.815 2 DEBUG nova.network.neutron [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:57:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.841 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.847 2 DEBUG nova.compute.manager [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.847 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.848 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.848 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.848 2 DEBUG nova.compute.manager [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] No waiting events found dispatching network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.849 2 WARNING nova.compute.manager [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received unexpected event network-vif-plugged-90a967c2-93a2-4057-add0-3bebfcb9615a for instance with vm_state active and task_state deleting.
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.849 2 DEBUG nova.compute.manager [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Received event network-changed-90a967c2-93a2-4057-add0-3bebfcb9615a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.849 2 DEBUG nova.compute.manager [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Refreshing instance network info cache due to event network-changed-90a967c2-93a2-4057-add0-3bebfcb9615a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.850 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.850 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.851 2 DEBUG nova.network.neutron [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Refreshing network info cache for port 90a967c2-93a2-4057-add0-3bebfcb9615a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.868 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.869 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.871 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:57 compute-0 nova_compute[355794]: 2025-10-02 19:57:57.877 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.415 2 DEBUG nova.network.neutron [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.444 2 INFO nova.compute.manager [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Took 1.63 seconds to deallocate network for instance.
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.496 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.497 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.600 2 DEBUG oslo_concurrency.processutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:58 compute-0 ceph-mon[191910]: pgmap v1585: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 6 op/s
Oct 02 19:57:58 compute-0 nova_compute[355794]: 2025-10-02 19:57:58.651 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.055 2 DEBUG nova.network.neutron [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updated VIF entry in instance network info cache for port 90a967c2-93a2-4057-add0-3bebfcb9615a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.055 2 DEBUG nova.network.neutron [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Updating instance_info_cache with network_info: [{"id": "90a967c2-93a2-4057-add0-3bebfcb9615a", "address": "fa:16:3e:5d:8f:b8", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90a967c2-93", "ovs_interfaceid": "90a967c2-93a2-4057-add0-3bebfcb9615a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.101 2 DEBUG oslo_concurrency.lockutils [req-43baf991-acf4-4633-a656-88f13c09cfb0 req-4d6e54fa-414f-44e9-9067-0cd7c4dd739a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-58f8959a-5f7e-44a5-9dca-65be0506a4c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:57:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:57:59 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144039009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.138 2 DEBUG oslo_concurrency.processutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
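The resource tracker shells out to ceph df rather than using librados here; the JSON reply carries cluster totals under "stats". A sketch of the same call via oslo.concurrency, assuming the usual stats/total_* keys of ceph df --format=json:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])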
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.152 2 DEBUG nova.compute.provider_tree [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.177 2 DEBUG nova.scheduler.client.report [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
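Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, which is why this 8-vCPU, 7679 MB, 59 GB host can hold 32 vCPUs of allocations but only about 52 GB of disk:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2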
Oct 02 19:57:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 87 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 341 B/s wr, 29 op/s
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.250 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.254 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.255 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.255 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.256 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.370 2 INFO nova.scheduler.client.report [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Deleted allocations for instance 58f8959a-5f7e-44a5-9dca-65be0506a4c1
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.618 2 DEBUG oslo_concurrency.lockutils [None req-45115917-3128-4fbe-9957-1ad8b4000b60 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "58f8959a-5f7e-44a5-9dca-65be0506a4c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:59 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4144039009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:57:59 compute-0 ceph-mon[191910]: pgmap v1586: 321 pgs: 321 active+clean; 87 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 341 B/s wr, 29 op/s
Oct 02 19:57:59 compute-0 podman[439004]: 2025-10-02 19:57:59.695962295 +0000 UTC m=+0.109118501 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:57:59 compute-0 podman[157186]: time="2025-10-02T19:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:57:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:57:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9064 "" "Go-http-client/1.1"
Oct 02 19:57:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:57:59 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1927353061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.855 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.982 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.983 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:57:59 compute-0 nova_compute[355794]: 2025-10-02 19:57:59.983 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:58:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.575 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.578 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3862MB free_disk=59.95184326171875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.578 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.579 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1927353061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.703 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.704 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.704 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
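That final view is plain accounting: the one remaining allocation (instance d4e04444, logged a few lines up) plus the 512 MB reserved in the placement inventory reproduces the used_* figures, since the tracker folds reserved host memory into used_ram:

    alloc = {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}  # instance d4e04444
    reserved_mb = 512  # MEMORY_MB reserved in the inventory above
    print('used_ram   =', alloc['MEMORY_MB'] + reserved_mb, 'MB')  # 1024MB
    print('used_disk  =', alloc['DISK_GB'], 'GB')                  # 2GB
    print('used_vcpus =', alloc['VCPU'])                           # 1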
Oct 02 19:58:00 compute-0 nova_compute[355794]: 2025-10-02 19:58:00.744 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:58:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252568538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:58:01 compute-0 nova_compute[355794]: 2025-10-02 19:58:01.268 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:01 compute-0 nova_compute[355794]: 2025-10-02 19:58:01.279 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:58:01 compute-0 nova_compute[355794]: 2025-10-02 19:58:01.301 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:58:01 compute-0 nova_compute[355794]: 2025-10-02 19:58:01.303 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:58:01 compute-0 nova_compute[355794]: 2025-10-02 19:58:01.303 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:01 compute-0 openstack_network_exporter[372736]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:01 compute-0 openstack_network_exporter[372736]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:01 compute-0 openstack_network_exporter[372736]: ERROR   19:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:58:01 compute-0 openstack_network_exporter[372736]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:58:01 compute-0 openstack_network_exporter[372736]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
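The exporter errors above are failed appctl-style calls: each target needs a unixctl control socket in its rundir, and this node has none for ovn-northd and no userspace (netdev) datapath for the pmd commands. A sketch of the same probe, assuming the conventional /var/run/ovn socket naming:

    import glob
    import subprocess


    def appctl(sock_glob, *cmd):
        socks = glob.glob(sock_glob)
        if not socks:
            raise RuntimeError('no control socket files found for %s' % sock_glob)
        return subprocess.run(['ovs-appctl', '-t', socks[0], *cmd],
                              capture_output=True, text=True, check=True).stdout

    # ovn-northd only creates ovn-northd.<pid>.ctl on hosts where it runs
    print(appctl('/var/run/ovn/ovn-northd.*.ctl', 'version'))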
Oct 02 19:58:01 compute-0 ceph-mon[191910]: pgmap v1587: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:01 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1252568538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:58:02 compute-0 nova_compute[355794]: 2025-10-02 19:58:02.298 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:02 compute-0 nova_compute[355794]: 2025-10-02 19:58:02.299 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:02 compute-0 nova_compute[355794]: 2025-10-02 19:58:02.300 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:02 compute-0 nova_compute[355794]: 2025-10-02 19:58:02.300 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:58:03
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups']
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:04 compute-0 nova_compute[355794]: 2025-10-02 19:58:04.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:58:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:58:04 compute-0 ceph-mon[191910]: pgmap v1588: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:04 compute-0 nova_compute[355794]: 2025-10-02 19:58:04.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:05 compute-0 nova_compute[355794]: 2025-10-02 19:58:05.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:06 compute-0 ceph-mon[191910]: pgmap v1589: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:07 compute-0 podman[439049]: 2025-10-02 19:58:07.696673562 +0000 UTC m=+0.108757122 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:58:07 compute-0 podman[439050]: 2025-10-02 19:58:07.706520414 +0000 UTC m=+0.121393148 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Oct 02 19:58:08 compute-0 ceph-mon[191910]: pgmap v1590: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct 02 19:58:09 compute-0 nova_compute[355794]: 2025-10-02 19:58:09.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Oct 02 19:58:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:10 compute-0 nova_compute[355794]: 2025-10-02 19:58:10.422 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435075.4199297, 58f8959a-5f7e-44a5-9dca-65be0506a4c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:58:10 compute-0 nova_compute[355794]: 2025-10-02 19:58:10.422 2 INFO nova.compute.manager [-] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] VM Stopped (Lifecycle Event)
Oct 02 19:58:10 compute-0 nova_compute[355794]: 2025-10-02 19:58:10.448 2 DEBUG nova.compute.manager [None req-7dc6a839-2075-4fe4-99b4-0c2b8198f77c - - - - - -] [instance: 58f8959a-5f7e-44a5-9dca-65be0506a4c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:58:10 compute-0 nova_compute[355794]: 2025-10-02 19:58:10.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:10 compute-0 ceph-mon[191910]: pgmap v1591: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Oct 02 19:58:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.4 KiB/s wr, 10 op/s
Oct 02 19:58:12 compute-0 ceph-mon[191910]: pgmap v1592: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.4 KiB/s wr, 10 op/s
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:58:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
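Each pg_autoscaler pair above reads as: pg target = (fraction of subtree capacity the pool uses) x bias x the cluster-wide PG budget, then quantized to a power of two; the autoscaler only changes pg_num when target and current differ by more than its threshold (3x by default). A worked check, assuming the default mon_target_pg_per_osd=100 times the 3 OSDs this log reports later (osdmap: 3 total, 3 up, 3 in), i.e. a 300-PG budget — neither value is printed by the autoscaler itself:

    # Worked check of the pg_autoscaler lines above; the 300-PG budget assumes
    # the default mon_target_pg_per_osd=100 times the 3 OSDs in this cluster.
    def pg_target(capacity_ratio, bias, cluster_pg_budget=300):
        return capacity_ratio * bias * cluster_pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... -> pool '.mgr'
    print(pg_target(0.0005513950275118838, 1.0))   # 0.1654185... -> pool 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))   # 0.0006104... -> 'cephfs.cephfs.meta'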
Oct 02 19:58:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:13 compute-0 podman[439091]: 2025-10-02 19:58:13.709179904 +0000 UTC m=+0.134366062 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:58:13 compute-0 podman[439092]: 2025-10-02 19:58:13.713124059 +0000 UTC m=+0.124627013 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public)
Oct 02 19:58:14 compute-0 nova_compute[355794]: 2025-10-02 19:58:14.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 19:58:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Cumulative writes: 7279 writes, 32K keys, 7279 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 7279 writes, 7279 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1322 writes, 5980 keys, 1322 commit groups, 1.0 writes per commit group, ingest: 8.56 MB, 0.01 MB/s
                                            Interval WAL: 1322 writes, 1322 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.2      0.39              0.18        19    0.021       0      0       0.0       0.0
                                              L6      1/0    8.48 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    131.7    106.5      1.22              0.66        18    0.068     86K    10K       0.0       0.0
                                             Sum      1/0    8.48 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3     99.7    105.0      1.61              0.84        37    0.044     86K    10K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    108.7    112.6      0.35              0.20         8    0.044     22K   2519       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    131.7    106.5      1.22              0.66        18    0.068     86K    10K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.2      0.39              0.18        18    0.022       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.038, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.6 seconds
                                            Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 308.00 MB usage: 20.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000215 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1280,19.53 MB,6.34091%) FilterBlock(38,246.92 KB,0.0782905%) IndexBlock(38,443.83 KB,0.140723%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
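Two details in the block-cache lines of the dump above: occupancy prints 18446744073709551615, which is 2^64 - 1 (UINT64_MAX), i.e. an unpopulated sentinel for this cache type rather than a real count; and the entry-stats "portion" values are simply each entry class's size over the 308.00 MB capacity, as this check reproduces:

    # Reproduce the "portion" percentages: entry-class size over the
    # 308.00 MB cache capacity reported above.
    capacity_mb = 308.00
    print(19.53 / capacity_mb * 100)           # 6.3409...  -> DataBlock 6.34091%
    print(246.92 / 1024 / capacity_mb * 100)   # 0.07829... -> FilterBlock 0.0782905%
    print(443.83 / 1024 / capacity_mb * 100)   # 0.14072... -> IndexBlock 0.140723%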
Oct 02 19:58:14 compute-0 ceph-mon[191910]: pgmap v1593: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:15 compute-0 nova_compute[355794]: 2025-10-02 19:58:15.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:16 compute-0 ceph-mon[191910]: pgmap v1594: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:16 compute-0 podman[439132]: 2025-10-02 19:58:16.688927641 +0000 UTC m=+0.105780382 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 19:58:16 compute-0 podman[439131]: 2025-10-02 19:58:16.701998369 +0000 UTC m=+0.119863107 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:58:16 compute-0 podman[439133]: 2025-10-02 19:58:16.754822093 +0000 UTC m=+0.159370847 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:58:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:18 compute-0 ceph-mon[191910]: pgmap v1595: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:19 compute-0 nova_compute[355794]: 2025-10-02 19:58:19.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:19 compute-0 podman[439191]: 2025-10-02 19:58:19.727814361 +0000 UTC m=+0.142321264 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:58:19 compute-0 podman[439190]: 2025-10-02 19:58:19.737049516 +0000 UTC m=+0.154983680 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:58:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:58:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103339294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:58:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:58:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103339294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:58:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:20 compute-0 nova_compute[355794]: 2025-10-02 19:58:20.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:20 compute-0 ceph-mon[191910]: pgmap v1596: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1103339294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:58:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1103339294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
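The df and get-quota dispatches above are the periodic storage-stats poll from client.openstack (the Ceph client the OpenStack services use). The same mon commands can be reissued by hand; a sketch assuming the ceph CLI is installed and a usable keyring for that client exists:

    import json
    import subprocess

    # Reissue the two mon commands dispatched above (sketch; assumes the ceph
    # CLI is installed and a keyring for client.openstack is readable).
    def mon_cmd(*args):
        out = subprocess.run(
            ["ceph", "--name", "client.openstack", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    print(mon_cmd("df")["stats"]["total_bytes"])           # ~64 GB on this cluster
    print(mon_cmd("osd", "pool", "get-quota", "volumes"))  # quota_max_bytes/objects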
Oct 02 19:58:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:22 compute-0 ceph-mon[191910]: pgmap v1597: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:24 compute-0 nova_compute[355794]: 2025-10-02 19:58:24.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:24 compute-0 ceph-mon[191910]: pgmap v1598: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:25 compute-0 nova_compute[355794]: 2025-10-02 19:58:25.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:26 compute-0 sshd-session[439233]: Accepted publickey for zuul from 38.102.83.68 port 37098 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 19:58:26 compute-0 systemd-logind[793]: New session 63 of user zuul.
Oct 02 19:58:26 compute-0 systemd[1]: Started Session 63 of User zuul.
Oct 02 19:58:26 compute-0 sshd-session[439233]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:58:26 compute-0 ceph-mon[191910]: pgmap v1599: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:27 compute-0 sudo[439410]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxsdsvszaeuootgwmxpbrllrohzfpucg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759435106.4852078-57131-267255962517801/AnsiballZ_command.py'
Oct 02 19:58:27 compute-0 sudo[439410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:58:27 compute-0 python3[439412]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
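This is a Zuul/ansible verification step: list all containers and confirm ceilometer_agent_compute reports a healthy status. An equivalent without shell piping, using podman's JSON output (illustrative, not the exact task the playbook runs):

    import json
    import subprocess

    # Same probe as the ansible shell pipeline above, via podman's JSON output.
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for c in json.loads(out):
        if any("ceilometer_agent_compute" in n for n in c["Names"]):
            print(c["Names"][0], c["Status"])  # e.g. "Up 2 hours (healthy)"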
Oct 02 19:58:27 compute-0 ceph-mon[191910]: pgmap v1600: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:27 compute-0 sudo[439410]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:29 compute-0 nova_compute[355794]: 2025-10-02 19:58:29.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:29 compute-0 podman[157186]: time="2025-10-02T19:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:58:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:58:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9058 "" "Go-http-client/1.1"
Oct 02 19:58:30 compute-0 ovn_controller[88435]: 2025-10-02T19:58:30Z|00060|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 02 19:58:30 compute-0 ceph-mon[191910]: pgmap v1601: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:30 compute-0 nova_compute[355794]: 2025-10-02 19:58:30.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:30 compute-0 podman[439452]: 2025-10-02 19:58:30.71264992 +0000 UTC m=+0.130788537 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:58:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:31 compute-0 openstack_network_exporter[372736]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:31 compute-0 openstack_network_exporter[372736]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:31 compute-0 openstack_network_exporter[372736]: ERROR   19:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:58:31 compute-0 openstack_network_exporter[372736]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:58:31 compute-0 openstack_network_exporter[372736]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:58:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:58:32.311 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:58:32.312 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:58:32.313 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:32 compute-0 ceph-mon[191910]: pgmap v1602: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:58:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:58:34 compute-0 nova_compute[355794]: 2025-10-02 19:58:34.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:34 compute-0 ceph-mon[191910]: pgmap v1603: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:58:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 19:58:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:35 compute-0 nova_compute[355794]: 2025-10-02 19:58:35.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:36 compute-0 ceph-mon[191910]: pgmap v1604: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 19:58:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 19:58:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 02 19:58:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 02 19:58:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 02 19:58:38 compute-0 ceph-mon[191910]: pgmap v1605: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 19:58:38 compute-0 ceph-mon[191910]: osdmap e129: 3 total, 3 up, 3 in
Oct 02 19:58:38 compute-0 podman[439474]: 2025-10-02 19:58:38.752916547 +0000 UTC m=+0.168554461 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:58:38 compute-0 podman[439475]: 2025-10-02 19:58:38.76128408 +0000 UTC m=+0.168389627 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:58:39 compute-0 nova_compute[355794]: 2025-10-02 19:58:39.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Oct 02 19:58:39 compute-0 sudo[439519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:39 compute-0 sudo[439519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:39 compute-0 sudo[439519]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:39 compute-0 sudo[439544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:58:39 compute-0 sudo[439544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:39 compute-0 sudo[439544]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:39 compute-0 sudo[439569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:39 compute-0 sudo[439569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:39 compute-0 sudo[439569]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:39 compute-0 sudo[439594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:58:39 compute-0 sudo[439594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.357297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120357522, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 256, "total_data_size": 1343399, "memory_usage": 1362672, "flush_reason": "Manual Compaction"}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120373468, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1331227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32176, "largest_seqno": 33127, "table_properties": {"data_size": 1326441, "index_size": 2374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10162, "raw_average_key_size": 19, "raw_value_size": 1316815, "raw_average_value_size": 2493, "num_data_blocks": 106, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435031, "oldest_key_time": 1759435031, "file_creation_time": 1759435120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
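Every rocksdb EVENT_LOG_v1 record above embeds a JSON object after the marker, so these journal lines can be mined programmatically; a small sketch (the sample line is abridged from the flush_started event above):

    import json

    # Extract the JSON payload of a rocksdb EVENT_LOG_v1 record.
    def parse_event_log(line):
        _, sep, payload = line.partition("EVENT_LOG_v1 ")
        return json.loads(payload) if sep else None

    line = 'rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120357522, "job": 39, "event": "flush_started"}'
    ev = parse_event_log(line)
    print(ev["job"], ev["event"])  # 39 flush_started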
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 16189 microseconds, and 8832 cpu microseconds.
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.373519) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1331227 bytes OK
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.373540) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.375901) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.375914) EVENT_LOG_v1 {"time_micros": 1759435120375909, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.375931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1338819, prev total WAL file size 1338819, number of live WAL files 2.
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.376858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303031' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1300KB)], [71(8679KB)]
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120376889, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10218913, "oldest_snapshot_seqno": -1}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: pgmap v1607: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5346 keys, 10116288 bytes, temperature: kUnknown
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120430553, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10116288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10078038, "index_size": 23804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 134809, "raw_average_key_size": 25, "raw_value_size": 9978828, "raw_average_value_size": 1866, "num_data_blocks": 984, "num_entries": 5346, "num_filter_entries": 5346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.430806) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10116288 bytes
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.436834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.2 rd, 188.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.5 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(15.3) write-amplify(7.6) OK, records in: 5874, records dropped: 528 output_compression: NoCompression
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.436858) EVENT_LOG_v1 {"time_micros": 1759435120436846, "job": 40, "event": "compaction_finished", "compaction_time_micros": 53730, "compaction_time_cpu_micros": 22091, "output_level": 6, "num_output_files": 1, "total_output_size": 10116288, "num_input_records": 5874, "num_output_records": 5346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
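The compaction summary two lines up is self-consistent with JOB 40's event records: the amplification figures are measured against the 1,331,227-byte L0 input (table 73), and throughput against compaction_time_micros=53730:

    # Check JOB 40's summary line against its EVENT_LOG_v1 records.
    l0_in, total_in, total_out = 1331227, 10218913, 10116288
    secs = 53730e-6
    print(round(total_out / l0_in, 1))               # 7.6  -> write-amplify(7.6)
    print(round((total_in + total_out) / l0_in, 1))  # 15.3 -> read-write-amplify(15.3)
    print(round(total_in / secs / 1e6, 1))           # 190.2 -> MB/sec rd
    print(round(total_out / secs / 1e6, 1))          # 188.3 -> MB/sec wr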
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120437302, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435120439771, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.376658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.440037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.440047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.440050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.440053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:58:40.440056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:58:40 compute-0 nova_compute[355794]: 2025-10-02 19:58:40.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:40 compute-0 sudo[439594]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:40 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f50c8f1d-36f1-481c-97ef-5465b940b974 does not exist
Oct 02 19:58:40 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0ab44f20-d003-4ead-963a-90876d7a2711 does not exist
Oct 02 19:58:40 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev dbed47f6-e673-4193-8c80-5897638b47d2 does not exist
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:58:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:58:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
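[annotation] The handle_command/audit pairs above are the mgr (cephadm module) driving the monitor through structured JSON commands: generating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keys, persisting its osd_remove_queue under a config-key, and listing destroyed OSDs in the tree. The same interface is reachable from any client via librados; a minimal sketch with the python3-rados bindings, assuming a readable /etc/ceph/ceph.conf and admin keyring:

    import json
    import rados

    # Hedged sketch: issue the mon commands from the audit log through the
    # librados JSON command interface (python3-rados).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "config generate-minimal-conf"},
            {"prefix": "osd tree", "states": ["destroyed"], "format": "json"},
        ):
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, outs)
            print(outbuf.decode())
    finally:
        cluster.shutdown()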
Oct 02 19:58:40 compute-0 sudo[439649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:40 compute-0 sudo[439649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:40 compute-0 sudo[439649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:40 compute-0 sudo[439674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:58:40 compute-0 sudo[439674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:40 compute-0 sudo[439674]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:41 compute-0 sudo[439699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:41 compute-0 sudo[439699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:41 compute-0 sudo[439699]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:41 compute-0 sudo[439724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:58:41 compute-0 sudo[439724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
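[annotation] The COMMAND line above is cephadm preparing the three pre-created LVs as OSDs: the copied cephadm binary wraps `ceph-volume lvm batch` in the pinned ceph container, feeds the minimal config and bootstrap keyring on stdin (`--config-json -`), and tags the resulting OSDs with the drive-group name through CEPH_VOLUME_OSDSPEC_AFFINITY. A hedged reconstruction of the invocation (FSID, image digest, and paths are from the log; the config payload is a placeholder):

    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    cmd = [
        "sudo", "/bin/python3", CEPHADM,
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", IMAGE, "--timeout", "895",
        "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
        "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd",
    ]
    # cephadm normally pipes the real minimal ceph.conf plus keyring here.
    subprocess.run(cmd, input=b"{}", check=True)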
Oct 02 19:58:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:58:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.78969822 +0000 UTC m=+0.094194484 container create 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.750741215 +0000 UTC m=+0.055237519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:41 compute-0 systemd[1]: Started libpod-conmon-2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3.scope.
Oct 02 19:58:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.957694755 +0000 UTC m=+0.262191099 container init 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.978439497 +0000 UTC m=+0.282935761 container start 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.984507528 +0000 UTC m=+0.289003852 container attach 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 19:58:41 compute-0 optimistic_mclean[439806]: 167 167
Oct 02 19:58:41 compute-0 systemd[1]: libpod-2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3.scope: Deactivated successfully.
Oct 02 19:58:41 compute-0 conmon[439806]: conmon 2e35aa08ffe29ce93b04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3.scope/container/memory.events
Oct 02 19:58:41 compute-0 podman[439789]: 2025-10-02 19:58:41.996082786 +0000 UTC m=+0.300579060 container died 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 19:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0a9c975e66b6cab9f14afb793f371c6d7110eec71b5ac3ef1c31b8e5596ca2e-merged.mount: Deactivated successfully.
Oct 02 19:58:42 compute-0 podman[439789]: 2025-10-02 19:58:42.068353267 +0000 UTC m=+0.372849511 container remove 2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 19:58:42 compute-0 systemd[1]: libpod-conmon-2e35aa08ffe29ce93b046fe66a9d0175f6981b8c12d145dc5bb8a80ee7d8fbb3.scope: Deactivated successfully.
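[annotation] The create/init/start/attach/died/remove sequence above is a one-shot `podman run --rm` container cephadm launches against the ceph image; the "167 167" it prints looks like a uid/gid probe of the in-image ceph user (167:167). The conmon warning about memory.events is a common artifact of such short-lived scopes: the cgroup is torn down before conmon can read the file. A hedged sketch of the same probe pattern (the exact command cephadm runs may differ):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # expected: "167 167"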
Oct 02 19:58:42 compute-0 podman[439830]: 2025-10-02 19:58:42.3337432 +0000 UTC m=+0.082107963 container create 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:58:42 compute-0 podman[439830]: 2025-10-02 19:58:42.303101336 +0000 UTC m=+0.051466129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:42 compute-0 systemd[1]: Started libpod-conmon-782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8.scope.
Oct 02 19:58:42 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:42 compute-0 podman[439830]: 2025-10-02 19:58:42.49212068 +0000 UTC m=+0.240485513 container init 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:58:42 compute-0 podman[439830]: 2025-10-02 19:58:42.513347514 +0000 UTC m=+0.261712267 container start 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 19:58:42 compute-0 podman[439830]: 2025-10-02 19:58:42.52074309 +0000 UTC m=+0.269107933 container attach 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:58:42 compute-0 ceph-mon[191910]: pgmap v1608: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:58:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:58:43 compute-0 ceph-mon[191910]: pgmap v1609: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:58:43 compute-0 elastic_tu[439847]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:58:43 compute-0 elastic_tu[439847]: --> relative data size: 1.0
Oct 02 19:58:43 compute-0 elastic_tu[439847]: --> All data devices are unavailable
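[annotation] "All data devices are unavailable" is the key line of this run: `lvm batch` inspected the three LVs it was handed, found each already carrying OSD metadata (the `lvm list` JSON later in this log shows ceph.osd_id tags 0-2 on exactly these LVs), and exited without creating anything; the drive group is already satisfied. A hedged sketch of that availability test, keyed on the LVM tags ceph-volume writes at prepare time:

    # Hedged sketch: an LV whose tags already carry a ceph.osd_id belongs
    # to an existing OSD and is skipped by `lvm batch`.
    def is_available(lv_tags: str) -> bool:
        tags = dict(t.split("=", 1) for t in lv_tags.split(",") if "=" in t)
        return "ceph.osd_id" not in tags

    example = "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.osd_id=0,ceph.type=block"
    print(is_available(example))  # False: already prepared as OSD 0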
Oct 02 19:58:43 compute-0 systemd[1]: libpod-782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8.scope: Deactivated successfully.
Oct 02 19:58:43 compute-0 podman[439830]: 2025-10-02 19:58:43.922860067 +0000 UTC m=+1.671224860 container died 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 19:58:43 compute-0 systemd[1]: libpod-782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8.scope: Consumed 1.346s CPU time.
Oct 02 19:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cdb35d0ec0491252a6a4700544f77af220056ce41530dde15ca49bc2c8ce75-merged.mount: Deactivated successfully.
Oct 02 19:58:44 compute-0 podman[439830]: 2025-10-02 19:58:44.046768169 +0000 UTC m=+1.795132932 container remove 782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 02 19:58:44 compute-0 systemd[1]: libpod-conmon-782e77002a7b08dd92dfa176cf95d5262ccf2dee3173c5a73f9a7c3bc92e2fe8.scope: Deactivated successfully.
Oct 02 19:58:44 compute-0 sudo[439724]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:44 compute-0 podman[439877]: 2025-10-02 19:58:44.130341101 +0000 UTC m=+0.153056488 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct 02 19:58:44 compute-0 podman[439884]: 2025-10-02 19:58:44.148533234 +0000 UTC m=+0.162936921 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, version=9.4)
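[annotation] The two health_status events are podman's healthcheck timers firing for ceilometer_agent_ipmi and kepler: each runs the `/openstack/healthcheck ...` test from its config_data and records healthy with a zero failing streak. The same check can be run by hand; a minimal sketch:

    import subprocess

    # Hedged sketch: trigger a container's configured healthcheck on
    # demand, as the timer-driven events above do; exit code 0 == healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "kepler"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")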
Oct 02 19:58:44 compute-0 sudo[439921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:44 compute-0 sudo[439921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:44 compute-0 sudo[439921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:44 compute-0 sudo[439950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:58:44 compute-0 sudo[439950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:44 compute-0 sudo[439950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:44 compute-0 sudo[439975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:44 compute-0 sudo[439975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:44 compute-0 sudo[439975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:44 compute-0 sudo[440000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:58:44 compute-0 sudo[440000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.701 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "6e556210-af07-4c5d-8558-2ba943af16a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.703 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.728 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.812 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.813 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.826 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.827 2 INFO nova.compute.claims [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:58:44 compute-0 nova_compute[355794]: 2025-10-02 19:58:44.998 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.290301431 +0000 UTC m=+0.089483659 container create f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.25677282 +0000 UTC m=+0.055955088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:45 compute-0 systemd[1]: Started libpod-conmon-f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733.scope.
Oct 02 19:58:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.445101665 +0000 UTC m=+0.244283893 container init f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.464931122 +0000 UTC m=+0.264113310 container start f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.470512861 +0000 UTC m=+0.269695089 container attach f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:58:45 compute-0 quizzical_kepler[440097]: 167 167
Oct 02 19:58:45 compute-0 systemd[1]: libpod-f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733.scope: Deactivated successfully.
Oct 02 19:58:45 compute-0 conmon[440097]: conmon f8db3e3e0810de5902b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733.scope/container/memory.events
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.488726245 +0000 UTC m=+0.287908533 container died f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:58:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:58:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480843153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb373c1a006a24c4a5aafeee63a67969840cc864ceef81112c4bf79eb779ad3c-merged.mount: Deactivated successfully.
Oct 02 19:58:45 compute-0 podman[440080]: 2025-10-02 19:58:45.577977567 +0000 UTC m=+0.377159795 container remove f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.591 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
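[annotation] Nova's RBD image backend sizes its storage by shelling out to `ceph df` as client.openstack; oslo.concurrency logs the command and its 0.593s runtime, and the mon audit channel records the matching "df" dispatch above. A minimal sketch of the same probe:

    import json
    from oslo_concurrency import processutils

    # Hedged sketch: nova's pool-capacity probe, as seen in the log.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    print(stats["stats"]["total_avail_bytes"])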
Oct 02 19:58:45 compute-0 systemd[1]: libpod-conmon-f8db3e3e0810de5902b439dbea479dab9b103a7d1089878409921fdb00a56733.scope: Deactivated successfully.
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.607 2 DEBUG nova.compute.provider_tree [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.637 2 DEBUG nova.scheduler.client.report [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
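[annotation] The inventory dict in this entry is what the resource tracker reports to Placement; usable capacity per resource class is (total - reserved) * allocation_ratio, which is how this 8-vCPU, 7.5 GiB, 59 GiB host can fit the claim made a moment earlier. A worked sketch with the exact figures from the log:

    # Worked sketch: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2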
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.676 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.678 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.770 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.826 2 INFO nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:58:45 compute-0 podman[440124]: 2025-10-02 19:58:45.880679912 +0000 UTC m=+0.104888948 container create 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:58:45 compute-0 nova_compute[355794]: 2025-10-02 19:58:45.914 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:58:45 compute-0 podman[440124]: 2025-10-02 19:58:45.845931789 +0000 UTC m=+0.070140835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:45 compute-0 systemd[1]: Started libpod-conmon-61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996.scope.
Oct 02 19:58:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1a35df70be91dec33c032bfe8c1e8723c8d7430357f5f0d3ebadd302c738ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1a35df70be91dec33c032bfe8c1e8723c8d7430357f5f0d3ebadd302c738ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1a35df70be91dec33c032bfe8c1e8723c8d7430357f5f0d3ebadd302c738ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1a35df70be91dec33c032bfe8c1e8723c8d7430357f5f0d3ebadd302c738ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.045 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.049 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.050 2 INFO nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Creating image(s)
Oct 02 19:58:46 compute-0 podman[440124]: 2025-10-02 19:58:46.065483244 +0000 UTC m=+0.289692290 container init 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 19:58:46 compute-0 podman[440124]: 2025-10-02 19:58:46.084458469 +0000 UTC m=+0.308667515 container start 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 02 19:58:46 compute-0 podman[440124]: 2025-10-02 19:58:46.092447491 +0000 UTC m=+0.316656577 container attach 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.128 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.204 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.267 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.278 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "4e913ffd3c43828864a77c577e0c9e3c7f1ca233" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.279 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4e913ffd3c43828864a77c577e0c9e3c7f1ca233" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:46 compute-0 ceph-mon[191910]: pgmap v1610: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:58:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/480843153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:58:46 compute-0 nova_compute[355794]: 2025-10-02 19:58:46.632 2 DEBUG nova.virt.libvirt.imagebackend [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image locations are: [{'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/f3767c50-a12c-451d-9d6a-4916622a3d7a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/f3767c50-a12c-451d-9d6a-4916622a3d7a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
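[annotation] Both image locations point at the same Glance RBD snapshot, so instead of downloading the image nova can build the instance disk as a copy-on-write clone of that snapshot inside the cluster (the three "rbd image ... does not exist" probes above are it checking for an existing disk first). A minimal sketch with the python3-rbd bindings; the "vms" pool name is an assumption, while the image UUID and "snap" snapshot come from the rbd:// URL:

    import rados
    import rbd

    # Hedged sketch: clone a protected Glance snapshot into an instance
    # disk, as the RBD image backend does (requires the layering feature).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    src = cluster.open_ioctx("images")
    dst = cluster.open_ioctx("vms")  # assumed nova pool name
    try:
        rbd.RBD().clone(
            src, "f3767c50-a12c-451d-9d6a-4916622a3d7a", "snap",
            dst, "6e556210-af07-4c5d-8558-2ba943af16a1_disk",
        )
    finally:
        src.close()
        dst.close()
        cluster.shutdown()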
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]: {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     "0": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "devices": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "/dev/loop3"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             ],
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_name": "ceph_lv0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_size": "21470642176",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "name": "ceph_lv0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "tags": {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_name": "ceph",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.crush_device_class": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.encrypted": "0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_id": "0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.vdo": "0"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             },
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "vg_name": "ceph_vg0"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         }
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     ],
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     "1": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "devices": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "/dev/loop4"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             ],
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_name": "ceph_lv1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_size": "21470642176",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "name": "ceph_lv1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "tags": {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_name": "ceph",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.crush_device_class": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.encrypted": "0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_id": "1",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.vdo": "0"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             },
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "vg_name": "ceph_vg1"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         }
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     ],
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     "2": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "devices": [
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "/dev/loop5"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             ],
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_name": "ceph_lv2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_size": "21470642176",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "name": "ceph_lv2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "tags": {
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.cluster_name": "ceph",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.crush_device_class": "",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.encrypted": "0",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osd_id": "2",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:                 "ceph.vdo": "0"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             },
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "type": "block",
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:             "vg_name": "ceph_vg2"
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:         }
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]:     ]
Oct 02 19:58:47 compute-0 charming_chatterjee[440140]: }
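
The block above is the tail of a `ceph-volume lvm list --format json` report: a map from OSD id ("1", "2") to the logical volumes backing that OSD, with the same metadata carried twice, once as the flat `lv_tags` string and once as the parsed `tags` object. A minimal sketch of summarising a captured copy of such a report follows; the filename and helper name are illustrative, not from the log.

    #!/usr/bin/env python3
    # Parse a captured `ceph-volume lvm list --format json` report and print
    # one line per OSD: id, osd_fsid, LV path, and backing device(s).
    import json

    def summarize_lvm_report(path):
        with open(path) as f:
            report = json.load(f)        # {"1": [{...lv...}], "2": [{...lv...}], ...}
        for osd_id, lvs in sorted(report.items()):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id} fsid={tags.get('ceph.osd_fsid')} "
                      f"lv={lv.get('lv_path')} "
                      f"devices={','.join(lv.get('devices', []))}")

    if __name__ == "__main__":
        summarize_lvm_report("lvm_list.json")  # hypothetical capture of the JSON above
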
Oct 02 19:58:47 compute-0 systemd[1]: libpod-61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996.scope: Deactivated successfully.
Oct 02 19:58:47 compute-0 podman[440124]: 2025-10-02 19:58:47.054519162 +0000 UTC m=+1.278728208 container died 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 19:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d1a35df70be91dec33c032bfe8c1e8723c8d7430357f5f0d3ebadd302c738ce-merged.mount: Deactivated successfully.
Oct 02 19:58:47 compute-0 podman[440124]: 2025-10-02 19:58:47.164093804 +0000 UTC m=+1.388302820 container remove 61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 19:58:47 compute-0 systemd[1]: libpod-conmon-61929224a5583c0817685cc34fce184de8e8c2d2e43b7e7fa89a5ab42de55996.scope: Deactivated successfully.
Oct 02 19:58:47 compute-0 sudo[440000]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:58:47 compute-0 podman[440204]: 2025-10-02 19:58:47.231093795 +0000 UTC m=+0.113180089 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:58:47 compute-0 podman[440212]: 2025-10-02 19:58:47.24443329 +0000 UTC m=+0.134816115 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 19:58:47 compute-0 podman[440214]: 2025-10-02 19:58:47.313843044 +0000 UTC m=+0.181804643 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:58:47 compute-0 sudo[440267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:47 compute-0 sudo[440267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:47 compute-0 sudo[440267]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:47 compute-0 sudo[440301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:58:47 compute-0 sudo[440301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:47 compute-0 sudo[440301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:47 compute-0 sudo[440326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:47 compute-0 sudo[440326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:47 compute-0 sudo[440326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:47 compute-0 sudo[440351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:58:47 compute-0 sudo[440351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
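
The sudo line above shows how the cephadm orchestrator gathers this inventory: the mgr copies the cephadm script to /var/lib/ceph/<fsid>/ and runs it with --image so that ceph-volume executes inside a short-lived container (the practical_beaver and tender_easley containers that follow). A sketch of replaying the exact logged invocation, assuming root on this host and that the copied script still exists at that path:

    #!/usr/bin/env python3
    # Re-run the containerized inventory call the orchestrator issued above.
    # FSID, script path, and image digest are copied verbatim from the log line.
    import json, subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # `--` separates cephadm's own options from the ceph-volume arguments.
    out = subprocess.check_output([
        "sudo", "python3", CEPHADM,
        "--image", IMAGE, "--timeout", "895",
        "ceph-volume", "--fsid", FSID, "--",
        "raw", "list", "--format", "json",
    ])
    print(json.dumps(json.loads(out), indent=4))
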
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.009 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.110 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.part --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.112 2 DEBUG nova.virt.images [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] f3767c50-a12c-451d-9d6a-4916622a3d7a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.114 2 DEBUG nova.privsep.utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.114 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.part /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:48 compute-0 ceph-mon[191910]: pgmap v1611: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.317662454 +0000 UTC m=+0.075099897 container create abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.367 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.part /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.converted" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.373 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:48 compute-0 systemd[1]: Started libpod-conmon-abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16.scope.
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.292722991 +0000 UTC m=+0.050160444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.455103017 +0000 UTC m=+0.212540500 container init abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.456 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.458 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "4e913ffd3c43828864a77c577e0c9e3c7f1ca233" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
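
The nova_compute lines above are the image-cache fetch path: the downloaded glance image is inspected with `qemu-img info` (wrapped in `prlimit --as=1073741824 --cpu=30` to bound what a hostile image can cost), found to be qcow2, converted to raw with `qemu-img convert -t none -O raw`, and inspected again before the cache lock is released. A condensed sketch of that inspect-convert-verify cycle; file names are illustrative, where nova keys them by the image checksum:

    #!/usr/bin/env python3
    # Minimal sketch of nova's fetch_to_raw step: inspect, convert, re-inspect.
    import json, subprocess

    def qemu_img_info(path):
        # nova additionally wraps this in `prlimit --as=1G --cpu=30`
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"])
        return json.loads(out)

    src = "disk.part"          # downloaded glance image (hypothetical name)
    dst = "disk.converted"
    if qemu_img_info(src)["format"] == "qcow2":
        # -t none: bypass the host page cache, as in the logged command
        subprocess.check_call(["qemu-img", "convert", "-t", "none",
                               "-O", "raw", "-f", "qcow2", src, dst])
        assert qemu_img_info(dst)["format"] == "raw"
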
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.473361812 +0000 UTC m=+0.230799255 container start abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 19:58:48 compute-0 practical_beaver[440446]: 167 167
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.480922143 +0000 UTC m=+0.238359636 container attach abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:58:48 compute-0 systemd[1]: libpod-abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16.scope: Deactivated successfully.
Oct 02 19:58:48 compute-0 conmon[440446]: conmon abecc4d512f0ce84eb14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16.scope/container/memory.events
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.485849914 +0000 UTC m=+0.243287327 container died abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.529 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c946c80e3f97c5c3d2fb40eab655df731611583b61f3db7681b28234e70de6f4-merged.mount: Deactivated successfully.
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.554 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233 6e556210-af07-4c5d-8558-2ba943af16a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:48 compute-0 podman[440429]: 2025-10-02 19:58:48.59181197 +0000 UTC m=+0.349249413 container remove abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 02 19:58:48 compute-0 systemd[1]: libpod-conmon-abecc4d512f0ce84eb14869a738291d3b9f5ab8598be5917b48b1b8333b52a16.scope: Deactivated successfully.
Oct 02 19:58:48 compute-0 podman[440509]: 2025-10-02 19:58:48.906488494 +0000 UTC m=+0.088938725 container create 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:58:48 compute-0 podman[440509]: 2025-10-02 19:58:48.876361283 +0000 UTC m=+0.058811514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:58:48 compute-0 systemd[1]: Started libpod-conmon-5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c.scope.
Oct 02 19:58:48 compute-0 nova_compute[355794]: 2025-10-02 19:58:48.979 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233 6e556210-af07-4c5d-8558-2ba943af16a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9a322b968bf11ed6dec8f18ea969b13782c44e1752ec175b08ef796e9de4aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9a322b968bf11ed6dec8f18ea969b13782c44e1752ec175b08ef796e9de4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9a322b968bf11ed6dec8f18ea969b13782c44e1752ec175b08ef796e9de4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9a322b968bf11ed6dec8f18ea969b13782c44e1752ec175b08ef796e9de4aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:58:49 compute-0 podman[440509]: 2025-10-02 19:58:49.082624635 +0000 UTC m=+0.265074886 container init 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:58:49 compute-0 podman[440509]: 2025-10-02 19:58:49.105274627 +0000 UTC m=+0.287724868 container start 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:58:49 compute-0 podman[440509]: 2025-10-02 19:58:49.116373002 +0000 UTC m=+0.298823303 container attach 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.151 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] resizing rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
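
Having converted the base image, nova imports it into the `vms` pool as <instance_uuid>_disk and then grows the RBD image to the flavor's 1 GiB root disk (the 1073741824 in the resize line above). In the sketch below the import command is copied from the log, while the resize is shown as its CLI equivalent; nova itself resizes through librbd in rbd_utils rather than shelling out:

    #!/usr/bin/env python3
    # Import a raw base image into RBD, then grow it to the flavor root size,
    # mirroring the import/resize pair in the surrounding log.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233"
    IMG = "6e556210-af07-4c5d-8558-2ba943af16a1_disk"
    CONF = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", BASE, IMG,
                           "--image-format=2", *CONF])
    # CLI equivalent of rbd_utils.resize(); `rbd resize` takes MiB,
    # and 1073741824 bytes == 1024 MiB.
    subprocess.check_call(["rbd", "resize", "--pool", "vms", IMG,
                           "--size", "1024", *CONF])
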
Oct 02 19:58:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 109 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.349 2 DEBUG nova.objects.instance [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e556210-af07-4c5d-8558-2ba943af16a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.435 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.507 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.518 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.614 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.616 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.616 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.617 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.652 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:49 compute-0 nova_compute[355794]: 2025-10-02 19:58:49.662 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.133 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:50 compute-0 tender_easley[440525]: {
Oct 02 19:58:50 compute-0 tender_easley[440525]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_id": 1,
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "type": "bluestore"
Oct 02 19:58:50 compute-0 tender_easley[440525]:     },
Oct 02 19:58:50 compute-0 tender_easley[440525]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_id": 2,
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "type": "bluestore"
Oct 02 19:58:50 compute-0 tender_easley[440525]:     },
Oct 02 19:58:50 compute-0 tender_easley[440525]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_id": 0,
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:58:50 compute-0 tender_easley[440525]:         "type": "bluestore"
Oct 02 19:58:50 compute-0 tender_easley[440525]:     }
Oct 02 19:58:50 compute-0 tender_easley[440525]: }
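
This second report is `ceph-volume raw list --format json`, which probes the devices directly and keys the result by `osd_uuid` rather than OSD id; here it finds all three bluestore OSDs (0, 1, 2) behind their device-mapper paths. A small sketch of inverting a captured copy into an id-ordered listing; the filename is illustrative:

    #!/usr/bin/env python3
    # Invert a captured `ceph-volume raw list --format json` report from
    # uuid-keyed to an osd_id-ordered listing.
    import json

    with open("raw_list.json") as f:       # hypothetical capture of the JSON above
        report = json.load(f)
    for uuid, osd in sorted(report.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}  {osd['type']:9s} {osd['device']}  uuid={uuid}")
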
Oct 02 19:58:50 compute-0 systemd[1]: libpod-5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c.scope: Deactivated successfully.
Oct 02 19:58:50 compute-0 conmon[440525]: conmon 5adeb771f7ab6078162e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c.scope/container/memory.events
Oct 02 19:58:50 compute-0 systemd[1]: libpod-5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c.scope: Consumed 1.132s CPU time.
Oct 02 19:58:50 compute-0 podman[440509]: 2025-10-02 19:58:50.291582978 +0000 UTC m=+1.474033249 container died 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:58:50 compute-0 ceph-mon[191910]: pgmap v1612: 321 pgs: 321 active+clean; 109 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Oct 02 19:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c9a322b968bf11ed6dec8f18ea969b13782c44e1752ec175b08ef796e9de4aa-merged.mount: Deactivated successfully.
Oct 02 19:58:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:50 compute-0 podman[440509]: 2025-10-02 19:58:50.389966353 +0000 UTC m=+1.572416584 container remove 5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.394 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.395 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Ensure instance console log exists: /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.396 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.396 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.397 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.399 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:58:32Z,direct_url=<?>,disk_format='qcow2',id=f3767c50-a12c-451d-9d6a-4916622a3d7a,min_disk=0,min_ram=0,name='fvt_testing_image',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:58:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'f3767c50-a12c-451d-9d6a-4916622a3d7a'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_name': '/dev/vdb', 'encrypted': False, 'disk_bus': 'virtio', 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.411 2 WARNING nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.419 2 DEBUG nova.virt.libvirt.host [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.420 2 DEBUG nova.virt.libvirt.host [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:58:50 compute-0 systemd[1]: libpod-conmon-5adeb771f7ab6078162e84059c411621859a789a95d9d29f7a7dfba011803a0c.scope: Deactivated successfully.
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.424 2 DEBUG nova.virt.libvirt.host [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.425 2 DEBUG nova.virt.libvirt.host [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.425 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.425 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:58:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='3b09379d-eec1-4616-b358-f61e3e929bc4',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:58:32Z,direct_url=<?>,disk_format='qcow2',id=f3767c50-a12c-451d-9d6a-4916622a3d7a,min_disk=0,min_ram=0,name='fvt_testing_image',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:58:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.426 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.426 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.426 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.427 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.427 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.427 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.427 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.428 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.428 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.428 2 DEBUG nova.virt.hardware [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
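
The hardware.py lines above show nova choosing a guest CPU topology: with no limits or preferences from flavor or image (all 0:0:0), it enumerates the (sockets, cores, threads) factorizations of the vCPU count under the 65536 defaults, and for 1 vCPU the only candidate is 1:1:1. A toy version of that enumeration, a simplification of nova.virt.hardware rather than its actual code:

    #!/usr/bin/env python3
    # Toy enumeration of (sockets, cores, threads) triples whose product
    # equals the vCPU count, within the default 65536 maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> "Got 1 possible topologies"
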
Oct 02 19:58:50 compute-0 sudo[440351]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.432 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 19:58:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 19:58:50 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:50 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7174813b-dfee-4385-917e-815da781ed31 does not exist
Oct 02 19:58:50 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3223804b-e0ef-48ad-8aa6-67d94ab054ea does not exist
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:50 compute-0 podman[440759]: 2025-10-02 19:58:50.50047744 +0000 UTC m=+0.151213080 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct 02 19:58:50 compute-0 podman[440762]: 2025-10-02 19:58:50.513016763 +0000 UTC m=+0.163212529 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:58:50 compute-0 sudo[440804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:58:50 compute-0 sudo[440804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:50 compute-0 sudo[440804]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:50 compute-0 sudo[440856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 19:58:50 compute-0 sudo[440856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:58:50 compute-0 sudo[440856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:58:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088868256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.903 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:50 compute-0 nova_compute[355794]: 2025-10-02 19:58:50.904 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
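The repeated "mon dump" bursts above are nova's RBD driver shelling out to the ceph CLI through oslo.concurrency. A minimal standalone sketch of the same call, assuming the oslo.concurrency package is installed and /etc/ceph/ceph.conf is readable:

    # Sketch: run "ceph mon dump" the way the log shows nova doing it.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print(mon_map.get('epoch'), [m.get('name') for m in mon_map.get('mons', [])])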
Oct 02 19:58:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 119 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 27 op/s
Oct 02 19:58:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:58:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140824906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:51 compute-0 nova_compute[355794]: 2025-10-02 19:58:51.396 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:51 compute-0 nova_compute[355794]: 2025-10-02 19:58:51.440 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:51 compute-0 nova_compute[355794]: 2025-10-02 19:58:51.452 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:58:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4088868256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/140824906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:51 compute-0 nova_compute[355794]: 2025-10-02 19:58:51.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:51 compute-0 nova_compute[355794]: 2025-10-02 19:58:51.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:58:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 19:58:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129914276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.065 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.067 2 DEBUG nova.objects.instance [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e556210-af07-4c5d-8558-2ba943af16a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.120 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <uuid>6e556210-af07-4c5d-8558-2ba943af16a1</uuid>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <name>instance-00000005</name>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <memory>524288</memory>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <metadata>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:name>fvt_testing_server</nova:name>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 19:58:50</nova:creationTime>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:flavor name="fvt_testing_flavor">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:memory>512</nova:memory>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:user uuid="811fb7ac717e4ba9b9874e5454ee08f4">admin</nova:user>
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <nova:project uuid="1c35486f37b94d43a7bf2f2fa09c70b9">admin</nova:project>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="f3767c50-a12c-451d-9d6a-4916622a3d7a"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <nova:ports/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </metadata>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <system>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="serial">6e556210-af07-4c5d-8558-2ba943af16a1</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="uuid">6e556210-af07-4c5d-8558-2ba943af16a1</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </system>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <os>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </os>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <features>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <apic/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </features>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </clock>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </cpu>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   <devices>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/6e556210-af07-4c5d-8558-2ba943af16a1_disk">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </source>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/6e556210-af07-4c5d-8558-2ba943af16a1_disk.eph0">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </source>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/6e556210-af07-4c5d-8558-2ba943af16a1_disk.config">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </source>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 19:58:52 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       </auth>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </disk>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/console.log" append="off"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </serial>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <video>
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </video>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </rng>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 19:58:52 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 19:58:52 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 19:58:52 compute-0 nova_compute[355794]:   </devices>
Oct 02 19:58:52 compute-0 nova_compute[355794]: </domain>
Oct 02 19:58:52 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
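The XML dumped above is what nova hands to libvirt for instance-00000005. A minimal sketch of defining and starting a guest from such XML with the libvirt-python bindings; this is a hypothetical standalone use (nova drives the same calls through its libvirt driver), and the file path is an assumption:

    # Sketch: define and start a domain from XML via libvirt-python.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt

    with open('/tmp/instance-00000005.xml') as f:   # hypothetical path
        xml = f.read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # start it; cf. "Started Virtual Machine" below
    conn.close()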
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.336 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.337 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.337 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.338 2 INFO nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Using config drive
Oct 02 19:58:52 compute-0 nova_compute[355794]: 2025-10-02 19:58:52.378 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:52 compute-0 ceph-mon[191910]: pgmap v1613: 321 pgs: 321 active+clean; 119 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 27 op/s
Oct 02 19:58:52 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2129914276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.125 2 INFO nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Creating config drive at /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.134 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vy457f4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 126 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 29 op/s
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.292 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vy457f4" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.349 2 DEBUG nova.storage.rbd_utils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] rbd image 6e556210-af07-4c5d-8558-2ba943af16a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.362 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config 6e556210-af07-4c5d-8558-2ba943af16a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.672 2 DEBUG oslo_concurrency.processutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config 6e556210-af07-4c5d-8558-2ba943af16a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:53 compute-0 nova_compute[355794]: 2025-10-02 19:58:53.673 2 INFO nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Deleting local config drive /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config because it was imported into RBD.
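The three preceding commands show the whole config-drive flow: build an ISO with mkisofs, import it into the vms pool with rbd, then delete the local file. A sketch mirroring the logged commands; the ISO path, pool, and flags are copied from the log, while /tmp/cd_src stands in for the (unlogged) temporary metadata directory:

    # Sketch: config-drive flow as logged: mkisofs -> rbd import -> unlink.
    import os
    import subprocess

    iso = '/var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1/disk.config'
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2', '/tmp/cd_src'],
        check=True)
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         '6e556210-af07-4c5d-8558-2ba943af16a1_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(iso)  # matches "Deleting local config drive ... imported into RBD"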
Oct 02 19:58:53 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:58:53 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:58:53 compute-0 systemd-machined[137646]: New machine qemu-5-instance-00000005.
Oct 02 19:58:53 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 02 19:58:54 compute-0 nova_compute[355794]: 2025-10-02 19:58:54.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:54 compute-0 ceph-mon[191910]: pgmap v1614: 321 pgs: 321 active+clean; 126 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 29 op/s
Oct 02 19:58:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Oct 02 19:58:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:58:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:55 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.807 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435135.806644, 6e556210-af07-4c5d-8558-2ba943af16a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.809 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] VM Resumed (Lifecycle Event)
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.816 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.817 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.826 2 INFO nova.virt.libvirt.driver [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Instance spawned successfully.
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.826 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.849 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.870 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.878 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.879 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.880 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.881 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.883 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.884 2 DEBUG nova.virt.libvirt.driver [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.939 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.940 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435135.8152936, 6e556210-af07-4c5d-8558-2ba943af16a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.940 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] VM Started (Lifecycle Event)
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.973 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.983 2 INFO nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Took 9.94 seconds to spawn the instance on the hypervisor.
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.984 2 DEBUG nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:58:55 compute-0 nova_compute[355794]: 2025-10-02 19:58:55.986 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.005 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] During sync_power_state the instance has a pending task (spawning). Skip.
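The "Synchronizing instance power state" lines compare the database value (0) against the hypervisor value (1) and skip the sync while a task is pending. A simplified sketch of that decision; 0 = NOSTATE and 1 = RUNNING are nova.compute.power_state constants, and the skip rule is a reduction of the manager's real logic:

    # Sketch: the comparison behind "DB power_state: 0, VM power_state: 1".
    NOSTATE, RUNNING = 0, 1          # nova.compute.power_state constants

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:   # e.g. 'spawning' -> the "Skip." lines above
            return 'skip: pending task %s' % task_state
        if db_state != vm_state:
            return 'update DB: %d -> %d' % (db_state, vm_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))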
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.044 2 INFO nova.compute.manager [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Took 11.27 seconds to build instance.
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.082 2 DEBUG oslo_concurrency.lockutils [None req-7f39d223-1a03-4b9e-b8b8-01cd209219be 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:56 compute-0 ceph-mon[191910]: pgmap v1615: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.619 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:58:56 compute-0 nova_compute[355794]: 2025-10-02 19:58:56.620 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Oct 02 19:58:58 compute-0 ceph-mon[191910]: pgmap v1616: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 84 op/s
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.612 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.612 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.613 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.613 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:58:59 compute-0 nova_compute[355794]: 2025-10-02 19:58:59.613 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:59 compute-0 podman[157186]: time="2025-10-02T19:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:58:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:58:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9067 "" "Go-http-client/1.1"
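The podman[157186] lines are the libpod REST API answering GETs over the local socket. A hedged sketch replaying the first query from Python via curl; the socket path /run/podman/podman.sock is an assumption, while the endpoint and query string are copied from the log line:

    # Sketch: replay the logged libpod container listing over the socket.
    import json
    import subprocess

    url = ('http://d/v4.9.3/libpod/containers/json'
           '?all=true&external=false&last=0&namespace=false&size=false&sync=false')
    out = subprocess.run(
        ['curl', '-s', '--unix-socket', '/run/podman/podman.sock', url],
        check=True, capture_output=True, text=True).stdout
    for c in json.loads(out):
        print(c.get('Names'), c.get('State'))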
Oct 02 19:59:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:59:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713103410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.114 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
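During the resource audit, nova sizes the Ceph backend with "ceph df --format=json". A sketch parsing that output into the totals behind figures like "60 GiB / 60 GiB avail"; the stats.total_bytes / stats.total_avail_bytes key names are the usual ceph df JSON layout and should be treated as an assumption to verify on your release:

    # Sketch: derive free/total GiB from "ceph df --format=json".
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('%.1f GiB free of %.1f GiB' %
          (stats['total_avail_bytes'] / gib, stats['total_bytes'] / gib))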
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.254 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.256 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.256 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.268 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.269 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.269 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 19:59:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:00 compute-0 ceph-mon[191910]: pgmap v1617: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 84 op/s
Oct 02 19:59:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1713103410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.916 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.918 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3658MB free_disk=59.93916702270508GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.919 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:00 compute-0 nova_compute[355794]: 2025-10-02 19:59:00.920 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.027 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.028 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 6e556210-af07-4c5d-8558-2ba943af16a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.029 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.029 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.093 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 738 KiB/s wr, 88 op/s
Oct 02 19:59:01 compute-0 openstack_network_exporter[372736]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:01 compute-0 openstack_network_exporter[372736]: ERROR   19:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:59:01 compute-0 openstack_network_exporter[372736]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:01 compute-0 openstack_network_exporter[372736]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:59:01 compute-0 openstack_network_exporter[372736]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
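These exporter errors all reduce to missing control sockets: ovn-northd does not run on a compute node, and a kernel-datapath OVS has no dpif-netdev PMD stats to report. A small sketch of the socket check the messages imply; the run directories (/run/openvswitch, /run/ovn) are the usual defaults, stated here as an assumption:

    # Sketch: look for the control sockets the exporter failed to find.
    import glob

    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'no control socket files found')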
Oct 02 19:59:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:59:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691798175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.683 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.693 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.721 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
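Placement capacity follows from these inventory records as (total - reserved) * allocation_ratio. Worked against the values in the line above:

    # Sketch: schedulable capacity implied by the logged inventory.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2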
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.777 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:59:01 compute-0 nova_compute[355794]: 2025-10-02 19:59:01.778 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:01 compute-0 podman[441155]: 2025-10-02 19:59:01.78405231 +0000 UTC m=+0.204190698 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:59:02 compute-0 ceph-mon[191910]: pgmap v1618: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 738 KiB/s wr, 88 op/s
Oct 02 19:59:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1691798175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:02 compute-0 nova_compute[355794]: 2025-10-02 19:59:02.778 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:02 compute-0 nova_compute[355794]: 2025-10-02 19:59:02.779 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:02 compute-0 nova_compute[355794]: 2025-10-02 19:59:02.780 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 123 KiB/s wr, 77 op/s
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_19:59:03
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
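The balancer pass above ran in upmap mode against the listed pools and prepared 0 of a possible 10 changes, meaning placement is already even. Its state can be read back via the balancer module; a minimal sketch (assuming the ceph CLI; field names follow ceph balancer status JSON output):

    import json
    import subprocess

    status = json.loads(
        subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    # Expect mode "upmap" and active true, matching the log above.
    print(status["mode"], status["active"])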
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:04 compute-0 nova_compute[355794]: 2025-10-02 19:59:04.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
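The ovsdbapp vlog line records the OVS IDL event loop waking because fd 24 became readable. A generic illustration of that POLLIN mechanism with the standard library (not ovsdbapp code; the pipe stands in for the OVSDB socket):

    import os
    import select

    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)    # watch the read end, like fd 24

    os.write(w, b"x")                    # data arrives, fd becomes readable
    for fd, event in poller.poll(1000):  # the "[POLLIN] on fd N" wakeup
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")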
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 19:59:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.299 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.301 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
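The two messages above note that the [pollsters] source registers more pollsters than the single worker thread available, so their execution serializes. A generic illustration of that queueing with the same ThreadPoolExecutor primitive (not ceilometer code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    # More tasks than workers: with max_workers=1 the five submissions
    # queue up and run one after another, stretching the polling cycle.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, f"pollster-{i}") for i in range(5)]
        for future in futures:
            print(future.result())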
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.311 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6e556210-af07-4c5d-8558-2ba943af16a1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:59:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:04.312 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6e556210-af07-4c5d-8558-2ba943af16a1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
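The REQ line is keystoneauth's curl-style trace of a GET against the Nova API for one server; the logged X-Auth-Token is a SHA256 digest of the token, not a usable credential. A minimal sketch of the same call with python-requests (the token value below is a placeholder):

    import requests

    url = ("https://nova-internal.openstack.svc:8774/v2.1/servers/"
           "6e556210-af07-4c5d-8558-2ba943af16a1")
    headers = {
        "Accept": "application/json",
        "User-Agent": "python-novaclient",
        "X-Auth-Token": "<keystone-token>",     # placeholder, not the digest
        "X-OpenStack-Nova-API-Version": "2.1",  # pin the microversion
    }
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    print(resp.json()["server"]["status"])  # "ACTIVE" per the RESP BODY below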
Oct 02 19:59:04 compute-0 ceph-mon[191910]: pgmap v1619: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 123 KiB/s wr, 77 op/s
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.137 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Thu, 02 Oct 2025 19:59:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-1ccf3718-4538-46be-b56f-3f496e8b4b66 x-openstack-request-id: req-1ccf3718-4538-46be-b56f-3f496e8b4b66 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.138 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6e556210-af07-4c5d-8558-2ba943af16a1", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "user_id": "811fb7ac717e4ba9b9874e5454ee08f4", "metadata": {}, "hostId": "0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d", "image": {"id": "f3767c50-a12c-451d-9d6a-4916622a3d7a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/f3767c50-a12c-451d-9d6a-4916622a3d7a"}]}, "flavor": {"id": "3b09379d-eec1-4616-b358-f61e3e929bc4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3b09379d-eec1-4616-b358-f61e3e929bc4"}]}, "created": "2025-10-02T19:58:43Z", "updated": "2025-10-02T19:58:56Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6e556210-af07-4c5d-8558-2ba943af16a1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6e556210-af07-4c5d-8558-2ba943af16a1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:58:55.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.138 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6e556210-af07-4c5d-8558-2ba943af16a1 used request id req-1ccf3718-4538-46be-b56f-3f496e8b4b66 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.141 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6e556210-af07-4c5d-8558-2ba943af16a1', 'name': 'fvt_testing_server', 'flavor': {'id': '3b09379d-eec1-4616-b358-f61e3e929bc4', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f3767c50-a12c-451d-9d6a-4916622a3d7a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.145 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.146 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:59:05.146751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.210 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.211 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.212 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.271 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.272 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.273 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:59:05.275153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.299 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.299 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.300 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.343 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.344 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.344 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
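The repeated 1073741824-byte samples are exactly 1 GiB each, consistent with the flavor's 'disk': 1 and 'ephemeral': 1 in the discovery output above; the small 485376-byte device is plausibly the config drive ("config_drive": "True" in the server body). A quick check of that arithmetic:

    GIB = 1024 ** 3
    assert 1073741824 == 1 * GIB   # 1 GiB root and 1 GiB ephemeral disks
    print(485376 / 1024, "KiB")    # third device: exactly 474 KiB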
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.347 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:59:05.346454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.347 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.347 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.348 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.348 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.349 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:59:05.351190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.351 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.352 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.352 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.353 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.353 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.354 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.355 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:59:05.355797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.381 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.439 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
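Both instances report power.state volume 1, matching "OS-EXT-STS:power_state": 1 in the server body earlier. Nova encodes power states as small integers; a sketch of the documented mapping:

    # Nova's power_state integers as documented for the compute API.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def power_state_name(code: int) -> str:
        return POWER_STATES.get(code, f"UNKNOWN({code})")

    assert power_state_name(1) == "RUNNING"  # the sampled volume above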
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.446 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:59:05.446234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.453 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.455 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.465 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.466 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.472 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:59:05.472743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:59:05.485997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:59:05.489133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.489 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.490 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
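The ERROR above is ceilometer's blacklisting path: the libvirt inspector provides no data for network.incoming.bytes.rate, so the pollster raises PollsterPermanentError and the manager stops polling those resources from this source rather than failing every cycle. A schematic sketch of that contract (a simplified stand-in, not the actual plugin_base code; the attribute name is an assumption about ceilometer's interface):

    class PollsterPermanentError(Exception):
        """Carries the resources that can never be polled."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources  # assumed attribute name

    def get_samples(resources):
        # Stand-in for a pollster whose inspector lacks this meter.
        raise PollsterPermanentError(resources)

    try:
        get_samples(["<NovaLikeServer: fvt_testing_server>"])
    except PollsterPermanentError as err:
        # The manager logs "Prevent pollster ... from polling ... anymore!"
        # and drops err.fail_res_list from future cycles for this source.
        print("blacklisting:", err.fail_res_list)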
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.491 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:59:05.491744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:59:05.494553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.494 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.496 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.497 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:59:05.497098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:59:05.499973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:59:05.502336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
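Note the worker split in these lines: the polling worker (logged as "14") emits "Pollster heartbeat update: <meter>" while a status worker ("12") later logs "Updated heartbeat for <meter> (<timestamp>)", sometimes out of order with the surrounding polling lines. An illustrative reconstruction of that producer/consumer handoff, all names assumed:

```python
import datetime
import queue
import threading

# Illustrative reconstruction of the two-worker heartbeat pattern
# above: the polling worker (the "14" lines) announces a meter, a
# status worker (the "12" lines) records a timestamp for it. All
# names are assumptions; the real code lives in polling/manager.py.
beats = queue.Queue()
status = {}

def poller():
    for meter in ("network.incoming.packets", "cpu"):
        beats.put(meter)   # "Pollster heartbeat update: <meter>"
    beats.put(None)        # sentinel: polling task done

def updater():
    while (meter := beats.get()) is not None:
        # "Updated heartbeat for <meter> (<timestamp>)"
        status[meter] = datetime.datetime.now(datetime.timezone.utc).isoformat()

threads = [threading.Thread(target=poller), threading.Thread(target=updater)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(status)
```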
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.505 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.506 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.506 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:59:05.505072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.508 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
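disk.device.read.bytes above fans out into one "volume:" line per attached block device per instance (three devices each for 6e556210... and d4e04444...). A sketch of that stats-to-sample fan-out; the device names and Sample tuple are stand-ins, not the real inspector output or ceilometer.sample.Sample:

```python
from collections import namedtuple

# Sketch of the per-device fan-out visible above: one
# "<instance>/<meter> volume: N" line per attached block device.
Sample = namedtuple("Sample", "resource_id meter volume")

def stats_to_samples(instance_id, meter, per_device_stats):
    for device, value in per_device_stats.items():
        # Real device meters key the resource as "<instance>-<device>";
        # simplified here to the instance id alone, as in the log text.
        yield Sample(instance_id, meter, value)

stats = {"vda": 18348032, "vdb": 0, "hda": 2048}  # values taken from the log
for s in stats_to_samples("6e556210-af07-4c5d-8558-2ba943af16a1",
                          "disk.device.read.bytes", stats):
    print(f"{s.resource_id}/{s.meter} volume: {s.volume}")
```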
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:59:05.509804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.511 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:59:05.511137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.511 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.511 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.512 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.512 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 nova_compute[355794]: 2025-10-02 19:59:05.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.512 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.513 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:59:05.514451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.514 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
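The ERROR above is the permanent-failure path: LibvirtInspector cannot supply *.rate data (the DEBUG line two entries up), so the pollster raises PollsterPermanentError and the manager blacklists that resource for network.outgoing.bytes.rate instead of retrying every cycle. An illustrative sketch of the pattern, not the actual manager code:

```python
# Sketch of the permanent-error blacklist behind the ERROR above:
# once a pollster raises a "permanent" failure for a resource, the
# manager stops offering it that resource on later cycles instead of
# failing every interval. Everything below is illustrative.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        self.fail = resources

blacklist = set()

def poll(name, resources, get_samples):
    candidates = [r for r in resources if (name, r) not in blacklist]
    try:
        return get_samples(candidates)
    except PollsterPermanentError as err:
        for r in err.fail:
            blacklist.add((name, r))  # "Prevent pollster ... anymore!"
        return []

def rate_samples(resources):
    raise PollsterPermanentError(resources)  # inspector has no rate data

print(poll("network.outgoing.bytes.rate", ["fvt_testing_server"], rate_samples))
print(blacklist)
```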
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.515 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:59:05.515793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.516 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.516 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 6e556210-af07-4c5d-8558-2ba943af16a1: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
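memory.usage above returns "Unavailable" for instance 6e556210... (the NoVolumeException warning) while d4e04444... reports 48.88 MB; the pollster skips the missing stat rather than emitting a fake zero. A small sketch of that guard, with NoVolumeException mirroring the exception named in the log:

```python
# Sketch of the "Unavailable" guard above: when libvirt exposes no
# memory stats for a domain, warn and yield nothing for it.
class NoVolumeException(Exception):
    pass

def memory_usage_samples(stats_by_instance):
    for instance_id, usage_mb in stats_by_instance.items():
        try:
            if usage_mb is None:
                raise NoVolumeException(instance_id)
            yield instance_id, usage_mb
        except NoVolumeException:
            print(f"WARNING memory.usage statistic is not available "
                  f"for instance {instance_id}")

stats = {"6e556210-af07-4c5d-8558-2ba943af16a1": None,        # Unavailable
         "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77": 48.8828125}  # MB, from the log
print(list(memory_usage_samples(stats)))
```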
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.518 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:59:05.517803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.518 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.519 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.520 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:59:05.519247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.521 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.521 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.522 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:59:05.521069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.523 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.525 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.526 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:59:05.524858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.527 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:59:05.526918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.527 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:59:05.528320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.528 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/cpu volume: 9160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 46690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
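The cpu meter above is cumulative guest CPU time in nanoseconds (9160000000 ns is roughly 9.16 s on instance 6e556210...). CPU utilisation is computed downstream from two successive samples; a minimal version, assuming a 10-second polling interval and one vCPU (both assumptions, not read from the log):

```python
# Derive a utilisation percentage from two cumulative cpu samples.
def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
    delta_s = (curr_ns - prev_ns) / 1e9   # ns of CPU time consumed
    return 100.0 * delta_s / (interval_s * vcpus)

print(cpu_util_percent(9_150_000_000, 9_160_000_000, 10, 1))  # 0.1 (%)
```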
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.530 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.latency volume: 1752474568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:59:05.530061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.530 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.530 14 DEBUG ceilometer.compute.pollsters [-] 6e556210-af07-4c5d-8558-2ba943af16a1/disk.device.read.latency volume: 3705359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.531 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.531 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.531 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 19:59:05.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
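The burst of "Finished processing pollster [...]" lines above closes out one polling task: every pollster in the task has run discovery, the coordination check, a heartbeat update, and sampling. A condensed, illustrative sketch of that loop (names assumed; the real manager schedules tasks per interval and fans work out to workers):

```python
# One polling task, end to end, as the log lines above narrate it.
def run_polling_task(pollsters, discover, publish):
    for pollster in pollsters:
        resources = discover(pollster)                   # "Executing discovery process..."
        samples = list(pollster.get_samples(resources))  # "Polling pollster <name>"
        publish(samples)                                 # hand samples to the notifier
        print(f"Finished processing pollster [{pollster.name}].")
```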
Oct 02 19:59:06 compute-0 ceph-mon[191910]: pgmap v1620: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 19:59:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 58 op/s
Oct 02 19:59:08 compute-0 ceph-mon[191910]: pgmap v1621: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 58 op/s
Oct 02 19:59:09 compute-0 nova_compute[355794]: 2025-10-02 19:59:09.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 52 op/s
Oct 02 19:59:09 compute-0 podman[441177]: 2025-10-02 19:59:09.709461825 +0000 UTC m=+0.116845336 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:59:09 compute-0 podman[441178]: 2025-10-02 19:59:09.760755898 +0000 UTC m=+0.168715515 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:59:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:10 compute-0 nova_compute[355794]: 2025-10-02 19:59:10.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:10 compute-0 ceph-mon[191910]: pgmap v1622: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 52 op/s
Oct 02 19:59:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 612 KiB/s rd, 19 op/s
Oct 02 19:59:11 compute-0 ceph-mon[191910]: pgmap v1623: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 612 KiB/s rd, 19 op/s
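The interleaved ceph-mon/ceph-mgr "pgmap vN" lines report the same cluster digest twice per tick. A small helper fitted to this log's exact line shape (all PGs active+clean, MiB/GiB units) for pulling the fields out:

```python
import re

# Pattern covers only the pgmap shape seen in this log.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

line = ("pgmap v1623: 321 pgs: 321 active+clean; 126 MiB data, "
        "291 MiB used, 60 GiB / 60 GiB avail; 612 KiB/s rd, 19 op/s")
m = PGMAP.search(line)
print(m.group("ver"), m.group("pgs"), m.group("used"), m.group("avail"))
```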
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000818730363189425 of space, bias 1.0, pg target 0.24561910895682748 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 19:59:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 19:59:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 op/s
Oct 02 19:59:14 compute-0 nova_compute[355794]: 2025-10-02 19:59:14.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:14 compute-0 ceph-mon[191910]: pgmap v1624: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 op/s
Oct 02 19:59:14 compute-0 podman[441220]: 2025-10-02 19:59:14.744353235 +0000 UTC m=+0.154841757 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.expose-services=)
Oct 02 19:59:14 compute-0 podman[441219]: 2025-10-02 19:59:14.772857703 +0000 UTC m=+0.184865484 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:59:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.367 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "6e556210-af07-4c5d-8558-2ba943af16a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.367 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.368 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "6e556210-af07-4c5d-8558-2ba943af16a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.368 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.368 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.371 2 INFO nova.compute.manager [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Terminating instance
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.372 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "refresh_cache-6e556210-af07-4c5d-8558-2ba943af16a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.372 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquired lock "refresh_cache-6e556210-af07-4c5d-8558-2ba943af16a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:59:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.372 2 DEBUG nova.network.neutron [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.585 2 DEBUG nova.network.neutron [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.935 2 DEBUG nova.network.neutron [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.957 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Releasing lock "refresh_cache-6e556210-af07-4c5d-8558-2ba943af16a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:59:15 compute-0 nova_compute[355794]: 2025-10-02 19:59:15.958 2 DEBUG nova.compute.manager [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:59:16 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 02 19:59:16 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 22.681s CPU time.
Oct 02 19:59:16 compute-0 systemd-machined[137646]: Machine qemu-5-instance-00000005 terminated.
Oct 02 19:59:16 compute-0 nova_compute[355794]: 2025-10-02 19:59:16.207 2 INFO nova.virt.libvirt.driver [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Instance destroyed successfully.
Oct 02 19:59:16 compute-0 nova_compute[355794]: 2025-10-02 19:59:16.208 2 DEBUG nova.objects.instance [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lazy-loading 'resources' on Instance uuid 6e556210-af07-4c5d-8558-2ba943af16a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:59:16 compute-0 ceph-mon[191910]: pgmap v1625: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:59:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.672 2 INFO nova.virt.libvirt.driver [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Deleting instance files /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1_del
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.674 2 INFO nova.virt.libvirt.driver [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Deletion of /var/lib/nova/instances/6e556210-af07-4c5d-8558-2ba943af16a1_del complete
Oct 02 19:59:17 compute-0 podman[441278]: 2025-10-02 19:59:17.715361 +0000 UTC m=+0.127471629 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:59:17 compute-0 podman[441279]: 2025-10-02 19:59:17.716625084 +0000 UTC m=+0.118436899 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.740 2 INFO nova.compute.manager [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Took 1.78 seconds to destroy the instance on the hypervisor.
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.741 2 DEBUG oslo.service.loopingcall [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.742 2 DEBUG nova.compute.manager [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:59:17 compute-0 nova_compute[355794]: 2025-10-02 19:59:17.742 2 DEBUG nova.network.neutron [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:59:17 compute-0 podman[441280]: 2025-10-02 19:59:17.771596415 +0000 UTC m=+0.169455645 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.121 2 DEBUG nova.network.neutron [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.136 2 DEBUG nova.network.neutron [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.156 2 INFO nova.compute.manager [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Took 0.41 seconds to deallocate network for instance.
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.200 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.201 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.317 2 DEBUG oslo_concurrency.processutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:18 compute-0 ceph-mon[191910]: pgmap v1626: 321 pgs: 321 active+clean; 126 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 02 19:59:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 19:59:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737688159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.826 2 DEBUG oslo_concurrency.processutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.838 2 DEBUG nova.compute.provider_tree [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.859 2 DEBUG nova.scheduler.client.report [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.899 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:18 compute-0 nova_compute[355794]: 2025-10-02 19:59:18.956 2 INFO nova.scheduler.client.report [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Deleted allocations for instance 6e556210-af07-4c5d-8558-2ba943af16a1
Oct 02 19:59:19 compute-0 nova_compute[355794]: 2025-10-02 19:59:19.035 2 DEBUG oslo_concurrency.lockutils [None req-dc8b4ac0-84d1-4e3e-a835-ca5496c671cd 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Lock "6e556210-af07-4c5d-8558-2ba943af16a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:19 compute-0 nova_compute[355794]: 2025-10-02 19:59:19.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 102 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Oct 02 19:59:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2737688159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 19:59:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 19:59:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301648165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:59:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 19:59:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301648165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:59:20 compute-0 ceph-mon[191910]: pgmap v1627: 321 pgs: 321 active+clean; 102 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Oct 02 19:59:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3301648165' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 19:59:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3301648165' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 19:59:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:20 compute-0 nova_compute[355794]: 2025-10-02 19:59:20.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:20 compute-0 podman[441362]: 2025-10-02 19:59:20.705921915 +0000 UTC m=+0.126044251 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:59:20 compute-0 podman[441363]: 2025-10-02 19:59:20.716917517 +0000 UTC m=+0.130605932 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:59:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 93 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 38 op/s
Oct 02 19:59:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 02 19:59:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 02 19:59:21 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 02 19:59:22 compute-0 ceph-mon[191910]: pgmap v1628: 321 pgs: 321 active+clean; 93 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 38 op/s
Oct 02 19:59:22 compute-0 ceph-mon[191910]: osdmap e130: 3 total, 3 up, 3 in
Oct 02 19:59:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 52 op/s
Oct 02 19:59:24 compute-0 nova_compute[355794]: 2025-10-02 19:59:24.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:24 compute-0 ceph-mon[191910]: pgmap v1630: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 52 op/s
Oct 02 19:59:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 70 op/s
Oct 02 19:59:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:25 compute-0 nova_compute[355794]: 2025-10-02 19:59:25.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:26 compute-0 ceph-mon[191910]: pgmap v1631: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 70 op/s
Oct 02 19:59:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Oct 02 19:59:27 compute-0 sshd-session[439236]: Received disconnect from 38.102.83.68 port 37098:11: disconnected by user
Oct 02 19:59:27 compute-0 sshd-session[439236]: Disconnected from user zuul 38.102.83.68 port 37098
Oct 02 19:59:27 compute-0 sshd-session[439233]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:59:27 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Oct 02 19:59:27 compute-0 systemd[1]: session-63.scope: Consumed 1.668s CPU time.
Oct 02 19:59:27 compute-0 systemd-logind[793]: Session 63 logged out. Waiting for processes to exit.
Oct 02 19:59:27 compute-0 systemd-logind[793]: Removed session 63.
Oct 02 19:59:28 compute-0 ceph-mon[191910]: pgmap v1632: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Oct 02 19:59:29 compute-0 nova_compute[355794]: 2025-10-02 19:59:29.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 41 op/s
Oct 02 19:59:29 compute-0 podman[157186]: time="2025-10-02T19:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:59:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:59:29 compute-0 podman[157186]: @ - - [02/Oct/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9055 "" "Go-http-client/1.1"
Oct 02 19:59:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 02 19:59:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 02 19:59:30 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 02 19:59:30 compute-0 ceph-mon[191910]: pgmap v1633: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 41 op/s
Oct 02 19:59:30 compute-0 ceph-mon[191910]: osdmap e131: 3 total, 3 up, 3 in
Oct 02 19:59:30 compute-0 nova_compute[355794]: 2025-10-02 19:59:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:31 compute-0 nova_compute[355794]: 2025-10-02 19:59:31.201 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435156.1985135, 6e556210-af07-4c5d-8558-2ba943af16a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:59:31 compute-0 nova_compute[355794]: 2025-10-02 19:59:31.202 2 INFO nova.compute.manager [-] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] VM Stopped (Lifecycle Event)
Oct 02 19:59:31 compute-0 nova_compute[355794]: 2025-10-02 19:59:31.232 2 DEBUG nova.compute.manager [None req-1d826c60-d8db-4b5b-b3a8-efe8cb677234 - - - - - -] [instance: 6e556210-af07-4c5d-8558-2ba943af16a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:59:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:31 compute-0 openstack_network_exporter[372736]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:31 compute-0 openstack_network_exporter[372736]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:31 compute-0 openstack_network_exporter[372736]: ERROR   19:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:59:31 compute-0 openstack_network_exporter[372736]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:59:31 compute-0 openstack_network_exporter[372736]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:59:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:59:32.313 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:59:32.315 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 19:59:32.316 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:32 compute-0 ceph-mon[191910]: pgmap v1635: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:32 compute-0 podman[441406]: 2025-10-02 19:59:32.75496453 +0000 UTC m=+0.167760370 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 19:59:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 19:59:34 compute-0 nova_compute[355794]: 2025-10-02 19:59:34.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:34 compute-0 ceph-mon[191910]: pgmap v1636: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Oct 02 19:59:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 204 B/s wr, 3 op/s
Oct 02 19:59:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:35 compute-0 nova_compute[355794]: 2025-10-02 19:59:35.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:36 compute-0 ceph-mon[191910]: pgmap v1637: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 204 B/s wr, 3 op/s
Oct 02 19:59:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 716 B/s wr, 5 op/s
Oct 02 19:59:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 02 19:59:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 02 19:59:37 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 02 19:59:38 compute-0 ceph-mon[191910]: pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 716 B/s wr, 5 op/s
Oct 02 19:59:38 compute-0 ceph-mon[191910]: osdmap e132: 3 total, 3 up, 3 in
Oct 02 19:59:39 compute-0 nova_compute[355794]: 2025-10-02 19:59:39.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 02 19:59:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.394440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180394540, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 769, "num_deletes": 250, "total_data_size": 963859, "memory_usage": 977864, "flush_reason": "Manual Compaction"}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180403750, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 638979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33128, "largest_seqno": 33896, "table_properties": {"data_size": 635524, "index_size": 1235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9130, "raw_average_key_size": 20, "raw_value_size": 628218, "raw_average_value_size": 1437, "num_data_blocks": 55, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435120, "oldest_key_time": 1759435120, "file_creation_time": 1759435180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9385 microseconds, and 4372 cpu microseconds.
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.403834) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 638979 bytes OK
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.403859) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.407288) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.407309) EVENT_LOG_v1 {"time_micros": 1759435180407302, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.407331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 959942, prev total WAL file size 959942, number of live WAL files 2.
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.408258) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(624KB)], [74(9879KB)]
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180408290, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10755267, "oldest_snapshot_seqno": -1}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5291 keys, 7770992 bytes, temperature: kUnknown
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180469534, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7770992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7737046, "index_size": 19628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 133918, "raw_average_key_size": 25, "raw_value_size": 7642756, "raw_average_value_size": 1444, "num_data_blocks": 812, "num_entries": 5291, "num_filter_entries": 5291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.469853) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7770992 bytes
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.472781) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.3 rd, 126.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(29.0) write-amplify(12.2) OK, records in: 5783, records dropped: 492 output_compression: NoCompression
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.472814) EVENT_LOG_v1 {"time_micros": 1759435180472799, "job": 42, "event": "compaction_finished", "compaction_time_micros": 61340, "compaction_time_cpu_micros": 28597, "output_level": 6, "num_output_files": 1, "total_output_size": 7770992, "num_input_records": 5783, "num_output_records": 5291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180473261, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435180477575, "job": 42, "event": "table_file_deletion", "file_number": 74}
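[editor's note] The job 42 compaction summary above can be sanity-checked by hand: write-amplify is bytes written per byte of fresh L0 input, and read-write-amplify counts every byte read and written against that same input. A quick back-of-the-envelope in Python, using only the rounded MB figures RocksDB printed (the small difference from the logged 12.2/29.0 comes from RocksDB using exact byte counts):

    # Sanity-check the amplification figures RocksDB logged for job 42.
    mb_in_l0, mb_in_l6, mb_out = 0.6, 9.6, 7.4    # from the "compacted to:" line
    write_amp = mb_out / mb_in_l0                 # bytes written per byte of new L0 data
    rw_amp = (mb_in_l0 + mb_in_l6 + mb_out) / mb_in_l0
    print(f"write-amplify      ~ {write_amp:.1f}")   # ~12.3, logged as 12.2
    print(f"read-write-amplify ~ {rw_amp:.1f}")      # ~29.3, logged as 29.0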
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.408142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.477888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.477902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.477905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.477907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-19:59:40.477909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 19:59:40 compute-0 ceph-mon[191910]: pgmap v1640: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 02 19:59:40 compute-0 nova_compute[355794]: 2025-10-02 19:59:40.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:40 compute-0 podman[441424]: 2025-10-02 19:59:40.687938256 +0000 UTC m=+0.106237304 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:59:40 compute-0 podman[441425]: 2025-10-02 19:59:40.744083129 +0000 UTC m=+0.155067483 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:59:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:59:42 compute-0 ceph-mon[191910]: pgmap v1641: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct 02 19:59:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:59:44 compute-0 nova_compute[355794]: 2025-10-02 19:59:44.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:44 compute-0 ceph-mon[191910]: pgmap v1642: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:59:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:59:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:45 compute-0 nova_compute[355794]: 2025-10-02 19:59:45.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 02 19:59:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 02 19:59:45 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
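[editor's note] The _set_new_cache_sizes values above are plain byte counts; converting them shows the monitor is working with roughly a 1 GiB cache and ~332/308 MiB sub-allocations. A one-line check:

    # Convert the ceph-mon cache allocations above from bytes to MiB.
    for name, b in {"cache_size": 1020054731, "inc_alloc": 348127232,
                    "full_alloc": 348127232, "kv_alloc": 322961408}.items():
        print(f"{name}: {b / 2**20:.1f} MiB")   # ~972.8, 332.0, 332.0, 308.0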
Oct 02 19:59:45 compute-0 podman[441469]: 2025-10-02 19:59:45.743126074 +0000 UTC m=+0.162007167 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:59:45 compute-0 podman[441470]: 2025-10-02 19:59:45.764642606 +0000 UTC m=+0.172114616 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:59:46 compute-0 ceph-mon[191910]: pgmap v1643: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Oct 02 19:59:46 compute-0 ceph-mon[191910]: osdmap e133: 3 total, 3 up, 3 in
Oct 02 19:59:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Oct 02 19:59:48 compute-0 ceph-mon[191910]: pgmap v1645: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Oct 02 19:59:48 compute-0 podman[441507]: 2025-10-02 19:59:48.712921447 +0000 UTC m=+0.120661598 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:59:48 compute-0 podman[441506]: 2025-10-02 19:59:48.735788894 +0000 UTC m=+0.150763598 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:59:48 compute-0 podman[441508]: 2025-10-02 19:59:48.754259445 +0000 UTC m=+0.155015491 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:59:49 compute-0 nova_compute[355794]: 2025-10-02 19:59:49.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Oct 02 19:59:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:50 compute-0 nova_compute[355794]: 2025-10-02 19:59:50.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:50 compute-0 ceph-mon[191910]: pgmap v1646: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Oct 02 19:59:50 compute-0 sudo[441566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:50 compute-0 sudo[441566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:50 compute-0 sudo[441566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:50 compute-0 sudo[441603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:59:50 compute-0 sudo[441603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:51 compute-0 sudo[441603]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:51 compute-0 podman[441590]: 2025-10-02 19:59:51.013114913 +0000 UTC m=+0.120174155 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., version=9.6)
Oct 02 19:59:51 compute-0 podman[441591]: 2025-10-02 19:59:51.025074761 +0000 UTC m=+0.126234327 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
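[editor's note] Each health_status event above is emitted when podman runs the container's configured healthcheck (the 'test' entry in config_data). The same state can be queried on demand; a minimal sketch via subprocess, assuming a recent podman where the inspect field is .State.Health.Status (older releases expose .State.Healthcheck.Status instead):

    import subprocess

    def health(name: str) -> str:
        # Ask podman for the container's last recorded health state.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    for c in ["podman_exporter", "node_exporter", "ovn_controller"]:
        print(c, health(c))   # expect "healthy", matching the events above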
Oct 02 19:59:51 compute-0 sudo[441658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:51 compute-0 sudo[441658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:51 compute-0 sudo[441658]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:51 compute-0 sudo[441683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 19:59:51 compute-0 sudo[441683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:51 compute-0 sudo[441683]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:59:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 133487a4-bd3e-42e5-9676-d685940bd683 does not exist
Oct 02 19:59:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9894c5e1-7a76-4eb2-addc-e23f072eca65 does not exist
Oct 02 19:59:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c6ceac6a-86fc-4379-843a-3e5ef87faf8d does not exist
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 19:59:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:59:52 compute-0 sudo[441742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:52 compute-0 sudo[441742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:52 compute-0 sudo[441742]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:52 compute-0 sudo[441767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:59:52 compute-0 sudo[441767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:52 compute-0 sudo[441767]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:52 compute-0 sudo[441792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:52 compute-0 sudo[441792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:52 compute-0 sudo[441792]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:52 compute-0 sudo[441817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 19:59:52 compute-0 sudo[441817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:52 compute-0 ceph-mon[191910]: pgmap v1647: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 19:59:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.255126132 +0000 UTC m=+0.095374166 container create 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:59:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.216228928 +0000 UTC m=+0.056477012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:59:53 compute-0 systemd[1]: Started libpod-conmon-31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2.scope.
Oct 02 19:59:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.432984889 +0000 UTC m=+0.273232973 container init 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.453617657 +0000 UTC m=+0.293865691 container start 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.460728826 +0000 UTC m=+0.300976900 container attach 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:59:53 compute-0 kind_moser[441896]: 167 167
Oct 02 19:59:53 compute-0 systemd[1]: libpod-31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2.scope: Deactivated successfully.
Oct 02 19:59:53 compute-0 conmon[441896]: conmon 31295e0b539c4c7bdc21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2.scope/container/memory.events
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.469328225 +0000 UTC m=+0.309576249 container died 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:59:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6605cb2949fe0c6d839c5cad78c10dcfb47e8d1bdd407c03887b4dad170eb625-merged.mount: Deactivated successfully.
Oct 02 19:59:53 compute-0 podman[441880]: 2025-10-02 19:59:53.549194148 +0000 UTC m=+0.389442152 container remove 31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 19:59:53 compute-0 nova_compute[355794]: 2025-10-02 19:59:53.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:53 compute-0 nova_compute[355794]: 2025-10-02 19:59:53.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:59:53 compute-0 systemd[1]: libpod-conmon-31295e0b539c4c7bdc21762496ce31638706c71108b88a838589e2fbd712f5b2.scope: Deactivated successfully.
Oct 02 19:59:53 compute-0 podman[441920]: 2025-10-02 19:59:53.875705076 +0000 UTC m=+0.107721324 container create 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Oct 02 19:59:53 compute-0 podman[441920]: 2025-10-02 19:59:53.840255324 +0000 UTC m=+0.072271632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:59:53 compute-0 systemd[1]: Started libpod-conmon-45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9.scope.
Oct 02 19:59:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
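[editor's note] The xfs warnings above are the kernel noting that these filesystems carry 32-bit inode timestamps, so the latest representable time is 0x7fffffff seconds after the Unix epoch. The cutoff is easy to verify:

    from datetime import datetime, timezone
    # 0x7fffffff is the largest 32-bit signed second count since the epoch.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00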
Oct 02 19:59:54 compute-0 podman[441920]: 2025-10-02 19:59:54.057270042 +0000 UTC m=+0.289286350 container init 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:59:54 compute-0 podman[441920]: 2025-10-02 19:59:54.08129461 +0000 UTC m=+0.313310868 container start 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 02 19:59:54 compute-0 podman[441920]: 2025-10-02 19:59:54.087788663 +0000 UTC m=+0.319804961 container attach 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:59:54 compute-0 nova_compute[355794]: 2025-10-02 19:59:54.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:54 compute-0 ceph-mon[191910]: pgmap v1648: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 19:59:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 02 19:59:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 02 19:59:55 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 02 19:59:55 compute-0 sshd-session[441959]: Accepted publickey for zuul from 38.102.83.68 port 54128 ssh2: RSA SHA256:ehN6Fkjxbk4ZZsnbj2qNHppfucDBFar0ERCGpW0xI0M
Oct 02 19:59:55 compute-0 systemd-logind[793]: New session 64 of user zuul.
Oct 02 19:59:55 compute-0 systemd[1]: Started Session 64 of User zuul.
Oct 02 19:59:55 compute-0 sshd-session[441959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:59:55 compute-0 nova_compute[355794]: 2025-10-02 19:59:55.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:55 compute-0 awesome_matsumoto[441937]: --> passed data devices: 0 physical, 3 LVM
Oct 02 19:59:55 compute-0 awesome_matsumoto[441937]: --> relative data size: 1.0
Oct 02 19:59:55 compute-0 awesome_matsumoto[441937]: --> All data devices are unavailable
Oct 02 19:59:55 compute-0 systemd[1]: libpod-45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9.scope: Deactivated successfully.
Oct 02 19:59:55 compute-0 systemd[1]: libpod-45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9.scope: Consumed 1.460s CPU time.
Oct 02 19:59:55 compute-0 podman[441920]: 2025-10-02 19:59:55.611248864 +0000 UTC m=+1.843265082 container died 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 19:59:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-98214f913df8d393b3fc77273c4502e054e2d66f334ef76a8f967678e30a357e-merged.mount: Deactivated successfully.
Oct 02 19:59:55 compute-0 podman[441920]: 2025-10-02 19:59:55.751567073 +0000 UTC m=+1.983583301 container remove 45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 19:59:55 compute-0 systemd[1]: libpod-conmon-45cc62788e06c93cf7fa5e36f2809dcb2c7c700ad0c4fbd9ab2b7af9d564ffd9.scope: Deactivated successfully.
Oct 02 19:59:55 compute-0 sudo[441817]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:55 compute-0 sudo[442026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:55 compute-0 sudo[442026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:55 compute-0 sudo[442026]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:56 compute-0 sudo[442078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:59:56 compute-0 sudo[442078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:56 compute-0 sudo[442078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:56 compute-0 sudo[442127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:56 compute-0 sudo[442127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:56 compute-0 sudo[442127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:56 compute-0 sudo[442165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 19:59:56 compute-0 sudo[442165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
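[editor's note] The sequence above is cephadm's normal reconcile loop rather than a failure: lvm batch reported "0 physical, 3 LVM" data devices and declared them all unavailable because the three LVs already carry OSDs (the cluster shows 3 up, 3 in), so cephadm re-runs ceph-volume with lvm list to read back what exists. That command's --format json output maps OSD ids to LV entries; a small sketch of consuming it (the file path is hypothetical, cephadm reads the JSON from the container's stdout):

    import json

    # Parse `ceph-volume lvm list --format json` output captured to a file.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, devices in osds.items():
        for dev in devices:
            tags = dev.get("tags", {})
            print(osd_id, dev.get("lv_path"), tags.get("ceph.osd_fsid"))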
Oct 02 19:59:56 compute-0 ceph-mon[191910]: pgmap v1649: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 02 19:59:56 compute-0 ceph-mon[191910]: osdmap e134: 3 total, 3 up, 3 in
Oct 02 19:59:56 compute-0 sudo[442256]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuibwjritdfhqtmmgupwgvkchsvnxqot ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759435195.7293274-57882-119045625717822/AnsiballZ_command.py'
Oct 02 19:59:56 compute-0 sudo[442256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:59:56 compute-0 python3[442265]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:59:56 compute-0 sudo[442256]: pam_unix(sudo:session): session closed for user root
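[editor's note] The Ansible task above shells out to podman and greps for the exporter; the same check works without a shell pipeline. A minimal equivalent, assuming podman is on PATH:

    import subprocess

    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        if "node_exporter" in line:
            print(line)   # e.g. "node_exporter Up 2 hours (healthy)"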
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.061167551 +0000 UTC m=+0.112748398 container create 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.028267996 +0000 UTC m=+0.079848953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:59:57 compute-0 systemd[1]: Started libpod-conmon-8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256.scope.
Oct 02 19:59:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.228261232 +0000 UTC m=+0.279842109 container init 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.247893414 +0000 UTC m=+0.299474281 container start 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.255855345 +0000 UTC m=+0.307436262 container attach 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:59:57 compute-0 cranky_haibt[442352]: 167 167
Oct 02 19:59:57 compute-0 systemd[1]: libpod-8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256.scope: Deactivated successfully.
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.262994555 +0000 UTC m=+0.314575432 container died 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 19:59:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 02 19:59:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c18e883d1ab0ad54e180d72b860e50b7c16d208e56b5fb0c4914b3f09d6ba1-merged.mount: Deactivated successfully.
Oct 02 19:59:57 compute-0 podman[442312]: 2025-10-02 19:59:57.346193176 +0000 UTC m=+0.397774053 container remove 8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 02 19:59:57 compute-0 systemd[1]: libpod-conmon-8025994f84f6ea4be45b87307ad6d7f67888aa8a2dfda998d7c3f18f8ac8f256.scope: Deactivated successfully.
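The six podman events above (create, init, start, attach, died, remove, bracketed by the matching libpod and libpod-conmon systemd scopes) are the complete lifecycle of one short-lived cephadm helper container. A minimal sketch of watching for the same sequence from the host with `podman events --format json`; the image filter value is copied from the log, everything else is illustrative:

```python
import json
import subprocess

# Stream podman lifecycle events as JSON, one object per line.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "image=quay.io/ceph/ceph"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # Fields mirror what the journal shows: Status is the phase
    # (create/init/start/attach/died/remove); Name/ID identify the container.
    print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])
```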
Oct 02 19:59:57 compute-0 nova_compute[355794]: 2025-10-02 19:59:57.579 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:57 compute-0 podman[442375]: 2025-10-02 19:59:57.689053959 +0000 UTC m=+0.105146405 container create acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 19:59:57 compute-0 podman[442375]: 2025-10-02 19:59:57.652558609 +0000 UTC m=+0.068651095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 19:59:57 compute-0 systemd[1]: Started libpod-conmon-acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b.scope.
Oct 02 19:59:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f10af69eba522134cfdb34350c42c2873383a308a2390d7757664bb66ed2de8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f10af69eba522134cfdb34350c42c2873383a308a2390d7757664bb66ed2de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f10af69eba522134cfdb34350c42c2873383a308a2390d7757664bb66ed2de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f10af69eba522134cfdb34350c42c2873383a308a2390d7757664bb66ed2de8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 19:59:57 compute-0 podman[442375]: 2025-10-02 19:59:57.852923245 +0000 UTC m=+0.269015681 container init acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 19:59:57 compute-0 podman[442375]: 2025-10-02 19:59:57.884487184 +0000 UTC m=+0.300579600 container start acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:59:57 compute-0 podman[442375]: 2025-10-02 19:59:57.889847746 +0000 UTC m=+0.305940202 container attach acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 19:59:58 compute-0 ceph-mon[191910]: pgmap v1651: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 02 19:59:58 compute-0 nova_compute[355794]: 2025-10-02 19:59:58.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:58 compute-0 nova_compute[355794]: 2025-10-02 19:59:58.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:59:58 compute-0 nova_compute[355794]: 2025-10-02 19:59:58.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]: {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     "0": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "devices": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "/dev/loop3"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             ],
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_name": "ceph_lv0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_size": "21470642176",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "name": "ceph_lv0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "tags": {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_name": "ceph",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.crush_device_class": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.encrypted": "0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_id": "0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.vdo": "0"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             },
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "vg_name": "ceph_vg0"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         }
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     ],
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     "1": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "devices": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "/dev/loop4"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             ],
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_name": "ceph_lv1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_size": "21470642176",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "name": "ceph_lv1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "tags": {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_name": "ceph",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.crush_device_class": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.encrypted": "0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_id": "1",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.vdo": "0"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             },
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "vg_name": "ceph_vg1"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         }
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     ],
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     "2": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "devices": [
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "/dev/loop5"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             ],
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_name": "ceph_lv2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_size": "21470642176",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "name": "ceph_lv2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "tags": {
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.cluster_name": "ceph",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.crush_device_class": "",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.encrypted": "0",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osd_id": "2",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:                 "ceph.vdo": "0"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             },
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "type": "block",
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:             "vg_name": "ceph_vg2"
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:         }
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]:     ]
Oct 02 19:59:58 compute-0 eloquent_solomon[442391]: }
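The JSON block emitted by eloquent_solomon appears to be `ceph-volume lvm list --format json`: a map of OSD id to a list of logical volumes, each carrying its `ceph.*` metadata twice, as the flattened `lv_tags` string and as the parsed `tags` object. A minimal sketch of reducing it to an osd_id-to-device summary (the input file name is hypothetical; the structure is exactly what the log shows):

```python
import json

# Load the report captured above (path is hypothetical).
with open("ceph-volume-lvm-list.json") as f:
    report = json.load(f)

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        # lv_tags is the same data flattened into "k=v,k=v,..." form;
        # parsing it should agree with the structured "tags" object.
        flat = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
        assert flat["ceph.osd_fsid"] == tags["ceph.osd_fsid"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")
```

Run against the three entries above, this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5, all in cluster 6019f664-a1c2-5955-8391-692cb79a59f9.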
Oct 02 19:59:58 compute-0 systemd[1]: libpod-acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b.scope: Deactivated successfully.
Oct 02 19:59:58 compute-0 podman[442400]: 2025-10-02 19:59:58.97580272 +0000 UTC m=+0.065535823 container died acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 19:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f10af69eba522134cfdb34350c42c2873383a308a2390d7757664bb66ed2de8-merged.mount: Deactivated successfully.
Oct 02 19:59:59 compute-0 podman[442400]: 2025-10-02 19:59:59.089937823 +0000 UTC m=+0.179670936 container remove acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 19:59:59 compute-0 systemd[1]: libpod-conmon-acc74d64de0f5f60d9c30c5deefbad48e3b094b4c78bca8a7a43675b91572c1b.scope: Deactivated successfully.
Oct 02 19:59:59 compute-0 nova_compute[355794]: 2025-10-02 19:59:59.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:59 compute-0 sudo[442165]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:59 compute-0 nova_compute[355794]: 2025-10-02 19:59:59.180 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:59:59 compute-0 nova_compute[355794]: 2025-10-02 19:59:59.180 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:59:59 compute-0 nova_compute[355794]: 2025-10-02 19:59:59.181 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:59:59 compute-0 nova_compute[355794]: 2025-10-02 19:59:59.181 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:59:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 409 B/s wr, 3 op/s
Oct 02 19:59:59 compute-0 sudo[442414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:59 compute-0 sudo[442414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:59 compute-0 sudo[442414]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:59 compute-0 sudo[442439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 19:59:59 compute-0 sudo[442439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:59 compute-0 sudo[442439]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:59 compute-0 sudo[442464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 19:59:59 compute-0 sudo[442464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 19:59:59 compute-0 sudo[442464]: pam_unix(sudo:session): session closed for user root
Oct 02 19:59:59 compute-0 podman[157186]: time="2025-10-02T19:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:59:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 19:59:59 compute-0 podman[157186]: @ - - [02/Oct/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9050 "" "Go-http-client/1.1"
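The two GETs logged by the podman API service (pid 157186) hit libpod's `containers/json` and `containers/stats` endpoints. A minimal sketch of issuing the first query by hand via curl's `--unix-socket` support; the socket path assumes the default rootful API service, and the dummy `http://d` host is ignored when talking over a unix socket:

```python
import json
import subprocess

# Default rootful podman API socket (an assumption; rootless lives under
# $XDG_RUNTIME_DIR/podman/podman.sock).
SOCK = "/run/podman/podman.sock"
# Same endpoint and API version as in the journal line above.
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

out = subprocess.run(
    ["curl", "-s", "--unix-socket", SOCK, URL],
    check=True, capture_output=True, text=True,
).stdout
for ctr in json.loads(out):
    print(ctr.get("Id", "")[:12], ctr.get("Names"), ctr.get("State"))
```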
Oct 02 19:59:59 compute-0 sudo[442489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 19:59:59 compute-0 sudo[442489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
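The sudo COMMAND above shows the pattern cephadm uses for every device query: the checksummed cephadm copy under /var/lib/ceph/<fsid>/, the pinned image digest, a timeout, the cluster fsid, then `--` followed by the ceph-volume arguments, all executed inside one of the throwaway podman containers seen in this journal. A minimal sketch that rebuilds that exact invocation (paths, digest and fsid copied from the log; nothing else is assumed):

```python
import subprocess

FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

# Everything after "--" is passed through to ceph-volume inside the
# helper container; the JSON it prints is what thirsty_wozniak emits below.
cmd = ["sudo", "python3", CEPHADM,
       "--image", IMAGE, "--timeout", "895",
       "ceph-volume", "--fsid", FSID, "--",
       "raw", "list", "--format", "json"]
print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
```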
Oct 02 20:00:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:00 compute-0 ceph-mon[191910]: pgmap v1652: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 409 B/s wr, 3 op/s
Oct 02 20:00:00 compute-0 podman[442552]: 2025-10-02 20:00:00.496833226 +0000 UTC m=+0.084313262 container create 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:00:00 compute-0 podman[442552]: 2025-10-02 20:00:00.45635343 +0000 UTC m=+0.043833526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:00:00 compute-0 systemd[1]: Started libpod-conmon-6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08.scope.
Oct 02 20:00:00 compute-0 nova_compute[355794]: 2025-10-02 20:00:00.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:00:00 compute-0 podman[442552]: 2025-10-02 20:00:00.63320944 +0000 UTC m=+0.220689466 container init 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:00:00 compute-0 podman[442552]: 2025-10-02 20:00:00.649018281 +0000 UTC m=+0.236498297 container start 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:00:00 compute-0 podman[442552]: 2025-10-02 20:00:00.65425196 +0000 UTC m=+0.241732046 container attach 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:00:00 compute-0 heuristic_feistel[442568]: 167 167
Oct 02 20:00:00 compute-0 systemd[1]: libpod-6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08.scope: Deactivated successfully.
Oct 02 20:00:00 compute-0 conmon[442568]: conmon 6a1bca87ea8e34181823 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08.scope/container/memory.events
Oct 02 20:00:00 compute-0 podman[442573]: 2025-10-02 20:00:00.724669151 +0000 UTC m=+0.043467236 container died 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e170c65ad6e7a57363c56e32cf8d78d3797f296e579914514a37295ca0af30d1-merged.mount: Deactivated successfully.
Oct 02 20:00:00 compute-0 podman[442573]: 2025-10-02 20:00:00.799244743 +0000 UTC m=+0.118042808 container remove 6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_feistel, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:00:00 compute-0 systemd[1]: libpod-conmon-6a1bca87ea8e34181823ac940cff849ae5b1ee621ddf7fc3d8225618502ced08.scope: Deactivated successfully.
Oct 02 20:00:01 compute-0 podman[442593]: 2025-10-02 20:00:01.159762995 +0000 UTC m=+0.106976074 container create 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.199 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
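The network_info blob Nova writes back to the instance cache is a list of VIFs, each with a nested network / subnets / ips structure in which the floating address hangs off the fixed IP it maps to. A minimal sketch of flattening the entry logged above to device / fixed / floating triples (trimmed to only the fields used):

```python
# network_info as logged for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77,
# reduced to the fields this sketch reads.
network_info = [{
    "id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
    "address": "fa:16:3e:6b:e8:fe",
    "devname": "tap24e0cf3f-16",
    "network": {"subnets": [{"ips": [{
        "address": "192.168.0.37", "type": "fixed",
        "floating_ips": [{"address": "192.168.122.205",
                          "type": "floating"}],
    }]}]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(f"{vif['devname']} ({vif['address']}): "
                  f"fixed={ip['address']} floating={floats or '-'}")
```

For this instance that yields tap24e0cf3f-16 with fixed 192.168.0.37 and floating 192.168.122.205.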
Oct 02 20:00:01 compute-0 podman[442593]: 2025-10-02 20:00:01.11816789 +0000 UTC m=+0.065381019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:00:01 compute-0 systemd[1]: Started libpod-conmon-926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9.scope.
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.232 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.233 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.233 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.234 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84c7693d539bff48417825b582fce26b720a8e732a41d0070107fba44a0bb48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:00:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84c7693d539bff48417825b582fce26b720a8e732a41d0070107fba44a0bb48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84c7693d539bff48417825b582fce26b720a8e732a41d0070107fba44a0bb48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84c7693d539bff48417825b582fce26b720a8e732a41d0070107fba44a0bb48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.295 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.296 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.297 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.297 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.298 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:01 compute-0 podman[442593]: 2025-10-02 20:00:01.341685091 +0000 UTC m=+0.288898200 container init 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:00:01 compute-0 podman[442593]: 2025-10-02 20:00:01.361099257 +0000 UTC m=+0.308312326 container start 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:00:01 compute-0 podman[442593]: 2025-10-02 20:00:01.367783614 +0000 UTC m=+0.314996653 container attach 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:00:01 compute-0 openstack_network_exporter[372736]: ERROR   20:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:00:01 compute-0 openstack_network_exporter[372736]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:01 compute-0 openstack_network_exporter[372736]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:01 compute-0 openstack_network_exporter[372736]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:00:01 compute-0 openstack_network_exporter[372736]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:00:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:00:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370586355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.815 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
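Nova's resource audit shells out to `ceph df --format=json` as the openstack client (the mon audit lines above show the dispatched {"prefix": "df"} command) to size the RBD-backed disk pool. A minimal sketch of the same probe; the `stats` keys are the usual ones in reef's output but should be treated as assumptions:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out).get("stats", {})
GiB = 1024 ** 3
# total_bytes / total_avail_bytes are assumptions about the schema; they
# correspond to the "60 GiB / 60 GiB avail" figure in the pgmap lines.
print(f"total={stats.get('total_bytes', 0) / GiB:.1f} GiB "
      f"avail={stats.get('total_avail_bytes', 0) / GiB:.1f} GiB")
```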
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.969 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.970 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:00:01 compute-0 nova_compute[355794]: 2025-10-02 20:00:01.970 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:00:02 compute-0 ceph-mon[191910]: pgmap v1653: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3370586355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]: {
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_id": 1,
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "type": "bluestore"
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     },
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_id": 2,
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "type": "bluestore"
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     },
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_id": 0,
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:         "type": "bluestore"
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]:     }
Oct 02 20:00:02 compute-0 thirsty_wozniak[442609]: }
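thirsty_wozniak's output is the `ceph-volume raw list --format json` side of the same inventory: keyed by osd_uuid rather than OSD id, and reporting the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) rather than the LV path. Each osd_uuid here equals the ceph.osd_fsid tag in the lvm report above, so the two can be joined. A minimal sketch (file names hypothetical):

```python
import json

with open("ceph-volume-lvm-list.json") as f:
    lvm = json.load(f)          # osd_id -> [lv records]
with open("ceph-volume-raw-list.json") as f:
    raw = json.load(f)          # osd_uuid -> raw record

# Index the LVM report by osd_fsid so it can be joined on osd_uuid.
by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
           for lvs in lvm.values() for lv in lvs}

for osd_uuid, rec in raw.items():
    lv = by_fsid.get(osd_uuid)
    # /dev/mapper/ceph_vg1-ceph_lv1 and /dev/ceph_vg1/ceph_lv1 are two
    # names for the same device node.
    print(f"osd.{rec['osd_id']} ({rec['type']}): {rec['device']}"
          + (f" == {lv['lv_path']}" if lv else " (no LVM match)"))
```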
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.521 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.522 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3777MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.522 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.523 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:02 compute-0 systemd[1]: libpod-926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9.scope: Deactivated successfully.
Oct 02 20:00:02 compute-0 systemd[1]: libpod-926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9.scope: Consumed 1.132s CPU time.
Oct 02 20:00:02 compute-0 podman[442665]: 2025-10-02 20:00:02.623295464 +0000 UTC m=+0.061796753 container died 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.624 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.625 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.625 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.639 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.657 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.657 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
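The inventory Nova pushes to placement carries total, reserved and allocation_ratio per resource class; placement derives consumable capacity as (total - reserved) * allocation_ratio. A worked sketch using exactly the figures logged for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 (the formula is placement's documented behavior, stated here from memory):

```python
# Inventory exactly as logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement treats (total - reserved) * allocation_ratio as the
    # capacity available for allocations of this resource class.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```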
Oct 02 20:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f84c7693d539bff48417825b582fce26b720a8e732a41d0070107fba44a0bb48-merged.mount: Deactivated successfully.
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.679 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.702 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:00:02 compute-0 podman[442665]: 2025-10-02 20:00:02.727208516 +0000 UTC m=+0.165709735 container remove 926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wozniak, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 20:00:02 compute-0 nova_compute[355794]: 2025-10-02 20:00:02.744 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:02 compute-0 systemd[1]: libpod-conmon-926f2646974448d163e48450e902f6125bf09981b16cccea204ae2fc110a9fc9.scope: Deactivated successfully.
Oct 02 20:00:02 compute-0 sudo[442489]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:00:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:00:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:00:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:00:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e6a11d95-90a2-439c-b339-0c378103def8 does not exist
Oct 02 20:00:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2eceec9e-71d7-411d-9200-e5dce5f84b6a does not exist
Oct 02 20:00:02 compute-0 sudo[442679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:00:02 compute-0 sudo[442679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:00:03 compute-0 sudo[442679]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:03 compute-0 sudo[442727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:00:03 compute-0 sudo[442727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:00:03 compute-0 sudo[442727]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:03 compute-0 podman[442722]: 2025-10-02 20:00:03.169563893 +0000 UTC m=+0.150400269 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 02 20:00:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:00:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211549302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.245 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
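nova_compute gathers Ceph capacity by shelling out to exactly the command logged above. A hedged sketch of the same call, assuming the usual `ceph df --format=json` layout with a top-level `stats` object (the JSON itself is not shown in this log):

```python
# Sketch: run the "ceph df" command nova_compute logs above and read the
# cluster totals. The "stats"/"total_avail_bytes" key layout is an
# assumption about the ceph df JSON schema, not taken from this log.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]
print("bytes avail:", stats["total_avail_bytes"])
```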
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.262 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.284 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.286 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.286 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
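The inventory reported to placement a few lines up becomes schedulable capacity as (total - reserved) * allocation_ratio. A worked check against the logged numbers:

```python
# Worked example: schedulable capacity from the inventory logged above,
# capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```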
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:00:03
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.628 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.628 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:03 compute-0 nova_compute[355794]: 2025-10-02 20:00:03.629 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:00:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:00:03 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/211549302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:00:03 compute-0 ceph-mon[191910]: pgmap v1654: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:04 compute-0 nova_compute[355794]: 2025-10-02 20:00:04.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:00:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:00:04 compute-0 nova_compute[355794]: 2025-10-02 20:00:04.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:05 compute-0 sudo[442940]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qobzsqootfokhmmmpwyfcmqggbrhirnx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759435204.168342-58046-208200112394199/AnsiballZ_command.py'
Oct 02 20:00:05 compute-0 sudo[442940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:00:05 compute-0 python3[442942]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
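The Ansible task above shells out to podman and greps for the container. The same probe in Python, using only values visible in the log line:

```python
# Sketch of the health probe the Ansible task above runs: list all
# containers and confirm podman_exporter is present and up.
import subprocess

fmt = "{{.Names}} {{.Status}}"
out = subprocess.run(["podman", "ps", "-a", "--format", fmt],
                     capture_output=True, text=True, check=True).stdout
line = next((l for l in out.splitlines() if "podman_exporter" in l), None)
print(line)  # e.g. "podman_exporter Up 2 hours (healthy)"
print("running:", bool(line and " Up " in f" {line} "))
```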
Oct 02 20:00:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:05 compute-0 sudo[442940]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
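The mon cache figures in the line above are raw byte counts; converting them:

```python
# The _set_new_cache_sizes values logged above, in MiB.
for name, n in {"cache_size": 1020054731,
                "inc_alloc": 348127232,
                "full_alloc": 348127232,
                "kv_alloc": 322961408}.items():
    print(f"{name}: {n / 2**20:.1f} MiB")
# cache_size ~972.8 MiB; inc/full_alloc exactly 332.0 MiB; kv_alloc 308.0 MiB
```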
Oct 02 20:00:05 compute-0 nova_compute[355794]: 2025-10-02 20:00:05.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:05 compute-0 nova_compute[355794]: 2025-10-02 20:00:05.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:06 compute-0 ceph-mon[191910]: pgmap v1655: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:08 compute-0 ceph-mon[191910]: pgmap v1656: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:09 compute-0 nova_compute[355794]: 2025-10-02 20:00:09.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:10 compute-0 ceph-mon[191910]: pgmap v1657: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:10 compute-0 nova_compute[355794]: 2025-10-02 20:00:10.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:11 compute-0 podman[442983]: 2025-10-02 20:00:11.718225875 +0000 UTC m=+0.124564352 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:00:11 compute-0 podman[442982]: 2025-10-02 20:00:11.72407876 +0000 UTC m=+0.141144712 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:00:12 compute-0 ceph-mon[191910]: pgmap v1658: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:00:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
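For every pool in the autoscaler block above, the logged pg target equals space fraction x bias x 300. The 300 is treated here as an observed cluster constant (presumably target PGs per OSD x OSD count over replication, none of which this log states). A quick verification against three of the logged rows:

```python
# Check the pg_autoscaler arithmetic from the lines above:
# pg_target = space_fraction * bias * 300, then quantized to a power of two.
pools = {  # name: (fraction of space used, bias, logged pg target)
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "vms":                (0.0005513950275118838, 1.0, 0.16541850825356513),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
}
for name, (frac, bias, logged) in pools.items():
    assert abs(frac * bias * 300 - logged) < 1e-12, name
print("all logged pg targets match fraction * bias * 300")
```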
Oct 02 20:00:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:14 compute-0 nova_compute[355794]: 2025-10-02 20:00:14.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:14 compute-0 ceph-mon[191910]: pgmap v1659: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:15 compute-0 sudo[443198]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womfyuyonxkzeyfjgfoctciufcfiaqqp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759435214.2733881-58199-116118832091667/AnsiballZ_command.py'
Oct 02 20:00:15 compute-0 sudo[443198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:00:15 compute-0 python3[443200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 20:00:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:15 compute-0 sudo[443198]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:15 compute-0 nova_compute[355794]: 2025-10-02 20:00:15.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:16 compute-0 ceph-mon[191910]: pgmap v1660: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:16 compute-0 podman[443240]: 2025-10-02 20:00:16.751929703 +0000 UTC m=+0.167228036 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Oct 02 20:00:16 compute-0 podman[443241]: 2025-10-02 20:00:16.765481103 +0000 UTC m=+0.181363661 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, distribution-scope=public)
Oct 02 20:00:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:00:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8068 writes, 31K keys, 8068 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8068 writes, 1863 syncs, 4.33 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1254 writes, 4156 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 3.24 MB, 0.01 MB/s
                                            Interval WAL: 1254 writes, 527 syncs, 2.38 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
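The derived figures in this RocksDB dump are plain ratios of the cumulative counters; checking two of them for this OSD:

```python
# Reproduce two derived numbers from the DB Stats dump above.
writes, syncs = 8068, 1863
print(f"writes per sync: {writes / syncs:.2f}")  # 4.33, as logged

ingest_gb, uptime_s = 0.03, 3000.1
print(f"ingest: {ingest_gb * 1024 / uptime_s:.2f} MB/s")  # ~0.01 MB/s, as logged
```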
Oct 02 20:00:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:18 compute-0 ceph-mon[191910]: pgmap v1661: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:19 compute-0 nova_compute[355794]: 2025-10-02 20:00:19.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:19 compute-0 podman[443281]: 2025-10-02 20:00:19.683334845 +0000 UTC m=+0.105838114 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:00:19 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:00:19 compute-0 podman[443282]: 2025-10-02 20:00:19.703292615 +0000 UTC m=+0.124990513 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 20:00:19 compute-0 podman[443283]: 2025-10-02 20:00:19.750246943 +0000 UTC m=+0.155960226 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 20:00:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:00:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/157206207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:00:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:00:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/157206207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:00:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:20 compute-0 ceph-mon[191910]: pgmap v1662: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/157206207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:00:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/157206207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:00:20 compute-0 nova_compute[355794]: 2025-10-02 20:00:20.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:21 compute-0 podman[443341]: 2025-10-02 20:00:21.742821102 +0000 UTC m=+0.143942967 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:00:21 compute-0 podman[443340]: 2025-10-02 20:00:21.770137808 +0000 UTC m=+0.177914130 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Oct 02 20:00:22 compute-0 ceph-mon[191910]: pgmap v1663: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:00:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8518 writes, 32K keys, 8518 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8518 writes, 1987 syncs, 4.29 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 891 writes, 2340 keys, 891 commit groups, 1.0 writes per commit group, ingest: 1.49 MB, 0.00 MB/s
                                            Interval WAL: 891 writes, 405 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:00:24 compute-0 nova_compute[355794]: 2025-10-02 20:00:24.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:24 compute-0 ceph-mon[191910]: pgmap v1664: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:25 compute-0 nova_compute[355794]: 2025-10-02 20:00:25.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:26 compute-0 ceph-mon[191910]: pgmap v1665: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:28 compute-0 ceph-mon[191910]: pgmap v1666: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:29 compute-0 nova_compute[355794]: 2025-10-02 20:00:29.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:29 compute-0 podman[157186]: time="2025-10-02T20:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:00:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:00:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9073 "" "Go-http-client/1.1"
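The two access-log lines above are libpod REST calls arriving over podman's unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock per the podman_exporter config earlier in the log). A sketch issuing the same containers/json query from Python:

```python
# Sketch of the libpod REST call logged above, sent over podman's unix
# socket. Socket path taken from the podman_exporter environment in this log.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```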
Oct 02 20:00:30 compute-0 sudo[443555]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwqlnoghvklydyysfguhawonxchmhxli ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759435229.1976864-58416-44720371930790/AnsiballZ_command.py'
Oct 02 20:00:30 compute-0 sudo[443555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:00:30 compute-0 python3[443557]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 20:00:30 compute-0 sudo[443555]: pam_unix(sudo:session): session closed for user root
Oct 02 20:00:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:30 compute-0 ceph-mon[191910]: pgmap v1667: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:30 compute-0 nova_compute[355794]: 2025-10-02 20:00:30.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:00:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7023 writes, 27K keys, 7023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7023 writes, 1462 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 644 writes, 1936 keys, 644 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s
                                            Interval WAL: 644 writes, 290 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:00:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: ERROR   20:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:00:31 compute-0 openstack_network_exporter[372736]: 
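The exporter errors above come from missing control sockets: ovn-northd does not run on a compute-only node, so its socket is legitimately absent. A sketch that checks the usual socket locations (the glob patterns are assumptions about typical OVS/OVN runtime paths, not taken from this log):

```python
# Check for the control sockets the exporter above fails to find.
# Glob patterns are assumed typical paths; adjust for the deployment.
import glob

for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl",
                "/var/run/openvswitch/ovs-vswitchd.*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "missing")
```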
Oct 02 20:00:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 20:00:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:00:32.314 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:00:32.316 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:00:32.317 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:00:32 compute-0 ceph-mon[191910]: pgmap v1668: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:00:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:00:33 compute-0 podman[443597]: 2025-10-02 20:00:33.685320146 +0000 UTC m=+0.110526309 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 02 20:00:34 compute-0 nova_compute[355794]: 2025-10-02 20:00:34.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:34 compute-0 ceph-mon[191910]: pgmap v1669: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:35 compute-0 nova_compute[355794]: 2025-10-02 20:00:35.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:36 compute-0 ceph-mon[191910]: pgmap v1670: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:38 compute-0 ceph-mon[191910]: pgmap v1671: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:39 compute-0 nova_compute[355794]: 2025-10-02 20:00:39.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:40 compute-0 ceph-mon[191910]: pgmap v1672: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:40 compute-0 nova_compute[355794]: 2025-10-02 20:00:40.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:42 compute-0 ceph-mon[191910]: pgmap v1673: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:42 compute-0 podman[443618]: 2025-10-02 20:00:42.657472752 +0000 UTC m=+0.071859501 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:00:42 compute-0 podman[443619]: 2025-10-02 20:00:42.694572508 +0000 UTC m=+0.116345174 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 20:00:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:44 compute-0 nova_compute[355794]: 2025-10-02 20:00:44.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:44 compute-0 ceph-mon[191910]: pgmap v1674: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:45 compute-0 nova_compute[355794]: 2025-10-02 20:00:45.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:46 compute-0 ceph-mon[191910]: pgmap v1675: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:47 compute-0 podman[443659]: 2025-10-02 20:00:47.700838698 +0000 UTC m=+0.117557526 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:00:47 compute-0 podman[443660]: 2025-10-02 20:00:47.702916153 +0000 UTC m=+0.119454016 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release-0.7.12=)
Oct 02 20:00:48 compute-0 ceph-mon[191910]: pgmap v1676: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:49 compute-0 nova_compute[355794]: 2025-10-02 20:00:49.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:50 compute-0 nova_compute[355794]: 2025-10-02 20:00:50.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:50 compute-0 podman[443700]: 2025-10-02 20:00:50.65782774 +0000 UTC m=+0.077519761 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:00:50 compute-0 ceph-mon[191910]: pgmap v1677: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:50 compute-0 podman[443701]: 2025-10-02 20:00:50.73533449 +0000 UTC m=+0.145966870 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 20:00:50 compute-0 podman[443702]: 2025-10-02 20:00:50.765285056 +0000 UTC m=+0.173011559 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Oct 02 20:00:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:52 compute-0 ceph-mon[191910]: pgmap v1678: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:52 compute-0 podman[443761]: 2025-10-02 20:00:52.709892511 +0000 UTC m=+0.136458358 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 20:00:52 compute-0 podman[443762]: 2025-10-02 20:00:52.727414677 +0000 UTC m=+0.150759558 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:00:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:53 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 20:00:54 compute-0 nova_compute[355794]: 2025-10-02 20:00:54.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:54 compute-0 nova_compute[355794]: 2025-10-02 20:00:54.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:54 compute-0 nova_compute[355794]: 2025-10-02 20:00:54.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:00:54 compute-0 ceph-mon[191910]: pgmap v1679: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:00:55 compute-0 nova_compute[355794]: 2025-10-02 20:00:55.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 20:00:55 compute-0 ceph-mon[191910]: pgmap v1680: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.400666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257400724, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 861, "num_deletes": 252, "total_data_size": 1184926, "memory_usage": 1206576, "flush_reason": "Manual Compaction"}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257417653, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1174070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33897, "largest_seqno": 34757, "table_properties": {"data_size": 1169590, "index_size": 2132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9717, "raw_average_key_size": 19, "raw_value_size": 1160679, "raw_average_value_size": 2363, "num_data_blocks": 95, "num_entries": 491, "num_filter_entries": 491, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435180, "oldest_key_time": 1759435180, "file_creation_time": 1759435257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 17086 microseconds, and 8572 cpu microseconds.
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.417749) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1174070 bytes OK
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.417780) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.421017) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.421068) EVENT_LOG_v1 {"time_micros": 1759435257421054, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.421097) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1180687, prev total WAL file size 1180687, number of live WAL files 2.
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.422699) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1146KB)], [77(7588KB)]
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257422776, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 8945062, "oldest_snapshot_seqno": -1}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5263 keys, 7223476 bytes, temperature: kUnknown
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257489086, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7223476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7190157, "index_size": 19065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 134030, "raw_average_key_size": 25, "raw_value_size": 7096688, "raw_average_value_size": 1348, "num_data_blocks": 781, "num_entries": 5263, "num_filter_entries": 5263, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.489575) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7223476 bytes
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.492293) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(13.8) write-amplify(6.2) OK, records in: 5782, records dropped: 519 output_compression: NoCompression
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.492324) EVENT_LOG_v1 {"time_micros": 1759435257492308, "job": 44, "event": "compaction_finished", "compaction_time_micros": 66448, "compaction_time_cpu_micros": 25063, "output_level": 6, "num_output_files": 1, "total_output_size": 7223476, "num_input_records": 5782, "num_output_records": 5263, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257493099, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435257496432, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.422458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.496713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.496719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.496723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.496726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:57 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:00:57.496729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:00:58 compute-0 ceph-mon[191910]: pgmap v1681: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:58 compute-0 nova_compute[355794]: 2025-10-02 20:00:58.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:59 compute-0 nova_compute[355794]: 2025-10-02 20:00:59.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:00:59 compute-0 nova_compute[355794]: 2025-10-02 20:00:59.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:59 compute-0 nova_compute[355794]: 2025-10-02 20:00:59.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:59 compute-0 nova_compute[355794]: 2025-10-02 20:00:59.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:00:59 compute-0 nova_compute[355794]: 2025-10-02 20:00:59.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:00:59 compute-0 podman[157186]: time="2025-10-02T20:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:00:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:00:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Oct 02 20:01:00 compute-0 nova_compute[355794]: 2025-10-02 20:01:00.210 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:01:00 compute-0 nova_compute[355794]: 2025-10-02 20:01:00.211 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:01:00 compute-0 nova_compute[355794]: 2025-10-02 20:01:00.211 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:01:00 compute-0 nova_compute[355794]: 2025-10-02 20:01:00.212 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:01:00 compute-0 ceph-mon[191910]: pgmap v1682: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:00 compute-0 nova_compute[355794]: 2025-10-02 20:01:00.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:01 compute-0 openstack_network_exporter[372736]: ERROR   20:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.417 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:01:01 compute-0 openstack_network_exporter[372736]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:01 compute-0 openstack_network_exporter[372736]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:01 compute-0 openstack_network_exporter[372736]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:01:01 compute-0 openstack_network_exporter[372736]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.439 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.439 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.440 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.441 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.468 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.469 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.470 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.470 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.471 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:01 compute-0 CROND[443825]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 20:01:01 compute-0 run-parts[443828]: (/etc/cron.hourly) starting 0anacron
Oct 02 20:01:01 compute-0 run-parts[443834]: (/etc/cron.hourly) finished 0anacron
Oct 02 20:01:01 compute-0 CROND[443824]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 20:01:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:01:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175481974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:01:01 compute-0 nova_compute[355794]: 2025-10-02 20:01:01.974 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.082 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.083 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.085 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:01:02 compute-0 ceph-mon[191910]: pgmap v1683: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4175481974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.777 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.780 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3846MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.781 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.781 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.900 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.900 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.900 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:01:02 compute-0 nova_compute[355794]: 2025-10-02 20:01:02.968 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:03 compute-0 sudo[443857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:03 compute-0 sudo[443857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:03 compute-0 sudo[443857]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:03 compute-0 sudo[443882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:01:03 compute-0 sudo[443882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:03 compute-0 sudo[443882]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:01:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/630561264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:01:03 compute-0 nova_compute[355794]: 2025-10-02 20:01:03.477 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:03 compute-0 nova_compute[355794]: 2025-10-02 20:01:03.493 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:01:03 compute-0 nova_compute[355794]: 2025-10-02 20:01:03.513 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:01:03 compute-0 nova_compute[355794]: 2025-10-02 20:01:03.517 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:01:03 compute-0 nova_compute[355794]: 2025-10-02 20:01:03.517 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:03 compute-0 sudo[443907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:03 compute-0 sudo[443907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:03 compute-0 sudo[443907]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:01:03
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:03 compute-0 sudo[443934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:01:03 compute-0 sudo[443934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:03 compute-0 podman[443959]: 2025-10-02 20:01:03.872784491 +0000 UTC m=+0.131267500 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:01:04 compute-0 nova_compute[355794]: 2025-10-02 20:01:04.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.300 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.301 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.317 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:01:04.321571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.390 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.392 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.392 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
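One full cycle for disk.device.read.requests just completed: discovery at .303, a coordination check (no group name, so no hash ring), a heartbeat, then three per-device samples (840, 173 and 109 requests) for instance d4e04444. A hedged sketch of where such per-device counters originate, querying libvirt directly with the Python binding (ceilometer's libvirt inspector wraps the same counters); the device names are assumptions:

    import libvirt  # libvirt-python; read access to the hypervisor assumed

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # OS-EXT-SRV-ATTR:instance_name above
    for dev in ("vda", "vdb", "vdc"):             # three devices; names assumed
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "disk.device.read.requests:", rd_req)
    conn.close()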
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.393 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:01:04.394302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.418 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.418 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.419 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
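The disk.device.usage samples line up with the discovery payload at .317: the m1.small flavor carries a 1 GiB root disk and a 1 GiB ephemeral disk, and 1 GiB is exactly the 1073741824 bytes reported twice above (the third, 485376-byte device is a separate, much smaller volume). The check is one multiplication:

    GIB = 1024 ** 3  # bytes per GiB
    flavor = {"disk": 1, "ephemeral": 1}  # from the discovery payload above
    assert flavor["disk"] * GIB == 1073741824
    assert flavor["ephemeral"] * GIB == 1073741824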
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.420 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.421 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.422 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:01:04.421038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.422 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 sudo[443934]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.425 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.426 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.426 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:01:04.425364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.427 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:01:04.427294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceph-mon[191910]: pgmap v1684: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:04 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/630561264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
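The mon line records the other side of the polling traffic: the client.openstack identity (from 192.168.122.100) dispatching a JSON df command to the monitor. A minimal sketch of issuing the same command with the rados Python binding; the conffile path is an assumption:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",  # path assumed
                          name="client.openstack")
    cluster.connect()
    # Same payload the dispatch line shows: {"prefix": "df", "format": "json"}
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"])
    cluster.shutdown()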
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.457 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.459 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:01:04.459243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:01:04.460744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.464 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:01:04.465561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:01:04.466831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:01:04.467663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:01:04.468585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:01:04.469531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:01:04.470561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:01:04.471571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.472 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:01:04.472877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.473 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.474 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.474 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.474 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
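[editor's note] memory.usage is the guest's resident memory; ceilometer documents the meter's unit as MB, so the 48.8828125 above is megabytes (an assumption worth verifying against your metric store). A quick conversion:

    # Assumption: memory.usage is in MiB (ceilometer documents the unit as MB).
    usage_mb = 48.8828125          # sample value from the log above
    print(f"{usage_mb} MiB == {usage_mb * 1024:.0f} KiB")  # -> 50056 KiB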
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:01:04.473815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:01:04.475357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:01:04.476356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
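[editor's note] disk.device.capacity (like disk.device.allocation above) emits one sample per attached disk: this instance has two 1 GiB volumes plus a small 485376-byte third device (plausibly a config drive). Summing the per-device samples gives the instance total:

    # Per-device samples copied from the log, summed for an instance total.
    capacities = [1073741824, 1073741824, 485376]   # bytes
    total = sum(capacities)
    print(f"total: {total} bytes ({total / 2**30:.2f} GiB)")  # -> ~2.00 GiB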
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:01:04.477344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:01:04.478403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.480 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:01:04.480727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.481 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.482 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 48700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:01:04.481802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:01:04.482991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
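[editor's note] The cpu meter is cumulative guest CPU time; ceilometer documents its unit as ns, so the 48700000000 sampled above is roughly 48.7 seconds of CPU consumed since the instance booted:

    cpu_ns = 48_700_000_000   # cumulative cpu sample from the log
    print(f"{cpu_ns / 1e9:.1f} s of guest CPU time")   # -> 48.7 s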
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.484 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.484 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.484 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:01:04.483874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.489 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.489 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:01:04.489 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
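[editor's note] The burst of "Finished processing pollster [...]" lines marks the end of the whole polling task: every meter reports completion, interleaved across the agent's worker threads (note the alternating thread ids 12 and 14 earlier in the cycle). A quick way to confirm that every meter completed in a saved excerpt (the file name is hypothetical):

    # Count completion markers in a saved journal excerpt (file name assumed).
    import re

    done = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    with open("compute-0-journal.log") as fh:
        meters = sorted(m.group(1) for line in fh if (m := done.search(line)))
    print(len(meters), "pollsters finished; first few:", meters[:3])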
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 60d89567-31b5-4d99-a9a1-0bc907431994 does not exist
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 78e50f79-c6e5-4cc1-b029-ec11aaf873f5 does not exist
Oct 02 20:01:04 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 28b2ec3c-d9a3-4532-afd6-a48d2a146c4c does not exist
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:01:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:01:04 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
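[editor's note] Each mgr-issued command appears twice here: the monitor's handle_command line and an audit-channel dispatch naming the caller (mgr.14130, entity mgr.compute-0.uktbkz). The same commands can be replayed from any admin node with the ceph CLI; a sketch, assuming a reachable cluster and an admin keyring:

    # Replay the audited monitor command by hand; requires admin credentials.
    import subprocess

    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(conf)   # minimal ceph.conf with fsid and mon_host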
Oct 02 20:01:04 compute-0 sudo[444011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:04 compute-0 sudo[444011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:04 compute-0 sudo[444011]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:04 compute-0 nova_compute[355794]: 2025-10-02 20:01:04.653 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:04 compute-0 nova_compute[355794]: 2025-10-02 20:01:04.654 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
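[editor's note] These nova entries come from oslo.service's periodic task runner, which nova's ComputeManager uses for housekeeping such as _poll_unconfirmed_resizes. A self-contained sketch of the same mechanism (the Manager class and task body are made up; the decorator and runner are oslo.service API):

    # Sketch of oslo.service periodic tasks; the task itself is hypothetical.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_volume_usage(self, context):
            print("running periodic task _poll_volume_usage")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)   # logs/runs due tasks once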
Oct 02 20:01:04 compute-0 sudo[444036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:01:04 compute-0 sudo[444036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:04 compute-0 sudo[444036]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:04 compute-0 sudo[444061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:04 compute-0 sudo[444061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:04 compute-0 sudo[444061]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:05 compute-0 sudo[444086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:01:05 compute-0 sudo[444086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
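[editor's note] The sudo line above is cephadm (the hash-suffixed copy under /var/lib/ceph) wrapping ceph-volume in a container to prepare OSDs on three pre-created logical volumes; --no-auto keeps the batch from regrouping devices, and --yes --no-systemd suppress prompting and unit activation inside the container. Reproducing the call by hand with the packaged cephadm binary (a sketch; the fsid, image digest, and LV paths are copied from the entry above):

    # Hand-run equivalent of the logged cephadm ceph-volume invocation.
    import subprocess

    subprocess.run(
        ["sudo", "cephadm", "--image",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "ceph-volume", "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        check=True,
    )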
Oct 02 20:01:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:01:05 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:01:05 compute-0 nova_compute[355794]: 2025-10-02 20:01:05.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:05 compute-0 nova_compute[355794]: 2025-10-02 20:01:05.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.713820574 +0000 UTC m=+0.075466477 container create 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.685216243 +0000 UTC m=+0.046862176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:05 compute-0 systemd[1]: Started libpod-conmon-81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1.scope.
Oct 02 20:01:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.861787477 +0000 UTC m=+0.223433450 container init 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.88298485 +0000 UTC m=+0.244630773 container start 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.892168294 +0000 UTC m=+0.253814387 container attach 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 20:01:05 compute-0 wonderful_montalcini[444168]: 167 167
Oct 02 20:01:05 compute-0 systemd[1]: libpod-81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1.scope: Deactivated successfully.
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.898696918 +0000 UTC m=+0.260342851 container died 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:01:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-818fe5969590bd715660a8d967f04808b7b6b4144bdb8466b39c58c024ad1fc8-merged.mount: Deactivated successfully.
Oct 02 20:01:05 compute-0 podman[444152]: 2025-10-02 20:01:05.987148959 +0000 UTC m=+0.348794862 container remove 81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 20:01:06 compute-0 systemd[1]: libpod-conmon-81e539cee6db77c4d2485e0e98db7c4862b9dcf3ae09058a953f8867270578f1.scope: Deactivated successfully.
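[editor's note] The create → init → start → attach → died → remove sequence, all within about 300 ms, is the event trail of a one-shot --rm container. The "167 167" it printed is consistent with cephadm probing the ceph user's uid/gid inside the image (167 is the ceph uid/gid in upstream images; the exact probe command is an assumption). A comparable one-shot run:

    # One-shot container run that produces the same podman event sequence.
    # The stat probe mirrors what cephadm appears to do; treat it as a guess.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())   # expected: "167 167"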
Oct 02 20:01:06 compute-0 podman[444190]: 2025-10-02 20:01:06.299999334 +0000 UTC m=+0.099119026 container create 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:01:06 compute-0 podman[444190]: 2025-10-02 20:01:06.260303699 +0000 UTC m=+0.059423481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:06 compute-0 systemd[1]: Started libpod-conmon-4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137.scope.
Oct 02 20:01:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:06 compute-0 podman[444190]: 2025-10-02 20:01:06.491073242 +0000 UTC m=+0.290193014 container init 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:01:06 compute-0 podman[444190]: 2025-10-02 20:01:06.513617761 +0000 UTC m=+0.312737483 container start 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:01:06 compute-0 ceph-mon[191910]: pgmap v1685: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:06 compute-0 podman[444190]: 2025-10-02 20:01:06.520656839 +0000 UTC m=+0.319776571 container attach 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 20:01:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:07 compute-0 heuristic_kilby[444206]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:01:07 compute-0 heuristic_kilby[444206]: --> relative data size: 1.0
Oct 02 20:01:07 compute-0 heuristic_kilby[444206]: --> All data devices are unavailable
Oct 02 20:01:07 compute-0 systemd[1]: libpod-4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137.scope: Deactivated successfully.
Oct 02 20:01:07 compute-0 podman[444190]: 2025-10-02 20:01:07.962194492 +0000 UTC m=+1.761314214 container died 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 02 20:01:07 compute-0 systemd[1]: libpod-4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137.scope: Consumed 1.370s CPU time.
Oct 02 20:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f70352fbd5a6d7daa9d1414420dad744a3fd15e8029ca3880b3482e3a898a20-merged.mount: Deactivated successfully.
Oct 02 20:01:08 compute-0 podman[444190]: 2025-10-02 20:01:08.106799915 +0000 UTC m=+1.905919637 container remove 4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kilby, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:01:08 compute-0 systemd[1]: libpod-conmon-4e34feb33ac2eb32461a7bd08a6317cb8edb552589afa66c5c7542f6a1185137.scope: Deactivated successfully.
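[editor's note] The containerized ceph-volume run above ends with "--> All data devices are unavailable": all three LVs were rejected (typically because they already carry an OSD or fail a filter), so no new OSDs were created and the container exits. ceph-volume's inventory explains per-device rejection; a sketch using the containerized tool (field names follow ceph-volume's JSON inventory output):

    # Inspect why ceph-volume considers each device unavailable.
    import json, subprocess

    inv = subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(inv):
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons"))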
Oct 02 20:01:08 compute-0 sudo[444086]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:08 compute-0 sudo[444245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:08 compute-0 sudo[444245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:08 compute-0 sudo[444245]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:08 compute-0 sudo[444270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:01:08 compute-0 sudo[444270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:08 compute-0 sudo[444270]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:08 compute-0 ceph-mon[191910]: pgmap v1686: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:08 compute-0 sudo[444295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:08 compute-0 sudo[444295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:08 compute-0 sudo[444295]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:08 compute-0 sudo[444320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:01:08 compute-0 sudo[444320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
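[editor's note] After the failed batch, cephadm falls back to "ceph-volume lvm list --format json" (the command in the sudo line above) to learn which LVs already belong to OSDs. The JSON maps OSD ids to lists of device records; parsing it (a sketch, run as root; field names follow ceph-volume's documented JSON output):

    # Parse ceph-volume's LVM inventory of existing OSDs.
    import json, subprocess

    listing = json.loads(subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "--fsid",
         "6019f664-a1c2-5955-8391-692cb79a59f9", "--",
         "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for osd_id, devices in listing.items():
        print("osd." + osd_id, [d.get("lv_path") for d in devices])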
Oct 02 20:01:09 compute-0 nova_compute[355794]: 2025-10-02 20:01:09.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.465131168 +0000 UTC m=+0.087875977 container create cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.427794036 +0000 UTC m=+0.050538905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:09 compute-0 systemd[1]: Started libpod-conmon-cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1.scope.
Oct 02 20:01:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.629534808 +0000 UTC m=+0.252279687 container init cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.650276149 +0000 UTC m=+0.273020968 container start cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.658117147 +0000 UTC m=+0.280861976 container attach cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 20:01:09 compute-0 stupefied_hugle[444402]: 167 167
Oct 02 20:01:09 compute-0 systemd[1]: libpod-cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1.scope: Deactivated successfully.
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.664630661 +0000 UTC m=+0.287375480 container died cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-084a2c25ac8f7cca8ea3094f3be3e3c91dc7b681e51a11d2298f64d712343b71-merged.mount: Deactivated successfully.
Oct 02 20:01:09 compute-0 podman[444386]: 2025-10-02 20:01:09.753586845 +0000 UTC m=+0.376331664 container remove cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 20:01:09 compute-0 systemd[1]: libpod-conmon-cfd486f0a6e43f70d93a1cfacadcb6f371b05f09a9aeec9e7a1d29200ef189a1.scope: Deactivated successfully.
Oct 02 20:01:10 compute-0 podman[444424]: 2025-10-02 20:01:10.043420088 +0000 UTC m=+0.088746679 container create 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:01:10 compute-0 podman[444424]: 2025-10-02 20:01:10.008981453 +0000 UTC m=+0.054308054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:10 compute-0 systemd[1]: Started libpod-conmon-32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c.scope.
Oct 02 20:01:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84bb9ef41bb20b7177e79e0fbcd9bf89907e8c9e26a4e55d48615c2cd83635c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84bb9ef41bb20b7177e79e0fbcd9bf89907e8c9e26a4e55d48615c2cd83635c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84bb9ef41bb20b7177e79e0fbcd9bf89907e8c9e26a4e55d48615c2cd83635c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84bb9ef41bb20b7177e79e0fbcd9bf89907e8c9e26a4e55d48615c2cd83635c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:10 compute-0 podman[444424]: 2025-10-02 20:01:10.241116543 +0000 UTC m=+0.286443194 container init 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:01:10 compute-0 podman[444424]: 2025-10-02 20:01:10.259228284 +0000 UTC m=+0.304554885 container start 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 02 20:01:10 compute-0 podman[444424]: 2025-10-02 20:01:10.265664045 +0000 UTC m=+0.310990656 container attach 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 20:01:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:10 compute-0 ceph-mon[191910]: pgmap v1687: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:10 compute-0 nova_compute[355794]: 2025-10-02 20:01:10.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]: {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     "0": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "devices": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "/dev/loop3"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             ],
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_name": "ceph_lv0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_size": "21470642176",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "name": "ceph_lv0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "tags": {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_name": "ceph",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.crush_device_class": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.encrypted": "0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_id": "0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.vdo": "0"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             },
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "vg_name": "ceph_vg0"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         }
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     ],
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     "1": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "devices": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "/dev/loop4"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             ],
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_name": "ceph_lv1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_size": "21470642176",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "name": "ceph_lv1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "tags": {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_name": "ceph",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.crush_device_class": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.encrypted": "0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_id": "1",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.vdo": "0"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             },
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "vg_name": "ceph_vg1"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         }
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     ],
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     "2": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "devices": [
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "/dev/loop5"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             ],
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_name": "ceph_lv2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_size": "21470642176",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "name": "ceph_lv2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "tags": {
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.cluster_name": "ceph",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.crush_device_class": "",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.encrypted": "0",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osd_id": "2",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:                 "ceph.vdo": "0"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             },
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "type": "block",
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:             "vg_name": "ceph_vg2"
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:         }
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]:     ]
Oct 02 20:01:11 compute-0 busy_kowalevski[444440]: }
Oct 02 20:01:11 compute-0 systemd[1]: libpod-32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c.scope: Deactivated successfully.
Oct 02 20:01:11 compute-0 podman[444424]: 2025-10-02 20:01:11.175606919 +0000 UTC m=+1.220933510 container died 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-84bb9ef41bb20b7177e79e0fbcd9bf89907e8c9e26a4e55d48615c2cd83635c9-merged.mount: Deactivated successfully.
Oct 02 20:01:11 compute-0 podman[444424]: 2025-10-02 20:01:11.288005897 +0000 UTC m=+1.333332488 container remove 32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kowalevski, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:01:11 compute-0 systemd[1]: libpod-conmon-32618458013df32001f77d7cf6666cf055039dae44c5bde4785188b274934b5c.scope: Deactivated successfully.
Oct 02 20:01:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:11 compute-0 sudo[444320]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:11 compute-0 sudo[444460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:11 compute-0 sudo[444460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:11 compute-0 sudo[444460]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:11 compute-0 sudo[444485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:01:11 compute-0 sudo[444485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:11 compute-0 sudo[444485]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:11 compute-0 sudo[444510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:11 compute-0 sudo[444510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:11 compute-0 sudo[444510]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:11 compute-0 sudo[444535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:01:11 compute-0 sudo[444535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:12 compute-0 ceph-mon[191910]: pgmap v1688: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.607037475 +0000 UTC m=+0.086270684 container create fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.577676875 +0000 UTC m=+0.056910064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:12 compute-0 systemd[1]: Started libpod-conmon-fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d.scope.
Oct 02 20:01:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.765266231 +0000 UTC m=+0.244499460 container init fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.78705839 +0000 UTC m=+0.266291609 container start fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.794671732 +0000 UTC m=+0.273905011 container attach fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:01:12 compute-0 blissful_wright[444616]: 167 167
Oct 02 20:01:12 compute-0 systemd[1]: libpod-fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d.scope: Deactivated successfully.
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.803774834 +0000 UTC m=+0.283008043 container died fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 20:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-63ebf73a04709772c4ab75b76cda511c99ffb6810eaa1a1b8744141f683d83c2-merged.mount: Deactivated successfully.
Oct 02 20:01:12 compute-0 podman[444599]: 2025-10-02 20:01:12.885013553 +0000 UTC m=+0.364246742 container remove fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 20:01:12 compute-0 systemd[1]: libpod-conmon-fff082504c29f3fc1f8c02df084c0a5c3f221f37009cfa450ea315b9bf27360d.scope: Deactivated successfully.
Oct 02 20:01:12 compute-0 podman[444615]: 2025-10-02 20:01:12.906151555 +0000 UTC m=+0.177787686 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:01:12 compute-0 podman[444624]: 2025-10-02 20:01:12.916447319 +0000 UTC m=+0.142821647 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:01:12 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:01:13 compute-0 podman[444680]: 2025-10-02 20:01:13.208628385 +0000 UTC m=+0.104678164 container create 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 20:01:13 compute-0 podman[444680]: 2025-10-02 20:01:13.153288614 +0000 UTC m=+0.049338473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:01:13 compute-0 systemd[1]: Started libpod-conmon-44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f.scope.
Oct 02 20:01:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4ac4609916819bd0ab895262fce9a2641bb25fa46f66a85de56183da80c3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4ac4609916819bd0ab895262fce9a2641bb25fa46f66a85de56183da80c3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4ac4609916819bd0ab895262fce9a2641bb25fa46f66a85de56183da80c3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb4ac4609916819bd0ab895262fce9a2641bb25fa46f66a85de56183da80c3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:01:13 compute-0 podman[444680]: 2025-10-02 20:01:13.413660194 +0000 UTC m=+0.309710033 container init 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 20:01:13 compute-0 podman[444680]: 2025-10-02 20:01:13.447208316 +0000 UTC m=+0.343258095 container start 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 20:01:13 compute-0 podman[444680]: 2025-10-02 20:01:13.457224192 +0000 UTC m=+0.353274031 container attach 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:01:14 compute-0 nova_compute[355794]: 2025-10-02 20:01:14.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:14 compute-0 ceph-mon[191910]: pgmap v1689: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:14 compute-0 frosty_thompson[444696]: {
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_id": 1,
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "type": "bluestore"
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     },
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_id": 2,
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "type": "bluestore"
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     },
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_id": 0,
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:         "type": "bluestore"
Oct 02 20:01:14 compute-0 frosty_thompson[444696]:     }
Oct 02 20:01:14 compute-0 frosty_thompson[444696]: }
Oct 02 20:01:14 compute-0 systemd[1]: libpod-44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f.scope: Deactivated successfully.
Oct 02 20:01:14 compute-0 podman[444680]: 2025-10-02 20:01:14.749533039 +0000 UTC m=+1.645582808 container died 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:01:14 compute-0 systemd[1]: libpod-44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f.scope: Consumed 1.296s CPU time.
Oct 02 20:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb4ac4609916819bd0ab895262fce9a2641bb25fa46f66a85de56183da80c3c-merged.mount: Deactivated successfully.
Oct 02 20:01:14 compute-0 podman[444680]: 2025-10-02 20:01:14.84099161 +0000 UTC m=+1.737041359 container remove 44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:01:14 compute-0 systemd[1]: libpod-conmon-44822a869db86689b7a28d4e4117970f7f244bc9ea170ac54656888e2381940f.scope: Deactivated successfully.
Oct 02 20:01:14 compute-0 sudo[444535]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:01:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:01:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e9a90d6b-7480-45f9-8d32-0e3ea331157b does not exist
Oct 02 20:01:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e17698b0-df04-4605-b094-20d63360e841 does not exist
Oct 02 20:01:15 compute-0 sudo[444740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:01:15 compute-0 sudo[444740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:15 compute-0 sudo[444740]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:15 compute-0 sudo[444765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:01:15 compute-0 sudo[444765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:01:15 compute-0 sudo[444765]: pam_unix(sudo:session): session closed for user root
Oct 02 20:01:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:15 compute-0 nova_compute[355794]: 2025-10-02 20:01:15.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:01:15 compute-0 ceph-mon[191910]: pgmap v1690: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:18 compute-0 ceph-mon[191910]: pgmap v1691: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:18 compute-0 podman[444790]: 2025-10-02 20:01:18.777186759 +0000 UTC m=+0.188758728 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 20:01:18 compute-0 podman[444791]: 2025-10-02 20:01:18.784489613 +0000 UTC m=+0.192440996 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=)
Oct 02 20:01:19 compute-0 nova_compute[355794]: 2025-10-02 20:01:19.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:01:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762968891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:01:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:01:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762968891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:01:20 compute-0 ceph-mon[191910]: pgmap v1692: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/762968891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:01:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/762968891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:01:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:20 compute-0 nova_compute[355794]: 2025-10-02 20:01:20.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:21 compute-0 podman[444833]: 2025-10-02 20:01:21.68636751 +0000 UTC m=+0.111109254 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 20:01:21 compute-0 podman[444832]: 2025-10-02 20:01:21.712464284 +0000 UTC m=+0.129329469 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:01:21 compute-0 podman[444834]: 2025-10-02 20:01:21.769835779 +0000 UTC m=+0.175763263 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:01:22 compute-0 ceph-mon[191910]: pgmap v1693: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:23 compute-0 podman[444895]: 2025-10-02 20:01:23.73686415 +0000 UTC m=+0.150166912 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Oct 02 20:01:23 compute-0 podman[444896]: 2025-10-02 20:01:23.750366709 +0000 UTC m=+0.155387101 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:01:24 compute-0 nova_compute[355794]: 2025-10-02 20:01:24.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:24 compute-0 ceph-mon[191910]: pgmap v1694: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:25 compute-0 nova_compute[355794]: 2025-10-02 20:01:25.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:26 compute-0 ceph-mon[191910]: pgmap v1695: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:28 compute-0 ceph-mon[191910]: pgmap v1696: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:29 compute-0 nova_compute[355794]: 2025-10-02 20:01:29.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:29 compute-0 podman[157186]: time="2025-10-02T20:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:01:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:01:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9063 "" "Go-http-client/1.1"
Oct 02 20:01:30 compute-0 sshd-session[441968]: Received disconnect from 38.102.83.68 port 54128:11: disconnected by user
Oct 02 20:01:30 compute-0 sshd-session[441968]: Disconnected from user zuul 38.102.83.68 port 54128
Oct 02 20:01:30 compute-0 sshd-session[441959]: pam_unix(sshd:session): session closed for user zuul
Oct 02 20:01:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:30 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Oct 02 20:01:30 compute-0 systemd[1]: session-64.scope: Consumed 5.924s CPU time.
Oct 02 20:01:30 compute-0 systemd-logind[793]: Session 64 logged out. Waiting for processes to exit.
Oct 02 20:01:30 compute-0 systemd-logind[793]: Removed session 64.
Oct 02 20:01:30 compute-0 ceph-mon[191910]: pgmap v1697: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:30 compute-0 nova_compute[355794]: 2025-10-02 20:01:30.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:31 compute-0 openstack_network_exporter[372736]: ERROR   20:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:01:31 compute-0 openstack_network_exporter[372736]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:31 compute-0 openstack_network_exporter[372736]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:31 compute-0 openstack_network_exporter[372736]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:01:31 compute-0 openstack_network_exporter[372736]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:01:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:01:32.315 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:01:32.316 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:01:32.317 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:32 compute-0 ceph-mon[191910]: pgmap v1698: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:01:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:01:34 compute-0 nova_compute[355794]: 2025-10-02 20:01:34.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:34 compute-0 ceph-mon[191910]: pgmap v1699: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:34 compute-0 podman[444940]: 2025-10-02 20:01:34.721445033 +0000 UTC m=+0.134115006 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 20:01:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:35 compute-0 nova_compute[355794]: 2025-10-02 20:01:35.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:36 compute-0 ceph-mon[191910]: pgmap v1700: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:38 compute-0 ceph-mon[191910]: pgmap v1701: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:39 compute-0 nova_compute[355794]: 2025-10-02 20:01:39.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:40 compute-0 ceph-mon[191910]: pgmap v1702: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:40 compute-0 nova_compute[355794]: 2025-10-02 20:01:40.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:42 compute-0 ceph-mon[191910]: pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:43 compute-0 podman[444959]: 2025-10-02 20:01:43.686744476 +0000 UTC m=+0.121196092 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:01:43 compute-0 podman[444960]: 2025-10-02 20:01:43.739024525 +0000 UTC m=+0.165439298 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 02 20:01:44 compute-0 nova_compute[355794]: 2025-10-02 20:01:44.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:44 compute-0 ceph-mon[191910]: pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:45 compute-0 nova_compute[355794]: 2025-10-02 20:01:45.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:46 compute-0 ceph-mon[191910]: pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:48 compute-0 ceph-mon[191910]: pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:49 compute-0 nova_compute[355794]: 2025-10-02 20:01:49.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:49 compute-0 podman[445000]: 2025-10-02 20:01:49.687768735 +0000 UTC m=+0.113480558 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 20:01:49 compute-0 podman[445001]: 2025-10-02 20:01:49.712225815 +0000 UTC m=+0.120581606 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler)
Oct 02 20:01:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:50 compute-0 ceph-mon[191910]: pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:50 compute-0 nova_compute[355794]: 2025-10-02 20:01:50.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:52 compute-0 ceph-mon[191910]: pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:52 compute-0 podman[445036]: 2025-10-02 20:01:52.705928632 +0000 UTC m=+0.123581606 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 20:01:52 compute-0 podman[445037]: 2025-10-02 20:01:52.71753806 +0000 UTC m=+0.128774843 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 20:01:52 compute-0 podman[445038]: 2025-10-02 20:01:52.753184918 +0000 UTC m=+0.168074989 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 20:01:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:54 compute-0 nova_compute[355794]: 2025-10-02 20:01:54.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:54 compute-0 ceph-mon[191910]: pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:54 compute-0 podman[445099]: 2025-10-02 20:01:54.713675164 +0000 UTC m=+0.122077276 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:01:54 compute-0 podman[445098]: 2025-10-02 20:01:54.726923536 +0000 UTC m=+0.146317660 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 20:01:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:01:55 compute-0 nova_compute[355794]: 2025-10-02 20:01:55.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:56 compute-0 nova_compute[355794]: 2025-10-02 20:01:56.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:56 compute-0 nova_compute[355794]: 2025-10-02 20:01:56.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:01:56 compute-0 ceph-mon[191910]: pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:01:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Oct 02 20:01:58 compute-0 ceph-mon[191910]: pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Oct 02 20:01:59 compute-0 nova_compute[355794]: 2025-10-02 20:01:59.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 20:01:59 compute-0 nova_compute[355794]: 2025-10-02 20:01:59.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:59 compute-0 nova_compute[355794]: 2025-10-02 20:01:59.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:01:59 compute-0 nova_compute[355794]: 2025-10-02 20:01:59.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:01:59 compute-0 podman[157186]: time="2025-10-02T20:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:01:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:01:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9071 "" "Go-http-client/1.1"
Oct 02 20:02:00 compute-0 nova_compute[355794]: 2025-10-02 20:02:00.218 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:02:00 compute-0 nova_compute[355794]: 2025-10-02 20:02:00.219 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:02:00 compute-0 nova_compute[355794]: 2025-10-02 20:02:00.220 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:02:00 compute-0 nova_compute[355794]: 2025-10-02 20:02:00.221 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:02:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:00 compute-0 nova_compute[355794]: 2025-10-02 20:02:00.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:00 compute-0 ceph-mon[191910]: pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 20:02:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Oct 02 20:02:01 compute-0 openstack_network_exporter[372736]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:01 compute-0 openstack_network_exporter[372736]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:01 compute-0 openstack_network_exporter[372736]: ERROR   20:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:02:01 compute-0 openstack_network_exporter[372736]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:02:01 compute-0 openstack_network_exporter[372736]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.478 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.493 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.494 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.496 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.497 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:01 compute-0 nova_compute[355794]: 2025-10-02 20:02:01.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.594 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.634 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.636 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.636 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.636 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:02:02 compute-0 nova_compute[355794]: 2025-10-02 20:02:02.637 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:02 compute-0 ceph-mon[191910]: pgmap v1713: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Oct 02 20:02:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:02:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181988990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.145 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.276 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.277 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.277 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:02:03
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'vms', 'default.rgw.log', 'volumes']
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:03 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3181988990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.833 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.835 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3847MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.835 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:03 compute-0 nova_compute[355794]: 2025-10-02 20:02:03.836 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:04 compute-0 nova_compute[355794]: 2025-10-02 20:02:04.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:02:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:02:04 compute-0 nova_compute[355794]: 2025-10-02 20:02:04.288 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:02:04 compute-0 nova_compute[355794]: 2025-10-02 20:02:04.289 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:02:04 compute-0 nova_compute[355794]: 2025-10-02 20:02:04.289 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:02:04 compute-0 nova_compute[355794]: 2025-10-02 20:02:04.550 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:04 compute-0 ceph-mon[191910]: pgmap v1714: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:02:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/28273466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.103 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.119 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.149 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.152 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.153 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.154 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.155 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.171 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:02:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:05 compute-0 nova_compute[355794]: 2025-10-02 20:02:05.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:05 compute-0 podman[445185]: 2025-10-02 20:02:05.745908624 +0000 UTC m=+0.163367913 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:02:05 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/28273466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:02:05 compute-0 ceph-mon[191910]: pgmap v1715: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:07 compute-0 nova_compute[355794]: 2025-10-02 20:02:07.154 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:07 compute-0 nova_compute[355794]: 2025-10-02 20:02:07.154 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:07 compute-0 nova_compute[355794]: 2025-10-02 20:02:07.155 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:08 compute-0 ceph-mon[191910]: pgmap v1716: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 20:02:09 compute-0 nova_compute[355794]: 2025-10-02 20:02:09.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Oct 02 20:02:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:10 compute-0 ceph-mon[191910]: pgmap v1717: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Oct 02 20:02:10 compute-0 nova_compute[355794]: 2025-10-02 20:02:10.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:10 compute-0 nova_compute[355794]: 2025-10-02 20:02:10.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Oct 02 20:02:12 compute-0 ceph-mon[191910]: pgmap v1718: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:02:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 20:02:14 compute-0 nova_compute[355794]: 2025-10-02 20:02:14.018 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:14 compute-0 nova_compute[355794]: 2025-10-02 20:02:14.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:14 compute-0 ceph-mon[191910]: pgmap v1719: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 20:02:14 compute-0 nova_compute[355794]: 2025-10-02 20:02:14.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:14 compute-0 nova_compute[355794]: 2025-10-02 20:02:14.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:02:14 compute-0 podman[445205]: 2025-10-02 20:02:14.720819013 +0000 UTC m=+0.132711388 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:02:14 compute-0 podman[445206]: 2025-10-02 20:02:14.772896197 +0000 UTC m=+0.176639356 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 20:02:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:15 compute-0 sudo[445247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:15 compute-0 sudo[445247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:15 compute-0 sudo[445247]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:15 compute-0 sudo[445272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:02:15 compute-0 sudo[445272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:15 compute-0 sudo[445272]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:15 compute-0 nova_compute[355794]: 2025-10-02 20:02:15.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:15 compute-0 sudo[445297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:15 compute-0 sudo[445297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:15 compute-0 sudo[445297]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:15 compute-0 sudo[445322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:02:15 compute-0 sudo[445322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:16 compute-0 ceph-mon[191910]: pgmap v1720: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:16 compute-0 sudo[445322]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3690702f-462e-4256-8309-33bab6bf2d0b does not exist
Oct 02 20:02:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 09249c4f-2bc5-4e6d-8e9e-957aabc23b60 does not exist
Oct 02 20:02:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1e7dab71-0aec-4ea1-9bf0-fb9b79a38f09 does not exist
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:02:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:02:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:02:16 compute-0 sudo[445377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:16 compute-0 sudo[445377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:16 compute-0 sudo[445377]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:16 compute-0 sudo[445402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:02:16 compute-0 sudo[445402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:16 compute-0 sudo[445402]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:17 compute-0 sudo[445427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:17 compute-0 sudo[445427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:17 compute-0 sudo[445427]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:17 compute-0 sudo[445452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:02:17 compute-0 sudo[445452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:02:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:02:17 compute-0 podman[445518]: 2025-10-02 20:02:17.83300412 +0000 UTC m=+0.067709581 container create 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:02:17 compute-0 podman[445518]: 2025-10-02 20:02:17.809237338 +0000 UTC m=+0.043942919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:17 compute-0 systemd[1]: Started libpod-conmon-8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc.scope.
Oct 02 20:02:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:17 compute-0 podman[445518]: 2025-10-02 20:02:17.983759976 +0000 UTC m=+0.218465527 container init 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:02:18 compute-0 podman[445518]: 2025-10-02 20:02:18.00196395 +0000 UTC m=+0.236669441 container start 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:02:18 compute-0 podman[445518]: 2025-10-02 20:02:18.012660765 +0000 UTC m=+0.247366256 container attach 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 20:02:18 compute-0 quizzical_stonebraker[445533]: 167 167
Oct 02 20:02:18 compute-0 systemd[1]: libpod-8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc.scope: Deactivated successfully.
Oct 02 20:02:18 compute-0 podman[445518]: 2025-10-02 20:02:18.018280754 +0000 UTC m=+0.252986255 container died 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-78ce09036d4aec29bb6ba37f94222fc69c6265cf914a31b46d92152d40fb15e4-merged.mount: Deactivated successfully.
Oct 02 20:02:18 compute-0 podman[445518]: 2025-10-02 20:02:18.102465472 +0000 UTC m=+0.337170973 container remove 8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 20:02:18 compute-0 systemd[1]: libpod-conmon-8baea39d8461af02c514b35403a544bb2192930688e2428d4c1a9e737efd75fc.scope: Deactivated successfully.
Oct 02 20:02:18 compute-0 podman[445556]: 2025-10-02 20:02:18.390152498 +0000 UTC m=+0.101229882 container create 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:02:18 compute-0 podman[445556]: 2025-10-02 20:02:18.346552279 +0000 UTC m=+0.057629743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:18 compute-0 systemd[1]: Started libpod-conmon-0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33.scope.
Oct 02 20:02:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:18 compute-0 podman[445556]: 2025-10-02 20:02:18.568846647 +0000 UTC m=+0.279924111 container init 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:02:18 compute-0 podman[445556]: 2025-10-02 20:02:18.5953076 +0000 UTC m=+0.306385004 container start 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 20:02:18 compute-0 podman[445556]: 2025-10-02 20:02:18.602178093 +0000 UTC m=+0.313255567 container attach 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:02:18 compute-0 ceph-mon[191910]: pgmap v1721: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:19 compute-0 nova_compute[355794]: 2025-10-02 20:02:19.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:19 compute-0 agitated_curran[445571]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:02:19 compute-0 agitated_curran[445571]: --> relative data size: 1.0
Oct 02 20:02:19 compute-0 agitated_curran[445571]: --> All data devices are unavailable
Oct 02 20:02:20 compute-0 systemd[1]: libpod-0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33.scope: Deactivated successfully.
Oct 02 20:02:20 compute-0 systemd[1]: libpod-0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33.scope: Consumed 1.365s CPU time.
Oct 02 20:02:20 compute-0 podman[445556]: 2025-10-02 20:02:20.024044694 +0000 UTC m=+1.735122118 container died 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e0cd5597af87d0d706d44eb84f8fdbef8a59e1a00fe39229d349ae4e34b093-merged.mount: Deactivated successfully.
Oct 02 20:02:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:02:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678840776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:02:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:02:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678840776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:02:20 compute-0 podman[445556]: 2025-10-02 20:02:20.14731018 +0000 UTC m=+1.858387564 container remove 0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 20:02:20 compute-0 systemd[1]: libpod-conmon-0c15b6b7eb7da19291ef79376dacbaded25e9df09f2fbef0e6ef65e79e51ef33.scope: Deactivated successfully.
Oct 02 20:02:20 compute-0 sudo[445452]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:20 compute-0 podman[445602]: 2025-10-02 20:02:20.239608923 +0000 UTC m=+0.161749530 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:02:20 compute-0 podman[445608]: 2025-10-02 20:02:20.259579814 +0000 UTC m=+0.167634377 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Oct 02 20:02:20 compute-0 sudo[445646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:20 compute-0 sudo[445646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:20 compute-0 sudo[445646]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:20 compute-0 sudo[445674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:02:20 compute-0 sudo[445674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:20 compute-0 sudo[445674]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:20 compute-0 sudo[445699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:20 compute-0 sudo[445699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:20 compute-0 sudo[445699]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:20 compute-0 ceph-mon[191910]: pgmap v1722: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2678840776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:02:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2678840776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:02:20 compute-0 nova_compute[355794]: 2025-10-02 20:02:20.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:20 compute-0 sudo[445724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:02:20 compute-0 sudo[445724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:21 compute-0 podman[445790]: 2025-10-02 20:02:21.322941637 +0000 UTC m=+0.090376973 container create 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:02:21 compute-0 podman[445790]: 2025-10-02 20:02:21.279131132 +0000 UTC m=+0.046566518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:21 compute-0 systemd[1]: Started libpod-conmon-3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96.scope.
Oct 02 20:02:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:21 compute-0 podman[445790]: 2025-10-02 20:02:21.512869865 +0000 UTC m=+0.280305261 container init 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:02:21 compute-0 podman[445790]: 2025-10-02 20:02:21.532072795 +0000 UTC m=+0.299508101 container start 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:02:21 compute-0 podman[445790]: 2025-10-02 20:02:21.537211592 +0000 UTC m=+0.304646938 container attach 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:02:21 compute-0 keen_lehmann[445806]: 167 167
Oct 02 20:02:21 compute-0 systemd[1]: libpod-3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96.scope: Deactivated successfully.
Oct 02 20:02:21 compute-0 podman[445811]: 2025-10-02 20:02:21.638508274 +0000 UTC m=+0.064913006 container died 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b81fe198ce28334cb49446731f4f76feefc18a1cd6a774bf08c4842ba785861-merged.mount: Deactivated successfully.
Oct 02 20:02:21 compute-0 podman[445811]: 2025-10-02 20:02:21.717294668 +0000 UTC m=+0.143699410 container remove 3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:02:21 compute-0 systemd[1]: libpod-conmon-3ceb6b5ce5be11874ee2aed5c83c3c7ba7d3a4a1a0341c63ae00f648dc7b0e96.scope: Deactivated successfully.
Oct 02 20:02:22 compute-0 podman[445833]: 2025-10-02 20:02:22.029719482 +0000 UTC m=+0.098032546 container create 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:02:22 compute-0 podman[445833]: 2025-10-02 20:02:21.994582358 +0000 UTC m=+0.062895482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:22 compute-0 systemd[1]: Started libpod-conmon-50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16.scope.
Oct 02 20:02:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e2a3b99971dc2d4749e2cfd1d1904289f74e6470eb8d38d305fa250b9bdb4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e2a3b99971dc2d4749e2cfd1d1904289f74e6470eb8d38d305fa250b9bdb4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e2a3b99971dc2d4749e2cfd1d1904289f74e6470eb8d38d305fa250b9bdb4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e2a3b99971dc2d4749e2cfd1d1904289f74e6470eb8d38d305fa250b9bdb4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
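[annotation] The 0x7fffffff in the kernel's xfs remount messages above is the largest 32-bit signed time_t; a quick check (illustrative sketch, not part of the captured journal) confirms it decodes to the 2038 cutoff the kernel reports:

from datetime import datetime, timezone

# 0x7fffffff is the 32-bit signed time_t limit; xfs without bigtime
# can only represent inode timestamps up to this instant.
limit = 0x7FFFFFFF
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"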
Oct 02 20:02:22 compute-0 podman[445833]: 2025-10-02 20:02:22.236831527 +0000 UTC m=+0.305144611 container init 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 20:02:22 compute-0 podman[445833]: 2025-10-02 20:02:22.266794833 +0000 UTC m=+0.335107907 container start 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:02:22 compute-0 podman[445833]: 2025-10-02 20:02:22.27758786 +0000 UTC m=+0.345900914 container attach 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:02:22 compute-0 ceph-mon[191910]: pgmap v1723: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:23 compute-0 happy_robinson[445848]: {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     "0": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "devices": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "/dev/loop3"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             ],
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_name": "ceph_lv0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_size": "21470642176",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "name": "ceph_lv0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "tags": {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_name": "ceph",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.crush_device_class": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.encrypted": "0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_id": "0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.vdo": "0"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             },
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "vg_name": "ceph_vg0"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         }
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     ],
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     "1": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "devices": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "/dev/loop4"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             ],
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_name": "ceph_lv1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_size": "21470642176",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "name": "ceph_lv1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "tags": {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_name": "ceph",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.crush_device_class": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.encrypted": "0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_id": "1",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.vdo": "0"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             },
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "vg_name": "ceph_vg1"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         }
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     ],
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     "2": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "devices": [
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "/dev/loop5"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             ],
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_name": "ceph_lv2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_size": "21470642176",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "name": "ceph_lv2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "tags": {
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.cluster_name": "ceph",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.crush_device_class": "",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.encrypted": "0",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osd_id": "2",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:                 "ceph.vdo": "0"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             },
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "type": "block",
Oct 02 20:02:23 compute-0 happy_robinson[445848]:             "vg_name": "ceph_vg2"
Oct 02 20:02:23 compute-0 happy_robinson[445848]:         }
Oct 02 20:02:23 compute-0 happy_robinson[445848]:     ]
Oct 02 20:02:23 compute-0 happy_robinson[445848]: }
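[annotation] The JSON emitted by happy_robinson above is the output of ceph-volume lvm list --format json run for the cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9. A minimal parsing sketch (assumes the dump above was saved to lvm_list.json; the filename is illustrative) maps each OSD id to its LV path, backing device, and osd_fsid tag:

import json

# lvm_list.json: the happy_robinson output above, saved verbatim.
with open("lvm_list.json") as f:
    lvm = json.load(f)

for osd_id, lvs in sorted(lvm.items()):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']}")
# e.g. osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=dbf9fafa-...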
Oct 02 20:02:23 compute-0 systemd[1]: libpod-50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16.scope: Deactivated successfully.
Oct 02 20:02:23 compute-0 podman[445833]: 2025-10-02 20:02:23.071971183 +0000 UTC m=+1.140284257 container died 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 02 20:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e2a3b99971dc2d4749e2cfd1d1904289f74e6470eb8d38d305fa250b9bdb4c-merged.mount: Deactivated successfully.
Oct 02 20:02:23 compute-0 podman[445833]: 2025-10-02 20:02:23.175188526 +0000 UTC m=+1.243501570 container remove 50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_robinson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:02:23 compute-0 systemd[1]: libpod-conmon-50db82d675e17742b695f354e70ec28ac47305a436954b40c5898ef04cdf1b16.scope: Deactivated successfully.
Oct 02 20:02:23 compute-0 sudo[445724]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:23 compute-0 podman[445857]: 2025-10-02 20:02:23.226920271 +0000 UTC m=+0.113061516 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 20:02:23 compute-0 podman[445860]: 2025-10-02 20:02:23.242506436 +0000 UTC m=+0.121319416 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:02:23 compute-0 podman[445866]: 2025-10-02 20:02:23.296505811 +0000 UTC m=+0.152829163 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
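[annotation] The health_status=healthy events above come from podman's periodic healthcheck timers for ovn_metadata_agent, iscsid, and ovn_controller (each configured with test '/openstack/healthcheck', per the config_data). The same checks can be driven by hand with podman healthcheck run; a hedged sketch, with container names taken from the log:

import subprocess

# `podman healthcheck run NAME` exits 0 when the container's configured
# healthcheck passes, non-zero otherwise.
for name in ("ovn_metadata_agent", "iscsid", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")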
Oct 02 20:02:23 compute-0 sudo[445920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:23 compute-0 sudo[445920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:23 compute-0 sudo[445920]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:23 compute-0 sudo[445952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:02:23 compute-0 sudo[445952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:23 compute-0 sudo[445952]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:23 compute-0 sudo[445977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:23 compute-0 sudo[445977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:23 compute-0 sudo[445977]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:23 compute-0 sudo[446002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:02:23 compute-0 sudo[446002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
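[annotation] The sudo entry above shows how cephadm wraps ceph-volume in a short-lived container pinned to the ceph image digest; the result is the JSON dump printed by beautiful_shirley below. An equivalent manual invocation, sketched in Python with flags copied from the logged command line (the --timeout flag is omitted for brevity; treat this as an approximation, not the exact code path cephadm uses):

import json
import subprocess

# Mirrors the logged command: cephadm runs ceph-volume inside the pinned
# ceph container image and prints JSON on stdout.
fsid = "6019f664-a1c2-5955-8391-692cb79a59f9"
image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(
    ["cephadm", "--image", image, "ceph-volume", "--fsid", fsid,
     "--", "raw", "list", "--format", "json"],
    check=True, capture_output=True, text=True).stdout
print(sorted(json.loads(out)))  # osd_uuid keys, as in the dump below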
Oct 02 20:02:24 compute-0 nova_compute[355794]: 2025-10-02 20:02:24.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.302193311 +0000 UTC m=+0.101143660 container create 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.265004252 +0000 UTC m=+0.063954661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:24 compute-0 systemd[1]: Started libpod-conmon-507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713.scope.
Oct 02 20:02:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.452036123 +0000 UTC m=+0.250986532 container init 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.473581766 +0000 UTC m=+0.272532115 container start 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.48089368 +0000 UTC m=+0.279844029 container attach 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:02:24 compute-0 inspiring_haslett[446078]: 167 167
Oct 02 20:02:24 compute-0 systemd[1]: libpod-507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713.scope: Deactivated successfully.
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.485361499 +0000 UTC m=+0.284311818 container died 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d1b377a3997c3b05ca857d193fabda5febba112220926ac9abb60e02f089808-merged.mount: Deactivated successfully.
Oct 02 20:02:24 compute-0 podman[446062]: 2025-10-02 20:02:24.579813319 +0000 UTC m=+0.378763638 container remove 507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 20:02:24 compute-0 systemd[1]: libpod-conmon-507c6ada4096945299e8e104af0d8674bec497a9d4ecd8eefa7656f4ac732713.scope: Deactivated successfully.
Oct 02 20:02:24 compute-0 ceph-mon[191910]: pgmap v1724: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
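[annotation] The recurring pgmap line's "60 GiB / 60 GiB avail" is consistent with the three LVs listed earlier, each with lv_size 21470642176 bytes; a one-line arithmetic check (illustrative only):

# 3 OSDs x 21470642176 bytes (see the lvm list dump above):
print(3 * 21470642176 / 2**30)  # ~59.99 GiB, reported as "60 GiB avail"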
Oct 02 20:02:24 compute-0 podman[446101]: 2025-10-02 20:02:24.876046093 +0000 UTC m=+0.089755887 container create 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 20:02:24 compute-0 podman[446101]: 2025-10-02 20:02:24.84168528 +0000 UTC m=+0.055395164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:02:24 compute-0 systemd[1]: Started libpod-conmon-7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e.scope.
Oct 02 20:02:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a11394cee7c78c3830afda3605ec2af80148c2c10a94d1a8be0fa040a6ba36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a11394cee7c78c3830afda3605ec2af80148c2c10a94d1a8be0fa040a6ba36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a11394cee7c78c3830afda3605ec2af80148c2c10a94d1a8be0fa040a6ba36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a11394cee7c78c3830afda3605ec2af80148c2c10a94d1a8be0fa040a6ba36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:02:25 compute-0 podman[446101]: 2025-10-02 20:02:25.027436997 +0000 UTC m=+0.241146851 container init 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:02:25 compute-0 podman[446101]: 2025-10-02 20:02:25.04372329 +0000 UTC m=+0.257433094 container start 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:02:25 compute-0 podman[446101]: 2025-10-02 20:02:25.049614716 +0000 UTC m=+0.263324570 container attach 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:02:25 compute-0 podman[446119]: 2025-10-02 20:02:25.059660103 +0000 UTC m=+0.104089547 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:02:25 compute-0 podman[446115]: 2025-10-02 20:02:25.069722881 +0000 UTC m=+0.115424479 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 20:02:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:25 compute-0 nova_compute[355794]: 2025-10-02 20:02:25.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]: {
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_id": 1,
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "type": "bluestore"
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     },
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_id": 2,
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "type": "bluestore"
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     },
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_id": 0,
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:         "type": "bluestore"
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]:     }
Oct 02 20:02:26 compute-0 beautiful_shirley[446123]: }
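[annotation] beautiful_shirley's output is ceph-volume raw list --format json for the same cluster; it is keyed by osd_uuid rather than OSD id, so it should agree with the lvm list dump via the ceph.osd_fsid tag. A hedged cross-check sketch (assumes both dumps were saved locally; filenames are illustrative):

import json

# raw_list.json / lvm_list.json: the beautiful_shirley and
# happy_robinson dumps above, saved verbatim.
raw = json.load(open("raw_list.json"))
lvm = json.load(open("lvm_list.json"))

for uuid, osd in raw.items():
    assert osd["type"] == "bluestore"
    # Find the matching LV by its ceph.osd_fsid tag.
    match = [lv for lvs in lvm.values() for lv in lvs
             if lv["tags"]["ceph.osd_fsid"] == uuid]
    assert match, f"no LV found for osd_uuid {uuid}"
    assert str(osd["osd_id"]) == match[0]["tags"]["ceph.osd_id"]
    print(f"osd.{osd['osd_id']}: {osd['device']} <-> {match[0]['lv_path']}")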
Oct 02 20:02:26 compute-0 systemd[1]: libpod-7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e.scope: Deactivated successfully.
Oct 02 20:02:26 compute-0 podman[446101]: 2025-10-02 20:02:26.254184351 +0000 UTC m=+1.467894195 container died 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:02:26 compute-0 systemd[1]: libpod-7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e.scope: Consumed 1.206s CPU time.
Oct 02 20:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a11394cee7c78c3830afda3605ec2af80148c2c10a94d1a8be0fa040a6ba36-merged.mount: Deactivated successfully.
Oct 02 20:02:26 compute-0 podman[446101]: 2025-10-02 20:02:26.353668975 +0000 UTC m=+1.567378779 container remove 7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 20:02:26 compute-0 systemd[1]: libpod-conmon-7c11c2fd195ac49bc44dc7f62eb431532e2e7628169b6b6619d56504243c379e.scope: Deactivated successfully.
Oct 02 20:02:26 compute-0 sudo[446002]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:02:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:02:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 16c3eff2-7ab5-459d-b7c1-1790342aaf97 does not exist
Oct 02 20:02:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a1814d2b-9cca-4017-a12f-cb7a4aac1cc3 does not exist
Oct 02 20:02:26 compute-0 sudo[446206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:02:26 compute-0 sudo[446206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:26 compute-0 sudo[446206]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:26 compute-0 ceph-mon[191910]: pgmap v1725: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:26 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:02:26 compute-0 sudo[446231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:02:26 compute-0 sudo[446231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:02:26 compute-0 sudo[446231]: pam_unix(sudo:session): session closed for user root
Oct 02 20:02:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:28 compute-0 ceph-mon[191910]: pgmap v1726: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:29 compute-0 nova_compute[355794]: 2025-10-02 20:02:29.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:29 compute-0 podman[157186]: time="2025-10-02T20:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:02:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:02:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
Oct 02 20:02:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:30 compute-0 nova_compute[355794]: 2025-10-02 20:02:30.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:30 compute-0 ceph-mon[191910]: pgmap v1727: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: ERROR   20:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:02:31 compute-0 openstack_network_exporter[372736]: 
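[annotation] The exporter errors above mean no ovsdb-server or ovn-northd control sockets were found; on a compute node that runs ovn-controller but not ovn-northd, the northd errors are expected. The check amounts to looking for <daemon>.<pid>.ctl files in the run directories (a hedged approximation of what appctl.go does; the paths are the conventional OVS/OVN run directories, not taken from the exporter's source):

import glob

# ovs-appctl-style tools locate a daemon via its <name>.<pid>.ctl socket.
for rundir, daemon in (("/var/run/openvswitch", "ovsdb-server"),
                       ("/var/run/ovn", "ovn-northd")):
    hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
    print(daemon, hits or "no control socket files found")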
Oct 02 20:02:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:02:32.316 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:02:32.317 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:02:32.317 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:32 compute-0 ceph-mon[191910]: pgmap v1728: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:02:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:02:34 compute-0 nova_compute[355794]: 2025-10-02 20:02:34.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:34 compute-0 ceph-mon[191910]: pgmap v1729: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:35 compute-0 nova_compute[355794]: 2025-10-02 20:02:35.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:36 compute-0 podman[446256]: 2025-10-02 20:02:36.684023391 +0000 UTC m=+0.108273608 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:02:36 compute-0 ceph-mon[191910]: pgmap v1730: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:38 compute-0 ceph-mon[191910]: pgmap v1731: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:39 compute-0 nova_compute[355794]: 2025-10-02 20:02:39.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:39 compute-0 ceph-mon[191910]: pgmap v1732: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:40 compute-0 nova_compute[355794]: 2025-10-02 20:02:40.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:42 compute-0 ceph-mon[191910]: pgmap v1733: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:44 compute-0 nova_compute[355794]: 2025-10-02 20:02:44.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:44 compute-0 ceph-mon[191910]: pgmap v1734: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:45 compute-0 nova_compute[355794]: 2025-10-02 20:02:45.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:45 compute-0 podman[446276]: 2025-10-02 20:02:45.70780646 +0000 UTC m=+0.122595670 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:02:45 compute-0 podman[446277]: 2025-10-02 20:02:45.741482645 +0000 UTC m=+0.151826906 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:02:46 compute-0 ceph-mon[191910]: pgmap v1735: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:48 compute-0 ceph-mon[191910]: pgmap v1736: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:49 compute-0 nova_compute[355794]: 2025-10-02 20:02:49.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:50 compute-0 ceph-mon[191910]: pgmap v1737: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:50 compute-0 nova_compute[355794]: 2025-10-02 20:02:50.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:50 compute-0 podman[446319]: 2025-10-02 20:02:50.767045427 +0000 UTC m=+0.177044837 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 20:02:50 compute-0 podman[446320]: 2025-10-02 20:02:50.773036916 +0000 UTC m=+0.176397349 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, name=ubi9)
Oct 02 20:02:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:52 compute-0 ceph-mon[191910]: pgmap v1738: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:53 compute-0 podman[446356]: 2025-10-02 20:02:53.677867172 +0000 UTC m=+0.101729175 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 20:02:53 compute-0 podman[446357]: 2025-10-02 20:02:53.7127842 +0000 UTC m=+0.119275141 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:02:53 compute-0 podman[446358]: 2025-10-02 20:02:53.728920389 +0000 UTC m=+0.140091324 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 20:02:54 compute-0 nova_compute[355794]: 2025-10-02 20:02:54.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:54 compute-0 ceph-mon[191910]: pgmap v1739: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:02:55 compute-0 podman[446419]: 2025-10-02 20:02:55.685644655 +0000 UTC m=+0.104210151 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:02:55 compute-0 nova_compute[355794]: 2025-10-02 20:02:55.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:55 compute-0 podman[446418]: 2025-10-02 20:02:55.703474999 +0000 UTC m=+0.126209596 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64)
Oct 02 20:02:56 compute-0 ceph-mon[191910]: pgmap v1740: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:57 compute-0 nova_compute[355794]: 2025-10-02 20:02:57.595 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:57 compute-0 nova_compute[355794]: 2025-10-02 20:02:57.596 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
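The skip is the expected outcome here: reclaim_instance_interval defaults to 0, which disables deferred (soft) delete entirely, so _reclaim_queued_deletes returns without doing work each cycle. If soft delete were wanted, the knob is a nova.conf setting along these lines (interval value illustrative):

    [DEFAULT]
    # > 0 turns deletes into SOFT_DELETED and reclaims them after this
    # many seconds; 0 (the default) disables the periodic task's work
    reclaim_instance_interval = 3600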
Oct 02 20:02:58 compute-0 ceph-mon[191910]: pgmap v1741: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:59 compute-0 nova_compute[355794]: 2025-10-02 20:02:59.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:02:59 compute-0 podman[157186]: time="2025-10-02T20:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:02:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:02:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
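The two GETs are podman_exporter scraping the libpod REST API through the socket named by its CONTAINER_HOST (unix:///run/podman/podman.sock in its config above). The same containers/json query from Python, as a sketch over that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # ordinary HTTP, but connect() dials a unix socket instead of TCP
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")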
Oct 02 20:03:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:00 compute-0 nova_compute[355794]: 2025-10-02 20:03:00.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:00 compute-0 ceph-mon[191910]: pgmap v1742: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:00 compute-0 nova_compute[355794]: 2025-10-02 20:03:00.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:01 compute-0 openstack_network_exporter[372736]: ERROR   20:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:03:01 compute-0 openstack_network_exporter[372736]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:01 compute-0 openstack_network_exporter[372736]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:01 compute-0 openstack_network_exporter[372736]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:03:01 compute-0 openstack_network_exporter[372736]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
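The exporter errors above mean it found no ovsdb-server or ovn-northd control sockets to talk to: ovn-northd does not run on a compute node, and without a userspace (netdev) datapath the pmd-* appctl calls have nothing to query, so on this host the messages are noise rather than a fault. A quick way to see which control sockets actually exist (paths assumed from the runtime dirs mounted into the exporter container above):

    import glob

    # /run/openvswitch and /run/ovn are the runtime dirs the exporter
    # container mounts; *.ctl files are the appctl control sockets
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")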
Oct 02 20:03:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:01 compute-0 nova_compute[355794]: 2025-10-02 20:03:01.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:01 compute-0 nova_compute[355794]: 2025-10-02 20:03:01.574 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:03:01 compute-0 nova_compute[355794]: 2025-10-02 20:03:01.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:03:02 compute-0 nova_compute[355794]: 2025-10-02 20:03:02.246 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:03:02 compute-0 nova_compute[355794]: 2025-10-02 20:03:02.247 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:03:02 compute-0 nova_compute[355794]: 2025-10-02 20:03:02.248 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:03:02 compute-0 nova_compute[355794]: 2025-10-02 20:03:02.249 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:03:02 compute-0 ceph-mon[191910]: pgmap v1743: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:03:03
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups']
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.301 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.302 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
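The first of those three messages is the polling manager noting that the [pollsters] source defines more pollsters than it has worker threads (one, per the second message), so polls queue on the executor and a polling cycle stretches out. The effect in miniature (illustrative Python, not ceilometer's API):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)  # stand-in for one libvirt/metric query
        return meter

    meters = ["disk.device.read.requests", "disk.device.usage",
              "disk.device.write.bytes", "disk.device.write.latency"]
    # one worker for many pollsters: the tasks serialize, so cycle time
    # grows linearly with the meter count (what the DEBUG line warns of)
    with ThreadPoolExecutor(max_workers=1) as pool:
        t0 = time.time()
        list(pool.map(poll, meters))
        print(f"cycle took {time.time() - t0:.1f}s")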
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:03:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.314 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.314 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:03:04.314903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.388 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.389 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.390 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:03:04.392092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.428 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.429 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.429 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.430 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.431 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.432 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:03:04.431365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.433 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:03:04.435317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.438 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:03:04.438661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.479 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.481 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.484 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:03:04.481818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.485 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:03:04.486168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.492 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:03:04.495171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.497 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:03:04.498657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:03:04.501217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:03:04.505561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:03:04.509193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:03:04.513663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.518 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.518 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.519 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.520 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:03:04.517860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:03:04.520752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.523 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.523 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:03:04.522746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.526 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.528 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:03:04.526078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.530 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:03:04.527931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:03:04.530237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.532 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:03:04.532577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.535 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.536 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 50740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:03:04.534845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:03:04.535865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:03:04.536911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:03:04.537986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:03:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.629 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:03:04 compute-0 ceph-mon[191910]: pgmap v1744: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.692 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.693 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.694 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.694 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.694 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.725 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.726 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.727 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.728 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:03:04 compute-0 nova_compute[355794]: 2025-10-02 20:03:04.729 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:03:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239639645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.212 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.335 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.336 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.336 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:03:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:05 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3239639645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.951 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.954 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3859MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.955 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:05 compute-0 nova_compute[355794]: 2025-10-02 20:03:05.955 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.059 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.060 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.061 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.107 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:03:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117136726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.625 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.639 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.660 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.663 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.663 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:06 compute-0 ceph-mon[191910]: pgmap v1745: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/117136726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.947 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.948 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.948 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.949 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.973 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.974 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:06 compute-0 nova_compute[355794]: 2025-10-02 20:03:06.975 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:07 compute-0 nova_compute[355794]: 2025-10-02 20:03:07.003 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:07 compute-0 podman[446505]: 2025-10-02 20:03:07.718845889 +0000 UTC m=+0.135810451 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:03:08 compute-0 ceph-mon[191910]: pgmap v1746: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:09 compute-0 nova_compute[355794]: 2025-10-02 20:03:09.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:10 compute-0 ceph-mon[191910]: pgmap v1747: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:10 compute-0 nova_compute[355794]: 2025-10-02 20:03:10.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:12 compute-0 ceph-mon[191910]: pgmap v1748: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:03:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:14 compute-0 nova_compute[355794]: 2025-10-02 20:03:14.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:14 compute-0 ceph-mon[191910]: pgmap v1749: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:15 compute-0 nova_compute[355794]: 2025-10-02 20:03:15.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:16 compute-0 podman[446525]: 2025-10-02 20:03:16.703262781 +0000 UTC m=+0.125219119 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:03:16 compute-0 podman[446526]: 2025-10-02 20:03:16.727885335 +0000 UTC m=+0.141203424 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 20:03:16 compute-0 ceph-mon[191910]: pgmap v1750: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:18 compute-0 ceph-mon[191910]: pgmap v1751: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:19 compute-0 nova_compute[355794]: 2025-10-02 20:03:19.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:03:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178562790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:03:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:03:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178562790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:03:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:20 compute-0 nova_compute[355794]: 2025-10-02 20:03:20.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:20 compute-0 ceph-mon[191910]: pgmap v1752: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4178562790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:03:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/4178562790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:03:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:21 compute-0 podman[446568]: 2025-10-02 20:03:21.705211696 +0000 UTC m=+0.128365843 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, vcs-type=git, container_name=kepler, architecture=x86_64, release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30)
Oct 02 20:03:21 compute-0 podman[446567]: 2025-10-02 20:03:21.730994851 +0000 UTC m=+0.158537844 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 20:03:21 compute-0 ceph-mon[191910]: pgmap v1753: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:24 compute-0 nova_compute[355794]: 2025-10-02 20:03:24.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:24 compute-0 ceph-mon[191910]: pgmap v1754: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:24 compute-0 podman[446602]: 2025-10-02 20:03:24.694532768 +0000 UTC m=+0.121055369 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 20:03:24 compute-0 podman[446603]: 2025-10-02 20:03:24.703824295 +0000 UTC m=+0.108894606 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:03:24 compute-0 podman[446609]: 2025-10-02 20:03:24.756132075 +0000 UTC m=+0.153357287 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 20:03:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:25 compute-0 nova_compute[355794]: 2025-10-02 20:03:25.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:26 compute-0 ceph-mon[191910]: pgmap v1755: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:26 compute-0 podman[446665]: 2025-10-02 20:03:26.722980461 +0000 UTC m=+0.135718068 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:03:26 compute-0 podman[446664]: 2025-10-02 20:03:26.739757117 +0000 UTC m=+0.162461479 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Oct 02 20:03:26 compute-0 sudo[446707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:26 compute-0 sudo[446707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:26 compute-0 sudo[446707]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:27 compute-0 sudo[446732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:03:27 compute-0 sudo[446732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:27 compute-0 sudo[446732]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:27 compute-0 sudo[446757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:27 compute-0 sudo[446757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:27 compute-0 sudo[446757]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:27 compute-0 sudo[446782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 20:03:27 compute-0 sudo[446782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:27 compute-0 sudo[446782]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:03:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:03:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:27 compute-0 sudo[446828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:27 compute-0 sudo[446828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:27 compute-0 sudo[446828]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:28 compute-0 sudo[446853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:03:28 compute-0 sudo[446853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:28 compute-0 sudo[446853]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:28 compute-0 sudo[446878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:28 compute-0 sudo[446878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:28 compute-0 sudo[446878]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:28 compute-0 sudo[446903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:03:28 compute-0 sudo[446903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:28 compute-0 ceph-mon[191910]: pgmap v1756: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:29 compute-0 sudo[446903]: pam_unix(sudo:session): session closed for user root
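The short sudo bursts above (/bin/true, /bin/which python3) are the connectivity and interpreter probes the cephadm mgr module runs over SSH before each real call; the hash-suffixed cephadm copy under /var/lib/ceph/<fsid>/ is then invoked for check-host and gather-facts. A hedged manual replay, with paths copied verbatim from the log:

    # Replay the host validation and fact gathering the orchestrator drives above
    sudo /bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 check-host
    sudo /bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 gather-facts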
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 950345a1-cbd2-4aec-aa12-da4bf17aa584 does not exist
Oct 02 20:03:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 476d00d5-20fd-4d15-9542-fb2602778f10 does not exist
Oct 02 20:03:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e7c7e42e-f96d-430c-9fa9-9a2f4f50f547 does not exist
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:03:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:03:29 compute-0 nova_compute[355794]: 2025-10-02 20:03:29.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:29 compute-0 sudo[446961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:29 compute-0 sudo[446961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:29 compute-0 sudo[446961]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:03:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
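The mon audit lines above show the mgr dispatching config generate-minimal-conf and auth get to assemble a client config and keyring for the OSD deployment. A sketch of issuing the same command directly, assuming an admin keyring is available on this host:

    # Print the minimal ceph.conf the mon hands out (same command the mgr dispatched above)
    sudo cephadm shell -- ceph config generate-minimal-conf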
Oct 02 20:03:29 compute-0 sudo[446986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:03:29 compute-0 sudo[446986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:29 compute-0 sudo[446986]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:29 compute-0 sudo[447011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:29 compute-0 sudo[447011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:29 compute-0 sudo[447011]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:29 compute-0 podman[157186]: time="2025-10-02T20:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:03:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:03:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9073 "" "Go-http-client/1.1"
Oct 02 20:03:29 compute-0 sudo[447036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:03:29 compute-0 sudo[447036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.493180437 +0000 UTC m=+0.096872056 container create d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 20:03:30 compute-0 ceph-mon[191910]: pgmap v1757: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.456705117 +0000 UTC m=+0.060396806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:30 compute-0 systemd[1]: Started libpod-conmon-d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60.scope.
Oct 02 20:03:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.666665628 +0000 UTC m=+0.270357267 container init d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.68482761 +0000 UTC m=+0.288519219 container start d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.691518638 +0000 UTC m=+0.295210267 container attach d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:03:30 compute-0 festive_aryabhata[447115]: 167 167
Oct 02 20:03:30 compute-0 systemd[1]: libpod-d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60.scope: Deactivated successfully.
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.70101197 +0000 UTC m=+0.304703589 container died d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:03:30 compute-0 nova_compute[355794]: 2025-10-02 20:03:30.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a147082329fb8f10f8b25add7e376ce1ab59064045955fe2429e1b5457fd4c02-merged.mount: Deactivated successfully.
Oct 02 20:03:30 compute-0 podman[447099]: 2025-10-02 20:03:30.780677278 +0000 UTC m=+0.384368867 container remove d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 20:03:30 compute-0 systemd[1]: libpod-conmon-d326ee7b6ddfefadd2dd1f9f8891eb4369e9c9b2a775ca0e1d279649bb90fb60.scope: Deactivated successfully.
Oct 02 20:03:31 compute-0 podman[447138]: 2025-10-02 20:03:31.123115129 +0000 UTC m=+0.112450779 container create c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 20:03:31 compute-0 podman[447138]: 2025-10-02 20:03:31.079323555 +0000 UTC m=+0.068659255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:31 compute-0 systemd[1]: Started libpod-conmon-c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9.scope.
Oct 02 20:03:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:31 compute-0 podman[447138]: 2025-10-02 20:03:31.296509788 +0000 UTC m=+0.285845448 container init c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:03:31 compute-0 podman[447138]: 2025-10-02 20:03:31.319721305 +0000 UTC m=+0.309056935 container start c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 20:03:31 compute-0 podman[447138]: 2025-10-02 20:03:31.328359234 +0000 UTC m=+0.317694894 container attach c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:03:31 compute-0 openstack_network_exporter[372736]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:31 compute-0 openstack_network_exporter[372736]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:31 compute-0 openstack_network_exporter[372736]: ERROR   20:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:03:31 compute-0 openstack_network_exporter[372736]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:03:31 compute-0 openstack_network_exporter[372736]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
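The exporter errors above mean it cannot find control sockets for ovn-northd or ovsdb-server, which is expected if those daemons run only on controller nodes rather than on this compute host. A quick check of the host paths the exporter's config_data mounts (paths assumed from the volume list logged earlier):

    # Look for daemon control sockets under the host directories mapped into the exporter
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null \
        || echo "no control sockets found"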
Oct 02 20:03:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:03:32.317 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:03:32.318 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:03:32.318 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:32 compute-0 ceph-mon[191910]: pgmap v1758: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:32 compute-0 adoring_jemison[447154]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:03:32 compute-0 adoring_jemison[447154]: --> relative data size: 1.0
Oct 02 20:03:32 compute-0 adoring_jemison[447154]: --> All data devices are unavailable
Oct 02 20:03:32 compute-0 systemd[1]: libpod-c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9.scope: Deactivated successfully.
Oct 02 20:03:32 compute-0 systemd[1]: libpod-c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9.scope: Consumed 1.361s CPU time.
Oct 02 20:03:32 compute-0 podman[447138]: 2025-10-02 20:03:32.76769844 +0000 UTC m=+1.757034090 container died c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 20:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22a0eafaea33ccc8e9fd6a9ada90e3a4d17b696fb8ef723486078c2fb4a66e1-merged.mount: Deactivated successfully.
Oct 02 20:03:32 compute-0 podman[447138]: 2025-10-02 20:03:32.868920841 +0000 UTC m=+1.858256501 container remove c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:03:32 compute-0 systemd[1]: libpod-conmon-c9530e1e1bb43891a2528bdc0d380abbbb7be3d1ab3a98ac653435af0f48ceb9.scope: Deactivated successfully.
Oct 02 20:03:32 compute-0 sudo[447036]: pam_unix(sudo:session): session closed for user root
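The lvm batch run that just closed reported "All data devices are unavailable", most likely because the three logical volumes already carry OSDs 0-2, as the lvm list output further down confirms; cephadm treats this as a no-op rather than an error. A hedged dry-run replay of the same call (--report prints what ceph-volume would do without creating anything):

    sudo cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- \
        lvm batch --no-auto --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2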
Oct 02 20:03:33 compute-0 sudo[447195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:33 compute-0 sudo[447195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:33 compute-0 sudo[447195]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:33 compute-0 sudo[447220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:03:33 compute-0 sudo[447220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:33 compute-0 sudo[447220]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:33 compute-0 sudo[447245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:33 compute-0 sudo[447245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:33 compute-0 sudo[447245]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:33 compute-0 sudo[447270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:03:33 compute-0 sudo[447270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:03:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.25200257 +0000 UTC m=+0.099371432 container create a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 20:03:34 compute-0 nova_compute[355794]: 2025-10-02 20:03:34.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.219141927 +0000 UTC m=+0.066510809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:34 compute-0 systemd[1]: Started libpod-conmon-a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c.scope.
Oct 02 20:03:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.411223472 +0000 UTC m=+0.258592384 container init a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.43070107 +0000 UTC m=+0.278069912 container start a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.437816539 +0000 UTC m=+0.285185451 container attach a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:03:34 compute-0 sweet_knuth[447351]: 167 167
Oct 02 20:03:34 compute-0 systemd[1]: libpod-a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c.scope: Deactivated successfully.
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.445478023 +0000 UTC m=+0.292846935 container died a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1ef515652be2eac30b1810e52e63404842f1df43d26384c7e833e8261b902ed-merged.mount: Deactivated successfully.
Oct 02 20:03:34 compute-0 podman[447335]: 2025-10-02 20:03:34.522803698 +0000 UTC m=+0.370172550 container remove a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:03:34 compute-0 systemd[1]: libpod-conmon-a9f3b12f6d16b1b2fe76cb9d916a9e9f7d35a1ec97d1af4cc58e70a87200390c.scope: Deactivated successfully.
Oct 02 20:03:34 compute-0 ceph-mon[191910]: pgmap v1759: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:34 compute-0 podman[447373]: 2025-10-02 20:03:34.876229982 +0000 UTC m=+0.107501029 container create 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:03:34 compute-0 podman[447373]: 2025-10-02 20:03:34.836160117 +0000 UTC m=+0.067431214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:34 compute-0 systemd[1]: Started libpod-conmon-5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b.scope.
Oct 02 20:03:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb693c544d318b7db28030e217586ed91a93c3808bb454e3284ee39cc20cf31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb693c544d318b7db28030e217586ed91a93c3808bb454e3284ee39cc20cf31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb693c544d318b7db28030e217586ed91a93c3808bb454e3284ee39cc20cf31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb693c544d318b7db28030e217586ed91a93c3808bb454e3284ee39cc20cf31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:35 compute-0 podman[447373]: 2025-10-02 20:03:35.063057947 +0000 UTC m=+0.294329054 container init 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:03:35 compute-0 podman[447373]: 2025-10-02 20:03:35.096759823 +0000 UTC m=+0.328030870 container start 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:03:35 compute-0 podman[447373]: 2025-10-02 20:03:35.103675387 +0000 UTC m=+0.334946474 container attach 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:03:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:35 compute-0 nova_compute[355794]: 2025-10-02 20:03:35.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]: {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     "0": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "devices": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "/dev/loop3"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             ],
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_name": "ceph_lv0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_size": "21470642176",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "name": "ceph_lv0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "tags": {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_name": "ceph",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.crush_device_class": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.encrypted": "0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_id": "0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.vdo": "0"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             },
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "vg_name": "ceph_vg0"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         }
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     ],
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     "1": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "devices": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "/dev/loop4"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             ],
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_name": "ceph_lv1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_size": "21470642176",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "name": "ceph_lv1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "tags": {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_name": "ceph",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.crush_device_class": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.encrypted": "0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_id": "1",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.vdo": "0"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             },
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "vg_name": "ceph_vg1"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         }
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     ],
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     "2": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "devices": [
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "/dev/loop5"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             ],
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_name": "ceph_lv2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_size": "21470642176",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "name": "ceph_lv2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "tags": {
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.cluster_name": "ceph",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.crush_device_class": "",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.encrypted": "0",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osd_id": "2",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:                 "ceph.vdo": "0"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             },
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "type": "block",
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:             "vg_name": "ceph_vg2"
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:         }
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]:     ]
Oct 02 20:03:36 compute-0 upbeat_mcclintock[447390]: }
Oct 02 20:03:36 compute-0 systemd[1]: libpod-5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b.scope: Deactivated successfully.
Oct 02 20:03:36 compute-0 podman[447373]: 2025-10-02 20:03:36.08014975 +0000 UTC m=+1.311420797 container died 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb693c544d318b7db28030e217586ed91a93c3808bb454e3284ee39cc20cf31-merged.mount: Deactivated successfully.
Oct 02 20:03:36 compute-0 podman[447373]: 2025-10-02 20:03:36.184462863 +0000 UTC m=+1.415733870 container remove 5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 20:03:36 compute-0 systemd[1]: libpod-conmon-5289ed19170e7b2f7111e3709d3cc4e702b8843961bcb6475b818b86ba30f62b.scope: Deactivated successfully.
Oct 02 20:03:36 compute-0 sudo[447270]: pam_unix(sudo:session): session closed for user root
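The JSON block above is the output of ceph-volume lvm list --format json, keyed by OSD id; each entry ties an OSD to its logical volume and the loop device backing it. A small jq sketch to condense it, assuming the JSON has been captured to the hypothetical file /tmp/lvm_list.json:

    # Print one row per OSD: id, LV path, underlying device
    jq -r 'to_entries[] | [.key, .value[0].lv_path, .value[0].devices[0]] | @tsv' /tmp/lvm_list.json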
Oct 02 20:03:36 compute-0 sudo[447411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:36 compute-0 sudo[447411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:36 compute-0 sudo[447411]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:36 compute-0 sudo[447436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:03:36 compute-0 sudo[447436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:36 compute-0 sudo[447436]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:36 compute-0 ceph-mon[191910]: pgmap v1760: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:36 compute-0 sudo[447461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:36 compute-0 sudo[447461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:36 compute-0 sudo[447461]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:36 compute-0 sudo[447486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:03:36 compute-0 sudo[447486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.339547874 +0000 UTC m=+0.093887057 container create e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.306856415 +0000 UTC m=+0.061195648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:37 compute-0 systemd[1]: Started libpod-conmon-e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd.scope.
Oct 02 20:03:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.493011222 +0000 UTC m=+0.247350445 container init e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.509667035 +0000 UTC m=+0.264006218 container start e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.51589981 +0000 UTC m=+0.270239053 container attach e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 20:03:37 compute-0 awesome_shirley[447565]: 167 167
Oct 02 20:03:37 compute-0 systemd[1]: libpod-e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd.scope: Deactivated successfully.
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.521756616 +0000 UTC m=+0.276095789 container died e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-49dcf877d3f2f312a30b2deb0e1208584afa549d934bb62de1d911c986233a98-merged.mount: Deactivated successfully.
Oct 02 20:03:37 compute-0 podman[447549]: 2025-10-02 20:03:37.59716534 +0000 UTC m=+0.351504493 container remove e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:03:37 compute-0 systemd[1]: libpod-conmon-e285ee3ecb79d260e5d50d3679fed8ad3a68fbe1bfbb8a5c6488ae40f3a2d3cd.scope: Deactivated successfully.
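The short-lived awesome_shirley container above printed only "167 167" before exiting. That output is consistent with cephadm's uid/gid probe, which appears to stat /var/lib/ceph inside the ceph image to learn which uid/gid the daemons should run as (167 is the ceph account in the official images). A minimal reproduction sketch, assuming podman is available and the image digest from the log is pullable:

    # Hypothetical reproduction of the uid/gid probe seen above; the stat
    # target and expected "167 167" output are assumptions based on the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # expected: 167 167, matching the container output above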
Oct 02 20:03:37 compute-0 podman[447589]: 2025-10-02 20:03:37.918039168 +0000 UTC m=+0.101062897 container create 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:03:37 compute-0 podman[447589]: 2025-10-02 20:03:37.8800954 +0000 UTC m=+0.063119189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:03:37 compute-0 systemd[1]: Started libpod-conmon-353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0.scope.
Oct 02 20:03:38 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258aa6334c969a1223e295d0a275b5132374bbb48c665b8f22dcf9ee0da067be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258aa6334c969a1223e295d0a275b5132374bbb48c665b8f22dcf9ee0da067be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258aa6334c969a1223e295d0a275b5132374bbb48c665b8f22dcf9ee0da067be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/258aa6334c969a1223e295d0a275b5132374bbb48c665b8f22dcf9ee0da067be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:03:38 compute-0 podman[447589]: 2025-10-02 20:03:38.058984534 +0000 UTC m=+0.242008253 container init 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:03:38 compute-0 podman[447589]: 2025-10-02 20:03:38.080224359 +0000 UTC m=+0.263248058 container start 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:03:38 compute-0 podman[447589]: 2025-10-02 20:03:38.085990522 +0000 UTC m=+0.269014221 container attach 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:03:38 compute-0 podman[447603]: 2025-10-02 20:03:38.098753981 +0000 UTC m=+0.103103751 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 20:03:38 compute-0 ceph-mon[191910]: pgmap v1761: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:39 compute-0 nova_compute[355794]: 2025-10-02 20:03:39.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:39 compute-0 stoic_hellman[447611]: {
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_id": 1,
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "type": "bluestore"
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     },
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_id": 2,
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "type": "bluestore"
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     },
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_id": 0,
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:         "type": "bluestore"
Oct 02 20:03:39 compute-0 stoic_hellman[447611]:     }
Oct 02 20:03:39 compute-0 stoic_hellman[447611]: }
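The JSON that stoic_hellman emitted above maps each OSD uuid to its cluster fsid, backing LV, id, and store type; the shape resembles ceph-volume's list --format json output, which cephadm uses to refresh per-host OSD state. A minimal parsing sketch (osd_list.json is a hypothetical file holding the object logged above):

    import json

    with open("osd_list.json") as f:   # hypothetical dump of the JSON above
        osds = json.load(f)

    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        # each top-level key repeats the record's own osd_uuid field
        assert info["osd_uuid"] == osd_uuid
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"(type={info['type']}, fsid={info['ceph_fsid']})")

Run against the logged data, this lists osd.0 through osd.2 on /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, all in the same fsid.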
Oct 02 20:03:39 compute-0 systemd[1]: libpod-353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0.scope: Deactivated successfully.
Oct 02 20:03:39 compute-0 systemd[1]: libpod-353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0.scope: Consumed 1.297s CPU time.
Oct 02 20:03:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:39 compute-0 podman[447658]: 2025-10-02 20:03:39.483745863 +0000 UTC m=+0.075615791 container died 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-258aa6334c969a1223e295d0a275b5132374bbb48c665b8f22dcf9ee0da067be-merged.mount: Deactivated successfully.
Oct 02 20:03:39 compute-0 podman[447658]: 2025-10-02 20:03:39.591948179 +0000 UTC m=+0.183818047 container remove 353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hellman, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:03:39 compute-0 systemd[1]: libpod-conmon-353d3b130e7c9e85bbb0624168a3f520344839014ded436dfef977c0ebc95dd0.scope: Deactivated successfully.
Oct 02 20:03:39 compute-0 sudo[447486]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:03:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:03:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a4ddf7c7-6628-4399-8c9c-083ec5330233 does not exist
Oct 02 20:03:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6b7cde73-b479-4d56-97aa-c7bbe752cd19 does not exist
Oct 02 20:03:39 compute-0 sudo[447673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:03:39 compute-0 sudo[447673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:39 compute-0 sudo[447673]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:40 compute-0 sudo[447698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:03:40 compute-0 sudo[447698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:03:40 compute-0 sudo[447698]: pam_unix(sudo:session): session closed for user root
Oct 02 20:03:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:40 compute-0 ceph-mon[191910]: pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:03:40 compute-0 nova_compute[355794]: 2025-10-02 20:03:40.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:42 compute-0 ceph-mon[191910]: pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:44 compute-0 nova_compute[355794]: 2025-10-02 20:03:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:44 compute-0 ceph-mon[191910]: pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:45 compute-0 nova_compute[355794]: 2025-10-02 20:03:45.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:46 compute-0 ceph-mon[191910]: pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:47 compute-0 podman[447723]: 2025-10-02 20:03:47.678106618 +0000 UTC m=+0.094944815 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:03:47 compute-0 podman[447724]: 2025-10-02 20:03:47.717065973 +0000 UTC m=+0.130253683 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 20:03:48 compute-0 ceph-mon[191910]: pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:49 compute-0 nova_compute[355794]: 2025-10-02 20:03:49.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:50 compute-0 nova_compute[355794]: 2025-10-02 20:03:50.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:50 compute-0 ceph-mon[191910]: pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:51 compute-0 ceph-mon[191910]: pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:52 compute-0 podman[447766]: 2025-10-02 20:03:52.695105141 +0000 UTC m=+0.111629948 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Oct 02 20:03:52 compute-0 podman[447765]: 2025-10-02 20:03:52.708645611 +0000 UTC m=+0.129096562 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 20:03:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:54 compute-0 nova_compute[355794]: 2025-10-02 20:03:54.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:54 compute-0 ceph-mon[191910]: pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:03:55 compute-0 nova_compute[355794]: 2025-10-02 20:03:55.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:55 compute-0 podman[447802]: 2025-10-02 20:03:55.747107618 +0000 UTC m=+0.158077673 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:03:55 compute-0 podman[447803]: 2025-10-02 20:03:55.750442487 +0000 UTC m=+0.155908335 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:03:55 compute-0 podman[447804]: 2025-10-02 20:03:55.792854364 +0000 UTC m=+0.193074883 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 02 20:03:56 compute-0 ceph-mon[191910]: pgmap v1770: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:57 compute-0 podman[447866]: 2025-10-02 20:03:57.737236193 +0000 UTC m=+0.154286011 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 02 20:03:57 compute-0 podman[447867]: 2025-10-02 20:03:57.744301741 +0000 UTC m=+0.152770221 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:03:58 compute-0 ceph-mon[191910]: pgmap v1771: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:58 compute-0 nova_compute[355794]: 2025-10-02 20:03:58.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:58 compute-0 nova_compute[355794]: 2025-10-02 20:03:58.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:03:59 compute-0 nova_compute[355794]: 2025-10-02 20:03:59.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:03:59 compute-0 podman[157186]: time="2025-10-02T20:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:03:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:03:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
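The two GET requests above are the podman_exporter polling the libpod REST API over the podman service socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). A minimal sketch of the same containers/json query over that unix socket, assuming root access to it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, where the podman service listens."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])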
Oct 02 20:04:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:00 compute-0 ceph-mon[191910]: pgmap v1772: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:00 compute-0 nova_compute[355794]: 2025-10-02 20:04:00.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:01 compute-0 openstack_network_exporter[372736]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:01 compute-0 openstack_network_exporter[372736]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:04:01 compute-0 openstack_network_exporter[372736]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:01 compute-0 openstack_network_exporter[372736]: ERROR   20:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:04:01 compute-0 openstack_network_exporter[372736]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
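The ERROR lines above are expected noise on a compute node: openstack_network_exporter probes for control sockets of ovn-northd (which runs on controller nodes, not here) and a reachable ovsdb-server, and its dpif-netdev/* appctl calls fail because this host uses the kernel (system) datapath rather than a userspace netdev one. A quick check for the control sockets it is probing, with the standard rundir paths assumed:

    import glob

    # Control sockets are created as <daemon>.<pid>.ctl in the daemons'
    # rundirs (default locations assumed; adjust for non-standard builds).
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")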
Oct 02 20:04:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:01 compute-0 nova_compute[355794]: 2025-10-02 20:04:01.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:01 compute-0 nova_compute[355794]: 2025-10-02 20:04:01.579 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:02 compute-0 nova_compute[355794]: 2025-10-02 20:04:02.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:02 compute-0 nova_compute[355794]: 2025-10-02 20:04:02.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:04:02 compute-0 nova_compute[355794]: 2025-10-02 20:04:02.579 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:04:02 compute-0 ceph-mon[191910]: pgmap v1773: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:04:03
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes']
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:04:04 compute-0 nova_compute[355794]: 2025-10-02 20:04:04.310 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:04:04 compute-0 nova_compute[355794]: 2025-10-02 20:04:04.311 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:04:04 compute-0 nova_compute[355794]: 2025-10-02 20:04:04.311 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:04:04 compute-0 nova_compute[355794]: 2025-10-02 20:04:04.313 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:04:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:04:04 compute-0 nova_compute[355794]: 2025-10-02 20:04:04.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:04 compute-0 ceph-mon[191910]: pgmap v1774: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:05 compute-0 nova_compute[355794]: 2025-10-02 20:04:05.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:06 compute-0 ceph-mon[191910]: pgmap v1775: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:06 compute-0 nova_compute[355794]: 2025-10-02 20:04:06.984 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:04:07 compute-0 nova_compute[355794]: 2025-10-02 20:04:07.313 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:04:07 compute-0 nova_compute[355794]: 2025-10-02 20:04:07.314 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
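The network_info blob nova logged during the heal above is a list of VIF dicts with nested subnet and IP metadata. A minimal sketch that extracts the fixed and floating addresses from that structure (network_info.json is a hypothetical file holding the logged list):

    import json

    with open("network_info.json") as f:   # hypothetical dump of the logged list
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [fip["address"] for fip in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"],
                      "floating:", ", ".join(floating) or "-")

With the data above this prints: tap24e0cf3f-16 192.168.0.37 floating: 192.168.122.205.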
Oct 02 20:04:07 compute-0 nova_compute[355794]: 2025-10-02 20:04:07.316 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:07 compute-0 nova_compute[355794]: 2025-10-02 20:04:07.316 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:07 compute-0 nova_compute[355794]: 2025-10-02 20:04:07.317 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.085 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.086 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.086 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.087 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.087 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:04:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:04:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565688985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:04:08 compute-0 nova_compute[355794]: 2025-10-02 20:04:08.580 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:04:08 compute-0 ceph-mon[191910]: pgmap v1776: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3565688985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
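The resource audit shells out to the exact command in the log, ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf, and the free_disk value nova reports moments later (~59.955 GB) lines up with the cluster's pool-level availability. An illustrative sketch of that derivation, not necessarily nova's exact parsing, assuming the nova pool on this deployment is 'vms' (it appears in the balancer's pool list above):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(out)

    # 'vms' assumed to be nova's images_rbd_pool on this deployment
    vms = next(p for p in df["pools"] if p["name"] == "vms")
    print("free GiB:", vms["stats"]["max_avail"] / 2**30)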
Oct 02 20:04:08 compute-0 podman[447928]: 2025-10-02 20:04:08.698080215 +0000 UTC m=+0.120103806 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.081 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.082 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.082 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.690 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.692 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3844MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.694 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.694 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.799 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.801 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.801 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:04:09 compute-0 nova_compute[355794]: 2025-10-02 20:04:09.850 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:04:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:04:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487122681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.358 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.376 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.415 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.418 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.419 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:04:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:10 compute-0 ceph-mon[191910]: pgmap v1777: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3487122681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:04:10 compute-0 nova_compute[355794]: 2025-10-02 20:04:10.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:11 compute-0 nova_compute[355794]: 2025-10-02 20:04:11.679 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:11 compute-0 nova_compute[355794]: 2025-10-02 20:04:11.680 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:11 compute-0 nova_compute[355794]: 2025-10-02 20:04:11.717 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:12 compute-0 ceph-mon[191910]: pgmap v1778: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:04:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:14 compute-0 nova_compute[355794]: 2025-10-02 20:04:14.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:14 compute-0 ceph-mon[191910]: pgmap v1779: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:15 compute-0 nova_compute[355794]: 2025-10-02 20:04:15.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:16 compute-0 ceph-mon[191910]: pgmap v1780: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:18 compute-0 ceph-mon[191910]: pgmap v1781: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:18 compute-0 podman[447972]: 2025-10-02 20:04:18.725327604 +0000 UTC m=+0.140249150 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:04:18 compute-0 podman[447973]: 2025-10-02 20:04:18.742295507 +0000 UTC m=+0.152678263 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0)
Oct 02 20:04:19 compute-0 nova_compute[355794]: 2025-10-02 20:04:19.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:04:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/538832009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:04:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:04:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/538832009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:04:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:20 compute-0 ceph-mon[191910]: pgmap v1782: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/538832009' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:04:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/538832009' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:04:20 compute-0 nova_compute[355794]: 2025-10-02 20:04:20.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:22 compute-0 ceph-mon[191910]: pgmap v1783: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:23 compute-0 podman[448012]: 2025-10-02 20:04:23.732993596 +0000 UTC m=+0.148873900 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:04:23 compute-0 podman[448013]: 2025-10-02 20:04:23.740674175 +0000 UTC m=+0.148589393 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 20:04:24 compute-0 nova_compute[355794]: 2025-10-02 20:04:24.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:24 compute-0 ceph-mon[191910]: pgmap v1784: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:25 compute-0 nova_compute[355794]: 2025-10-02 20:04:25.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:26 compute-0 podman[448050]: 2025-10-02 20:04:26.722876228 +0000 UTC m=+0.138599353 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 20:04:26 compute-0 podman[448051]: 2025-10-02 20:04:26.749910113 +0000 UTC m=+0.158157053 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:04:26 compute-0 ceph-mon[191910]: pgmap v1785: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:26 compute-0 podman[448052]: 2025-10-02 20:04:26.781634642 +0000 UTC m=+0.183769688 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:04:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:28 compute-0 podman[448113]: 2025-10-02 20:04:28.680553353 +0000 UTC m=+0.111739651 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 20:04:28 compute-0 podman[448114]: 2025-10-02 20:04:28.703456898 +0000 UTC m=+0.123694445 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:04:28 compute-0 ceph-mon[191910]: pgmap v1786: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:29 compute-0 nova_compute[355794]: 2025-10-02 20:04:29.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:29 compute-0 podman[157186]: time="2025-10-02T20:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:04:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:04:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
Oct 02 20:04:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:30 compute-0 nova_compute[355794]: 2025-10-02 20:04:30.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:30 compute-0 ceph-mon[191910]: pgmap v1787: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:31 compute-0 openstack_network_exporter[372736]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:31 compute-0 openstack_network_exporter[372736]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:31 compute-0 openstack_network_exporter[372736]: ERROR   20:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:04:31 compute-0 openstack_network_exporter[372736]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:04:31 compute-0 openstack_network_exporter[372736]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:04:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:32.318 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:04:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:32.319 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:04:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:32.320 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:04:32 compute-0 ceph-mon[191910]: pgmap v1788: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:04:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:04:33 compute-0 ceph-mon[191910]: pgmap v1789: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:34 compute-0 nova_compute[355794]: 2025-10-02 20:04:34.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:35.757 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:04:35 compute-0 nova_compute[355794]: 2025-10-02 20:04:35.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:35.760 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:04:35 compute-0 nova_compute[355794]: 2025-10-02 20:04:35.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:36 compute-0 ceph-mon[191910]: pgmap v1790: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:04:36.763 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:04:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:38 compute-0 ceph-mon[191910]: pgmap v1791: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:39 compute-0 nova_compute[355794]: 2025-10-02 20:04:39.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:39 compute-0 podman[448156]: 2025-10-02 20:04:39.7022915 +0000 UTC m=+0.118048580 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:04:40 compute-0 sudo[448178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:40 compute-0 sudo[448178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:40 compute-0 sudo[448178]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:40 compute-0 sudo[448203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:04:40 compute-0 sudo[448203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:40 compute-0 sudo[448203]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:40 compute-0 sudo[448228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:40 compute-0 sudo[448228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:40 compute-0 sudo[448228]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:40 compute-0 ceph-mon[191910]: pgmap v1792: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:04:40 compute-0 sudo[448253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:04:40 compute-0 sudo[448253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:40 compute-0 nova_compute[355794]: 2025-10-02 20:04:40.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:41 compute-0 sudo[448253]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 90 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6598e166-e8c0-4478-b4d9-81302d31b0d8 does not exist
Oct 02 20:04:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 084c973f-628a-4d19-92e5-4c96c7857b21 does not exist
Oct 02 20:04:41 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 83e894a2-591c-495f-b71f-ddab6cc44dd0 does not exist
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:04:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 02 20:04:41 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 02 20:04:41 compute-0 sudo[448309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:41 compute-0 sudo[448309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:41 compute-0 sudo[448309]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:41 compute-0 sudo[448334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:04:41 compute-0 sudo[448334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:41 compute-0 sudo[448334]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:42 compute-0 sudo[448359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:42 compute-0 sudo[448359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:42 compute-0 sudo[448359]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:42 compute-0 sudo[448384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:04:42 compute-0 sudo[448384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:42 compute-0 ceph-mon[191910]: pgmap v1793: 321 pgs: 321 active+clean; 90 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct 02 20:04:42 compute-0 ceph-mon[191910]: osdmap e135: 3 total, 3 up, 3 in
Oct 02 20:04:42 compute-0 podman[448446]: 2025-10-02 20:04:42.908065018 +0000 UTC m=+0.104383589 container create 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:04:42 compute-0 podman[448446]: 2025-10-02 20:04:42.867223949 +0000 UTC m=+0.063542570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:42 compute-0 systemd[1]: Started libpod-conmon-6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba.scope.
Oct 02 20:04:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:43 compute-0 podman[448446]: 2025-10-02 20:04:43.047847367 +0000 UTC m=+0.244165998 container init 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:04:43 compute-0 podman[448446]: 2025-10-02 20:04:43.06707253 +0000 UTC m=+0.263391111 container start 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:04:43 compute-0 podman[448446]: 2025-10-02 20:04:43.072923779 +0000 UTC m=+0.269242410 container attach 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:04:43 compute-0 silly_curie[448462]: 167 167
Oct 02 20:04:43 compute-0 systemd[1]: libpod-6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba.scope: Deactivated successfully.
Oct 02 20:04:43 compute-0 podman[448446]: 2025-10-02 20:04:43.081990769 +0000 UTC m=+0.278309320 container died 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 20:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97ea62a129643f38ebbb6656429ed49ebf6221c04933989a97b410a6d1cbec0-merged.mount: Deactivated successfully.
Oct 02 20:04:43 compute-0 podman[448446]: 2025-10-02 20:04:43.171260464 +0000 UTC m=+0.367579035 container remove 6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:04:43 compute-0 systemd[1]: libpod-conmon-6c7b335c33df55b484afb7cff0ef8f3a4749e51df67e1e950ef4279036dd4cba.scope: Deactivated successfully.
Oct 02 20:04:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 106 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.8 MiB/s wr, 18 op/s
Oct 02 20:04:43 compute-0 podman[448486]: 2025-10-02 20:04:43.502739473 +0000 UTC m=+0.105768080 container create 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:04:43 compute-0 podman[448486]: 2025-10-02 20:04:43.462844825 +0000 UTC m=+0.065873562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:43 compute-0 systemd[1]: Started libpod-conmon-5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae.scope.
Oct 02 20:04:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 02 20:04:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 02 20:04:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:43 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 02 20:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:43 compute-0 podman[448486]: 2025-10-02 20:04:43.684329891 +0000 UTC m=+0.287358548 container init 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:04:43 compute-0 podman[448486]: 2025-10-02 20:04:43.700900666 +0000 UTC m=+0.303929273 container start 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 20:04:43 compute-0 podman[448486]: 2025-10-02 20:04:43.707554533 +0000 UTC m=+0.310583130 container attach 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:04:44 compute-0 nova_compute[355794]: 2025-10-02 20:04:44.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:44 compute-0 ceph-mon[191910]: pgmap v1795: 321 pgs: 321 active+clean; 106 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.8 MiB/s wr, 18 op/s
Oct 02 20:04:44 compute-0 ceph-mon[191910]: osdmap e136: 3 total, 3 up, 3 in
Oct 02 20:04:45 compute-0 lucid_chaplygin[448502]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:04:45 compute-0 lucid_chaplygin[448502]: --> relative data size: 1.0
Oct 02 20:04:45 compute-0 lucid_chaplygin[448502]: --> All data devices are unavailable
Oct 02 20:04:45 compute-0 systemd[1]: libpod-5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae.scope: Deactivated successfully.
Oct 02 20:04:45 compute-0 systemd[1]: libpod-5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae.scope: Consumed 1.400s CPU time.
Oct 02 20:04:45 compute-0 podman[448531]: 2025-10-02 20:04:45.287743267 +0000 UTC m=+0.056446634 container died 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f713d561672e295b8d1a66c2a942b5287ba6f43a29bc3a275901bef184b633d-merged.mount: Deactivated successfully.
Oct 02 20:04:45 compute-0 podman[448531]: 2025-10-02 20:04:45.418453615 +0000 UTC m=+0.187156982 container remove 5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 20:04:45 compute-0 systemd[1]: libpod-conmon-5a87c8a4346d9ae78e6cc6ae4da22f6a51fa0c0c5d5fa3650a26b17e680ad8ae.scope: Deactivated successfully.
Oct 02 20:04:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 106 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 MiB/s wr, 44 op/s
Oct 02 20:04:45 compute-0 sudo[448384]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:45 compute-0 sudo[448545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:45 compute-0 sudo[448545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:45 compute-0 sudo[448545]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:45 compute-0 nova_compute[355794]: 2025-10-02 20:04:45.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:45 compute-0 sudo[448570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:04:45 compute-0 sudo[448570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:45 compute-0 sudo[448570]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:45 compute-0 sudo[448595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:45 compute-0 sudo[448595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:45 compute-0 sudo[448595]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:46 compute-0 sudo[448620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:04:46 compute-0 sudo[448620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:46 compute-0 ceph-mon[191910]: pgmap v1797: 321 pgs: 321 active+clean; 106 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 MiB/s wr, 44 op/s
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.800218308 +0000 UTC m=+0.094979252 container create b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.763471269 +0000 UTC m=+0.058232293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:46 compute-0 systemd[1]: Started libpod-conmon-b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015.scope.
Oct 02 20:04:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.953086574 +0000 UTC m=+0.247847538 container init b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.97468726 +0000 UTC m=+0.269448234 container start b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.981936449 +0000 UTC m=+0.276697413 container attach b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 20:04:46 compute-0 compassionate_fermi[448699]: 167 167
Oct 02 20:04:46 compute-0 systemd[1]: libpod-b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015.scope: Deactivated successfully.
Oct 02 20:04:46 compute-0 podman[448683]: 2025-10-02 20:04:46.987276857 +0000 UTC m=+0.282037821 container died b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ccd3d9779f2dd5e13acf346aed34fc151841991f861cf3060df7e515f34c80c-merged.mount: Deactivated successfully.
Oct 02 20:04:47 compute-0 podman[448683]: 2025-10-02 20:04:47.071086812 +0000 UTC m=+0.365847786 container remove b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:04:47 compute-0 systemd[1]: libpod-conmon-b9a0e6ef17d29bbe928996e524197b2c3f50befa76fd9b9f6acb6a0b88929015.scope: Deactivated successfully.
Oct 02 20:04:47 compute-0 podman[448722]: 2025-10-02 20:04:47.387246714 +0000 UTC m=+0.087600160 container create 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:04:47 compute-0 podman[448722]: 2025-10-02 20:04:47.357367136 +0000 UTC m=+0.057720642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:47 compute-0 systemd[1]: Started libpod-conmon-458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8.scope.
Oct 02 20:04:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 114 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.6 MiB/s wr, 47 op/s
Oct 02 20:04:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd6a56cf99893f6440384c94da0c4ca3b95e5ecbf112038200f6043095343db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd6a56cf99893f6440384c94da0c4ca3b95e5ecbf112038200f6043095343db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd6a56cf99893f6440384c94da0c4ca3b95e5ecbf112038200f6043095343db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd6a56cf99893f6440384c94da0c4ca3b95e5ecbf112038200f6043095343db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:47 compute-0 podman[448722]: 2025-10-02 20:04:47.575750265 +0000 UTC m=+0.276103751 container init 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:04:47 compute-0 podman[448722]: 2025-10-02 20:04:47.61914547 +0000 UTC m=+0.319498916 container start 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:04:47 compute-0 podman[448722]: 2025-10-02 20:04:47.627195137 +0000 UTC m=+0.327548583 container attach 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]: {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     "0": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "devices": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "/dev/loop3"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             ],
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_name": "ceph_lv0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_size": "21470642176",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "name": "ceph_lv0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "tags": {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_name": "ceph",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.crush_device_class": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.encrypted": "0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_id": "0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.vdo": "0"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             },
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "vg_name": "ceph_vg0"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         }
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     ],
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     "1": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "devices": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "/dev/loop4"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             ],
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_name": "ceph_lv1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_size": "21470642176",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "name": "ceph_lv1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "tags": {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_name": "ceph",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.crush_device_class": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.encrypted": "0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_id": "1",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.vdo": "0"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             },
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "vg_name": "ceph_vg1"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         }
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     ],
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     "2": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "devices": [
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "/dev/loop5"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             ],
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_name": "ceph_lv2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_size": "21470642176",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "name": "ceph_lv2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "tags": {
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.cluster_name": "ceph",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.crush_device_class": "",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.encrypted": "0",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osd_id": "2",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:                 "ceph.vdo": "0"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             },
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "type": "block",
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:             "vg_name": "ceph_vg2"
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:         }
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]:     ]
Oct 02 20:04:48 compute-0 blissful_goldstine[448738]: }
Oct 02 20:04:48 compute-0 systemd[1]: libpod-458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8.scope: Deactivated successfully.
Oct 02 20:04:48 compute-0 podman[448722]: 2025-10-02 20:04:48.504345102 +0000 UTC m=+1.204698548 container died 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bd6a56cf99893f6440384c94da0c4ca3b95e5ecbf112038200f6043095343db-merged.mount: Deactivated successfully.
Oct 02 20:04:48 compute-0 podman[448722]: 2025-10-02 20:04:48.611565113 +0000 UTC m=+1.311918549 container remove 458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldstine, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:04:48 compute-0 systemd[1]: libpod-conmon-458b902f302d93d3a6254b1ff3b72b92a76a909ba06b50d7c6a1df26fb8f35c8.scope: Deactivated successfully.
Oct 02 20:04:48 compute-0 sudo[448620]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:48 compute-0 ceph-mon[191910]: pgmap v1798: 321 pgs: 321 active+clean; 114 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.6 MiB/s wr, 47 op/s
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.694202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488694290, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2066, "num_deletes": 251, "total_data_size": 3476642, "memory_usage": 3531328, "flush_reason": "Manual Compaction"}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488719282, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3410615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34758, "largest_seqno": 36823, "table_properties": {"data_size": 3401089, "index_size": 6084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18970, "raw_average_key_size": 20, "raw_value_size": 3382140, "raw_average_value_size": 3594, "num_data_blocks": 270, "num_entries": 941, "num_filter_entries": 941, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435258, "oldest_key_time": 1759435258, "file_creation_time": 1759435488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 25258 microseconds, and 16610 cpu microseconds.
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.719452) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3410615 bytes OK
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.719489) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.724689) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.724730) EVENT_LOG_v1 {"time_micros": 1759435488724717, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.724759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3467977, prev total WAL file size 3467977, number of live WAL files 2.
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.726796) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3330KB)], [80(7054KB)]
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488726871, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10634091, "oldest_snapshot_seqno": -1}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5686 keys, 8897770 bytes, temperature: kUnknown
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488779846, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8897770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8859921, "index_size": 22507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 143465, "raw_average_key_size": 25, "raw_value_size": 8757283, "raw_average_value_size": 1540, "num_data_blocks": 925, "num_entries": 5686, "num_filter_entries": 5686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.780209) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8897770 bytes
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.781899) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.1 rd, 167.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6204, records dropped: 518 output_compression: NoCompression
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.781915) EVENT_LOG_v1 {"time_micros": 1759435488781906, "job": 46, "event": "compaction_finished", "compaction_time_micros": 53144, "compaction_time_cpu_micros": 33279, "output_level": 6, "num_output_files": 1, "total_output_size": 8897770, "num_input_records": 6204, "num_output_records": 5686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488782591, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435488783734, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.726550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.783870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.783879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.783881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.783883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:04:48.783886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:04:48 compute-0 sudo[448758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:48 compute-0 sudo[448758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:48 compute-0 sudo[448758]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:49 compute-0 podman[448782]: 2025-10-02 20:04:49.019085456 +0000 UTC m=+0.125051085 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:04:49 compute-0 podman[448783]: 2025-10-02 20:04:49.030049997 +0000 UTC m=+0.138152233 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:04:49 compute-0 sudo[448797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:04:49 compute-0 sudo[448797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:49 compute-0 sudo[448797]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:49 compute-0 sudo[448851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:49 compute-0 sudo[448851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:49 compute-0 sudo[448851]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:49 compute-0 sudo[448877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:04:49 compute-0 sudo[448877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:49 compute-0 nova_compute[355794]: 2025-10-02 20:04:49.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 40 op/s
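The recurring pgmap lines compress cluster state into one record: pg count and states, data/used/available capacity, and client throughput. A rough parser for lines in this shape, assuming the format stays exactly as above (a convenience sketch, not a stable Ceph interface):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v1799: 321 pgs: 321 active+clean; 118 MiB data, "
            "300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 40 op/s")
    m = PGMAP.search(line)
    if m:
        print(m.group("ver"), m.group("states").strip(), m.group("avail"))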
Oct 02 20:04:49 compute-0 podman[448942]: 2025-10-02 20:04:49.930643787 +0000 UTC m=+0.088781806 container create e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 20:04:49 compute-0 podman[448942]: 2025-10-02 20:04:49.904699456 +0000 UTC m=+0.062837475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:50 compute-0 systemd[1]: Started libpod-conmon-e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0.scope.
Oct 02 20:04:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:50 compute-0 podman[448942]: 2025-10-02 20:04:50.097198814 +0000 UTC m=+0.255336893 container init e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:04:50 compute-0 podman[448942]: 2025-10-02 20:04:50.120798724 +0000 UTC m=+0.278936743 container start e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:04:50 compute-0 podman[448942]: 2025-10-02 20:04:50.128721078 +0000 UTC m=+0.286859147 container attach e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:04:50 compute-0 elegant_proskuriakova[448958]: 167 167
Oct 02 20:04:50 compute-0 systemd[1]: libpod-e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0.scope: Deactivated successfully.
Oct 02 20:04:50 compute-0 podman[448942]: 2025-10-02 20:04:50.137572773 +0000 UTC m=+0.295710812 container died e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9288b33191b3205a2ee1b2354086cb823da365394eab97bdc7889fd36269db5a-merged.mount: Deactivated successfully.
Oct 02 20:04:50 compute-0 podman[448942]: 2025-10-02 20:04:50.233770322 +0000 UTC m=+0.391908341 container remove e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:04:50 compute-0 systemd[1]: libpod-conmon-e621ee1ff225fafd6933a1148e6045c145b4cd982fc15911223cb8c4b9a1a8e0.scope: Deactivated successfully.
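The create, init, start, attach, died, remove sequence just above is a one-shot cephadm helper container (the auto-generated name elegant_proskuriakova gives it away). The same lifecycle can be watched live through podman's event stream; a minimal sketch, assuming podman 4.x with JSON event output:

    import json
    import subprocess

    # Stream container lifecycle events as JSON, one object per line,
    # mirroring the create/init/start/attach/died/remove entries above.
    # Runs until interrupted.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), ev.get("Image"))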
Oct 02 20:04:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:50 compute-0 podman[448984]: 2025-10-02 20:04:50.537822987 +0000 UTC m=+0.086235700 container create 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:04:50 compute-0 podman[448984]: 2025-10-02 20:04:50.505240309 +0000 UTC m=+0.053653052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:04:50 compute-0 systemd[1]: Started libpod-conmon-3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637.scope.
Oct 02 20:04:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d06a0f2cf9f98335d35563921bb340a1751a75791fca2edd668e129b97f0ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d06a0f2cf9f98335d35563921bb340a1751a75791fca2edd668e129b97f0ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d06a0f2cf9f98335d35563921bb340a1751a75791fca2edd668e129b97f0ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d06a0f2cf9f98335d35563921bb340a1751a75791fca2edd668e129b97f0ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:04:50 compute-0 podman[448984]: 2025-10-02 20:04:50.700326475 +0000 UTC m=+0.248739198 container init 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:04:50 compute-0 ceph-mon[191910]: pgmap v1799: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 40 op/s
Oct 02 20:04:50 compute-0 podman[448984]: 2025-10-02 20:04:50.715655362 +0000 UTC m=+0.264068055 container start 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:04:50 compute-0 podman[448984]: 2025-10-02 20:04:50.721775967 +0000 UTC m=+0.270188660 container attach 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:04:50 compute-0 nova_compute[355794]: 2025-10-02 20:04:50.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 32 op/s
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]: {
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_id": 1,
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "type": "bluestore"
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     },
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_id": 2,
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "type": "bluestore"
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     },
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_id": 0,
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:         "type": "bluestore"
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]:     }
Oct 02 20:04:51 compute-0 thirsty_hawking[449000]: }
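The JSON printed by the thirsty_hawking container is the result of the ceph-volume raw list --format json call issued under sudo at 20:04:49: a map of OSD UUIDs to their backing devices for fsid 6019f664-a1c2-5955-8391-692cb79a59f9. A short sketch of consuming that output; calling ceph-volume directly instead of through the cephadm container wrapper is an assumption that only holds on a node with the OSD tooling installed:

    import json
    import subprocess

    # Enumerate raw OSD devices the same way cephadm does above.
    out = subprocess.run(
        ["ceph-volume", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for osd_uuid, info in json.loads(out).items():
        print(f"osd.{info['osd_id']} {info['type']} on {info['device']} "
              f"(uuid {osd_uuid})")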
Oct 02 20:04:51 compute-0 systemd[1]: libpod-3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637.scope: Deactivated successfully.
Oct 02 20:04:51 compute-0 systemd[1]: libpod-3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637.scope: Consumed 1.257s CPU time.
Oct 02 20:04:51 compute-0 podman[448984]: 2025-10-02 20:04:51.973010738 +0000 UTC m=+1.521423471 container died 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-96d06a0f2cf9f98335d35563921bb340a1751a75791fca2edd668e129b97f0ea-merged.mount: Deactivated successfully.
Oct 02 20:04:52 compute-0 podman[448984]: 2025-10-02 20:04:52.068693455 +0000 UTC m=+1.617106188 container remove 3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:04:52 compute-0 systemd[1]: libpod-conmon-3ae0351ac7ad111234567d09eb371e2511139f3c4388576b084cd2d4979c1637.scope: Deactivated successfully.
Oct 02 20:04:52 compute-0 sudo[448877]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:04:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:04:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev aaf7092b-dc8b-484d-9672-4b89d0a15be1 does not exist
Oct 02 20:04:52 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev aa420cab-9941-4742-9344-a533d95750bf does not exist
Oct 02 20:04:52 compute-0 sudo[449045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:04:52 compute-0 sudo[449045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:52 compute-0 sudo[449045]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:52 compute-0 sudo[449070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:04:52 compute-0 sudo[449070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:04:52 compute-0 sudo[449070]: pam_unix(sudo:session): session closed for user root
Oct 02 20:04:52 compute-0 ceph-mon[191910]: pgmap v1800: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.9 MiB/s wr, 32 op/s
Oct 02 20:04:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:52 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:04:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 MiB/s wr, 19 op/s
Oct 02 20:04:54 compute-0 nova_compute[355794]: 2025-10-02 20:04:54.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:54 compute-0 podman[449095]: 2025-10-02 20:04:54.735751238 +0000 UTC m=+0.141247422 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:04:54 compute-0 podman[449096]: 2025-10-02 20:04:54.736961014 +0000 UTC m=+0.142118340 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-type=git, container_name=kepler, release-0.7.12=, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543)
Oct 02 20:04:54 compute-0 ceph-mon[191910]: pgmap v1801: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 MiB/s wr, 19 op/s
Oct 02 20:04:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:04:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 16 op/s
Oct 02 20:04:55 compute-0 nova_compute[355794]: 2025-10-02 20:04:55.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:56 compute-0 ceph-mon[191910]: pgmap v1802: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 16 op/s
Oct 02 20:04:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Oct 02 20:04:57 compute-0 podman[449135]: 2025-10-02 20:04:57.744428751 +0000 UTC m=+0.165254000 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 20:04:57 compute-0 podman[449136]: 2025-10-02 20:04:57.758412889 +0000 UTC m=+0.166165240 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:04:57 compute-0 podman[449137]: 2025-10-02 20:04:57.779020463 +0000 UTC m=+0.178775828 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 20:04:58 compute-0 ceph-mon[191910]: pgmap v1803: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Oct 02 20:04:59 compute-0 nova_compute[355794]: 2025-10-02 20:04:59.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s wr, 0 op/s
Oct 02 20:04:59 compute-0 podman[449195]: 2025-10-02 20:04:59.717133768 +0000 UTC m=+0.138678005 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Oct 02 20:04:59 compute-0 podman[449196]: 2025-10-02 20:04:59.720650805 +0000 UTC m=+0.132111460 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
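node_exporter above publishes on host port 9100 (per the 'ports' mapping) with a web config file for TLS. A quick scrape sketch, assuming plain HTTP is reachable on localhost; if the web config enforces TLS, the scheme and certificates would need to change:

    import urllib.request

    # Fetch the Prometheus text exposition from node_exporter and show
    # a few sample lines; 9100 comes from the config_data above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_load"):
                print(line)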
Oct 02 20:04:59 compute-0 podman[157186]: time="2025-10-02T20:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:04:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:04:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9064 "" "Go-http-client/1.1"
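The two GET requests above are served by the podman system service over the unix socket that podman_exporter mounts (CONTAINER_HOST=unix:///run/podman/podman.sock). The same libpod endpoint can be queried directly with the standard library; root access to the socket is assumed:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])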
Oct 02 20:05:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:00 compute-0 nova_compute[355794]: 2025-10-02 20:05:00.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:00 compute-0 nova_compute[355794]: 2025-10-02 20:05:00.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:05:00 compute-0 ceph-mon[191910]: pgmap v1804: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s wr, 0 op/s
Oct 02 20:05:00 compute-0 nova_compute[355794]: 2025-10-02 20:05:00.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:01 compute-0 openstack_network_exporter[372736]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:01 compute-0 openstack_network_exporter[372736]: ERROR   20:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:05:01 compute-0 openstack_network_exporter[372736]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:05:01 compute-0 openstack_network_exporter[372736]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
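The appctl errors above are expected on a compute node: openstack_network_exporter probes ovn-northd and the OVS db server through their unix control sockets, which only exist on hosts where those daemons actually run. A quick check for the sockets the exporter looks for; the run directories below are typical defaults and may differ per deployment:

    import glob

    # ovs-appctl/ovn-appctl targets advertise a *.ctl control socket in
    # their run directory; when nothing matches, the exporter logs the
    # "no control socket files found" errors seen above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control sockets")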
Oct 02 20:05:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:01 compute-0 nova_compute[355794]: 2025-10-02 20:05:01.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:02 compute-0 ceph-mon[191910]: pgmap v1805: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:03 compute-0 nova_compute[355794]: 2025-10-02 20:05:03.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:03 compute-0 nova_compute[355794]: 2025-10-02 20:05:03.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:05:03 compute-0 nova_compute[355794]: 2025-10-02 20:05:03.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:05:03
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'cephfs.cephfs.data']
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.302 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.303 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.315 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.319 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:05:04.319480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{'inspect_disks': {}}], pollster history [{'disk.device.read.requests': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
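The burst of "Registering pollster" lines above shows the agent walking its stevedore entry points: each Extension object wraps one pollster plugin, and all of them share a single ThreadPoolExecutor plus the same cache, pollster-history, and discovery-cache dictionaries. A minimal sketch of enumerating such plugins with stevedore; the namespace string is an assumption for illustration and requires ceilometer to be installed:

    from stevedore import extension

    # Walk an entry-point namespace the way a stevedore-based agent does.
    # 'ceilometer.poll.compute' is assumed here for illustration.
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',
        invoke_on_load=True,  # instantiate each plugin on discovery
    )
    for ext in mgr:
        # ext.name is the meter name (e.g. 'disk.device.read.requests');
        # ext.obj is the pollster instance, as seen in the log above.
        print(ext.name, type(ext.obj).__name__)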
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:05:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:05:04 compute-0 nova_compute[355794]: 2025-10-02 20:05:04.343 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:04 compute-0 nova_compute[355794]: 2025-10-02 20:05:04.344 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:04 compute-0 nova_compute[355794]: 2025-10-02 20:05:04.345 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:05:04 compute-0 nova_compute[355794]: 2025-10-02 20:05:04.346 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:04 compute-0 nova_compute[355794]: 2025-10-02 20:05:04.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
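The nova_compute lines above show the standard oslo.concurrency idiom: acquire a named lock ("refresh_cache-<instance uuid>"), refresh the network info cache under it, then release. A hedged sketch of that locking pattern; the body is a placeholder, not nova's code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77'

    # lockutils.lock() emits the same "Acquiring"/"Acquired" debug lines
    # seen above when oslo debug logging is enabled.
    with lockutils.lock('refresh_cache-' + INSTANCE_UUID):
        # Placeholder for the network-info cache refresh done under the lock.
        pass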
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.395 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.396 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.397 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
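The three disk.device.read.requests samples above (840, 173, 109) are one cumulative counter per block device attached to instance d4e04444. A hedged sketch of reading such counters with python-libvirt, roughly what a compute agent's libvirt inspector does; the connection URI and device names are assumptions:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

    for dev in ('vda', 'vdb', 'vdc'):  # assumed device names, one per sample
        # blockStats() returns cumulative (rd_req, rd_bytes, wr_req,
        # wr_bytes, errs) since the domain started.
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'disk.device.read.requests =', rd_req)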
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:05:04.399117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
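Throughout this cycle, thread 14 emits "Polster heartbeat update: <meter>" ("Polster" is the agent's own log string, reproduced verbatim) and thread 12 follows with "Updated heartbeat for <meter> (<timestamp>)". The two-thread pairing suggests a producer/consumer heartbeat registry; the following is a minimal sketch under that assumption, not the agent's actual implementation:

    import datetime
    import queue
    import threading
    import time

    heartbeats = {}
    updates = queue.Queue()

    def heartbeat(name):
        # Producer (polling thread): enqueue a timestamped beat.
        updates.put((name, datetime.datetime.now()))

    def _update_status():
        # Consumer (status thread): record the newest beat per meter.
        while True:
            name, ts = updates.get()
            heartbeats[name] = ts
            print('Updated heartbeat for %s (%s)' % (name, ts.isoformat()))

    threading.Thread(target=_update_status, daemon=True).start()
    heartbeat('disk.device.usage')
    time.sleep(0.2)  # give the consumer a chance to drain before exit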
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.432 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.433 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.433 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
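The disk.device.usage samples above report per-device allocation in bytes: 1073741824 is exactly 1 GiB (2**30), and the 485376-byte device is plausibly a small auxiliary image such as a config drive (an assumption). libvirt exposes these numbers through blockInfo(); a hedged sketch, with URI and device name assumed as before:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

    # blockInfo() returns (logical capacity, current allocation, physical
    # size) in bytes for one device; 'vda' is an assumed name.
    capacity, allocation, physical = dom.blockInfo('vda')
    print('usage bytes:', allocation, '(1 GiB =', 1 << 30, ')')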
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
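Every pollster in this cycle logs that its source does not require coordination and that the current hash rings are [None], i.e. this single agent polls everything locally. When coordination is enabled, agents typically partition resources over a consistent hash ring so each instance is polled by exactly one agent; a hedged sketch with the tooz library (the coordination layer this stack builds on), with the node names assumed:

    from tooz import hashring

    # Two hypothetical compute agents sharing one polling workload.
    ring = hashring.HashRing(['compute-0', 'compute-1'])

    # get_nodes() maps a resource key to the agent(s) that own it; keys
    # must be bytes. Here: which agent polls instance d4e04444...?
    owners = ring.get_nodes(b'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    print(owners)  # a set containing exactly one of the two agents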
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:05:04.435366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.438 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.439 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.439 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:05:04.438595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
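disk.device.write.latency above is cumulative time, not an average: 7285327854 is nanoseconds spent on writes since the device was attached. A mean per-request latency requires dividing by the matching write-request counter; a hedged sketch using libvirt's extended block statistics (the dictionary keys are the documented blockStatsFlags fields, the device name is assumed):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

    stats = dom.blockStatsFlags('vda')  # assumed device name
    if stats.get('wr_operations'):
        # wr_total_times is cumulative nanoseconds spent on writes.
        mean_ns = stats['wr_total_times'] / stats['wr_operations']
        print('mean write latency: %.0f ns' % mean_ns)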
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.441 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:05:04.441665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.479 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
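power.state volume 1 above is the libvirt domain state code; 1 is VIR_DOMAIN_RUNNING, so the instance is up. A short sketch of reading it directly, with the same assumed URI:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

    state, reason = dom.state()
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1 True for a running guest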
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.481 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.481 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:05:04.481474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.484 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:05:04.485126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.491 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
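network.incoming.bytes.delta above is 0 because it is the difference between two successive cumulative rx counters and the interface was idle between polls (compare the cumulative network.incoming.bytes of 2730 later in this cycle). A hedged sketch of that delta computation with libvirt interface statistics; the tap device name and the 10-second interval are assumptions:

    import time
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    IFACE = 'tap0'  # assumed guest interface name

    rx0 = dom.interfaceStats(IFACE)[0]  # cumulative rx_bytes, first poll
    time.sleep(10)                      # assumed polling interval
    rx1 = dom.interfaceStats(IFACE)[0]  # cumulative rx_bytes, next poll
    print('network.incoming.bytes.delta =', rx1 - rx0)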
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:05:04.493287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.495 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.495 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.496 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.498 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.498 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:05:04.496286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:05:04.498097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.501 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:05:04.501600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:05:04.504729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.508 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:05:04.507307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.511 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:05:04.509731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.513 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.513 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.514 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:05:04.513587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.515 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.517 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:05:04.515814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.519 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.519 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.520 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:05:04.519881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
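memory.usage above (48.8828125, reported in MB) comes from guest memory statistics rather than host RSS alone. A hedged sketch with libvirt's memoryStats(); which keys appear depends on the guest balloon driver, and the "available minus unused" arithmetic is an assumption about how the figure is derived:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

    mem = dom.memoryStats()  # values are in KiB
    if 'available' in mem and 'unused' in mem:
        usage_mb = (mem['available'] - mem['unused']) / 1024.0
        print('memory.usage: %.7f MB' % usage_mb)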
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.522 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:05:04.522098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.523 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.524 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:05:04.524323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.526 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.527 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.528 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:05:04.526694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
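[editor's note] One instance yields one disk.device.capacity sample per attached device: two devices above report exactly 1 GiB and the third a much smaller size (plausibly a config drive, though that is an assumption):

    print(1073741824 / 2**30)  # 1.0 -> 1 GiB per virtual disk
    print(485376 / 2**10)      # 474.0 -> the third device is 474 KiB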
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.530 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:05:04.530082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.532 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.533 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:05:04.532588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.535 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 52870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
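[editor's note] The cpu sample's volume is cumulative guest CPU time in nanoseconds, so 52870000000 is about 52.87 s of CPU time since boot. A utilisation percentage is conventionally derived from two successive samples; a sketch assuming a 10 s interval and the 1 vCPU this instance is allocated later in this log:

    prev_ns = 52_870_000_000 - 1_000_000_000  # hypothetical earlier sample
    cur_ns, interval_s, vcpus = 52_870_000_000, 10.0, 1
    cpu_util = (cur_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100
    print(cpu_util)  # 10.0 -> the guest used 10% of one vCPU over the interval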
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.536 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:05:04.535119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.539 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:05:04.537527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:05:04.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
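[editor's note] The burst of "Finished processing pollster [...]" lines is the polling task draining its pollster list. A hedged, self-contained sketch of that loop (illustrative names, not ceilometer's actual code):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("polling.manager")

    def execute_polling_task(pollsters, poll_one):
        for name in pollsters:
            poll_one(name)
            LOG.debug("Finished processing pollster [%s].", name)

    execute_polling_task(["cpu", "disk.device.capacity"], lambda name: None)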
Oct 02 20:05:04 compute-0 ceph-mon[191910]: pgmap v1806: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:05 compute-0 nova_compute[355794]: 2025-10-02 20:05:05.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:05 compute-0 ceph-mon[191910]: pgmap v1807: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:05 compute-0 ovn_controller[88435]: 2025-10-02T20:05:05Z|00061|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
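[editor's note] ovn-controller trims its allocator after ~30 s of inactivity. To see what the daemon is holding, the generic OVS memory counters can be queried; a sketch (memory/show is the standard ovs-appctl command, but its exact fields vary by build):

    import subprocess

    out = subprocess.check_output(
        ["ovs-appctl", "-t", "ovn-controller", "memory/show"])
    print(out.decode())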
Oct 02 20:05:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:08 compute-0 ceph-mon[191910]: pgmap v1808: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:09 compute-0 nova_compute[355794]: 2025-10-02 20:05:09.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:10 compute-0 ceph-mon[191910]: pgmap v1809: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:10 compute-0 podman[449239]: 2025-10-02 20:05:10.719957969 +0000 UTC m=+0.138042340 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
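[editor's note] The health_status=healthy records come from podman periodically executing the configured test ('/openstack/healthcheck', bind-mounted from the host). The same check can be run by hand; a sketch, with the container name taken from the log line:

    import subprocess

    rc = subprocess.call(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if rc == 0 else "unhealthy")  # exit 0 means the test passed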
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.776 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.819 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.820 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
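[editor's note] The info_cache JSON above carries both the fixed and the floating address for the instance's single VIF; pulling them out of that structure (abridged to the relevant keys):

    vif = {"network": {"subnets": [{"ips": [{
        "address": "192.168.0.37",
        "floating_ips": [{"address": "192.168.122.205"}]}]}]}}
    ip = vif["network"]["subnets"][0]["ips"][0]
    print(ip["address"], ip["floating_ips"][0]["address"])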
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.821 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.822 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.823 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.824 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.825 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.872 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.872 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.873 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.873 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:05:10 compute-0 nova_compute[355794]: 2025-10-02 20:05:10.874 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:05:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613593326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:11 compute-0 nova_compute[355794]: 2025-10-02 20:05:11.407 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
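[editor's note] Nova's resource audit shells out to ceph df to size the RBD backend. The same call can be reproduced and parsed directly; a sketch (the JSON field names are ceph's usual ones, but they are an assumption here since the log does not show the command's output):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    stats = df["stats"]  # assumed keys: total_bytes, total_avail_bytes
    print(stats["total_bytes"], stats["total_avail_bytes"])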
Oct 02 20:05:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:11 compute-0 nova_compute[355794]: 2025-10-02 20:05:11.530 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:05:11 compute-0 nova_compute[355794]: 2025-10-02 20:05:11.530 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:05:11 compute-0 nova_compute[355794]: 2025-10-02 20:05:11.530 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:05:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2613593326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.060 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.063 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3833MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.063 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.064 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.204 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.205 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.205 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.221 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.251 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.251 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
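[editor's note] Placement turns each inventory into schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical vCPUs back the free/used figures reported above:

    print((8 - 0) * 4.0)       # 32.0 schedulable VCPU
    print((7679 - 512) * 1.0)  # 7167.0 schedulable MEMORY_MB
    print((59 - 1) * 0.9)      # 52.2 schedulable DISK_GB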
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.278 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.300 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.355 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:12 compute-0 ceph-mon[191910]: pgmap v1810: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:05:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380066719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.891 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.904 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.939 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.943 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:05:12 compute-0 nova_compute[355794]: 2025-10-02 20:05:12.944 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
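[editor's note] Each pg_autoscaler "pg target" above is the pool's share of raw space times its bias times the cluster-wide PG budget; with the default mon_target_pg_per_osd of 100 (an assumption) and the 3 OSDs this log's osdmap reports, the budget is 300:

    ratio, bias = 0.0005513950275118838, 1.0   # pool 'vms'
    print(ratio * bias * 100 * 3)              # ~0.1654185..., as logged
    ratio, bias = 5.087256625643029e-07, 4.0   # pool 'cephfs.cephfs.meta'
    print(ratio * bias * 100 * 3)              # ~0.00061047..., as logged

    # the result is then quantized to a power of two (subject to pool minimums)
    # before comparison with the pool's current pg_num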
Oct 02 20:05:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:13 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/380066719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:14 compute-0 nova_compute[355794]: 2025-10-02 20:05:14.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:14 compute-0 ceph-mon[191910]: pgmap v1811: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:14 compute-0 nova_compute[355794]: 2025-10-02 20:05:14.938 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:15 compute-0 nova_compute[355794]: 2025-10-02 20:05:15.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:16 compute-0 nova_compute[355794]: 2025-10-02 20:05:16.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:16 compute-0 ceph-mon[191910]: pgmap v1812: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:18 compute-0 ceph-mon[191910]: pgmap v1813: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:18 compute-0 nova_compute[355794]: 2025-10-02 20:05:18.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:19 compute-0 nova_compute[355794]: 2025-10-02 20:05:19.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:19 compute-0 podman[449305]: 2025-10-02 20:05:19.71279836 +0000 UTC m=+0.124724177 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:05:19 compute-0 podman[449306]: 2025-10-02 20:05:19.727511084 +0000 UTC m=+0.135730179 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:05:19 compute-0 nova_compute[355794]: 2025-10-02 20:05:19.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:05:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97650641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:05:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:05:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97650641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:05:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:20 compute-0 nova_compute[355794]: 2025-10-02 20:05:20.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:20 compute-0 ceph-mon[191910]: pgmap v1814: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/97650641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:05:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/97650641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:05:20 compute-0 nova_compute[355794]: 2025-10-02 20:05:20.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:22 compute-0 ceph-mon[191910]: pgmap v1815: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:24 compute-0 nova_compute[355794]: 2025-10-02 20:05:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:24 compute-0 nova_compute[355794]: 2025-10-02 20:05:24.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:24 compute-0 ceph-mon[191910]: pgmap v1816: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:25 compute-0 nova_compute[355794]: 2025-10-02 20:05:25.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 20:05:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:25 compute-0 podman[449349]: 2025-10-02 20:05:25.706263351 +0000 UTC m=+0.123617973 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 20:05:25 compute-0 podman[449348]: 2025-10-02 20:05:25.733202144 +0000 UTC m=+0.157480888 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:05:25 compute-0 nova_compute[355794]: 2025-10-02 20:05:25.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:26 compute-0 nova_compute[355794]: 2025-10-02 20:05:25.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 02 20:05:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 02 20:05:26 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 02 20:05:26 compute-0 ceph-mon[191910]: pgmap v1817: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:26 compute-0 nova_compute[355794]: 2025-10-02 20:05:26.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:27 compute-0 ceph-mon[191910]: osdmap e137: 3 total, 3 up, 3 in
Oct 02 20:05:28 compute-0 podman[449385]: 2025-10-02 20:05:28.701271517 +0000 UTC m=+0.120920003 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:05:28 compute-0 podman[449386]: 2025-10-02 20:05:28.712806951 +0000 UTC m=+0.130762540 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 20:05:28 compute-0 podman[449387]: 2025-10-02 20:05:28.734057139 +0000 UTC m=+0.152877587 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 20:05:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 02 20:05:28 compute-0 ceph-mon[191910]: pgmap v1819: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:05:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 02 20:05:28 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 02 20:05:29 compute-0 nova_compute[355794]: 2025-10-02 20:05:29.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Oct 02 20:05:29 compute-0 podman[157186]: time="2025-10-02T20:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:05:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:05:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9067 "" "Go-http-client/1.1"
Oct 02 20:05:29 compute-0 ceph-mon[191910]: osdmap e138: 3 total, 3 up, 3 in
Oct 02 20:05:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:30 compute-0 podman[449447]: 2025-10-02 20:05:30.726156102 +0000 UTC m=+0.147700583 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, vcs-type=git, io.openshift.expose-services=)
Oct 02 20:05:30 compute-0 podman[449448]: 2025-10-02 20:05:30.728234378 +0000 UTC m=+0.140748930 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:05:30 compute-0 nova_compute[355794]: 2025-10-02 20:05:30.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:30 compute-0 ceph-mon[191910]: pgmap v1821: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Oct 02 20:05:31 compute-0 openstack_network_exporter[372736]: ERROR   20:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:05:31 compute-0 openstack_network_exporter[372736]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:31 compute-0 openstack_network_exporter[372736]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:31 compute-0 openstack_network_exporter[372736]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:05:31 compute-0 openstack_network_exporter[372736]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:05:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 57 op/s
Oct 02 20:05:31 compute-0 ceph-mon[191910]: pgmap v1822: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 57 op/s
Oct 02 20:05:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:32.319 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:32.320 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:32.320 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 58 op/s
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:05:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.503 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.505 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.539 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:05:34 compute-0 ceph-mon[191910]: pgmap v1823: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 58 op/s
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.655 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.656 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.670 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.671 2 INFO nova.compute.claims [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:05:34 compute-0 nova_compute[355794]: 2025-10-02 20:05:34.836 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.124 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.125 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.144 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.231 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:05:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186786272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.403 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.417 2 DEBUG nova.compute.provider_tree [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.437 2 DEBUG nova.scheduler.client.report [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.465 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.466 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.471 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.486 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.487 2 INFO nova.compute.claims [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:05:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 02 20:05:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 02 20:05:35 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 02 20:05:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.545 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.545 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.566 2 INFO nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.591 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:05:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2186786272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:35 compute-0 ceph-mon[191910]: osdmap e139: 3 total, 3 up, 3 in
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.662 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.699 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.703 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.703 2 INFO nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Creating image(s)
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.763 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.827 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.897 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.908 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.909 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:35 compute-0 nova_compute[355794]: 2025-10-02 20:05:35.975 2 DEBUG nova.policy [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b0d30c42cdda433ebd7d28421e967748', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cca17e8f28a243bcaf58d01bf55608e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:05:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:05:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982361712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.214 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.227 2 DEBUG nova.compute.provider_tree [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.257 2 DEBUG nova.scheduler.client.report [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.288 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.288 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:05:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:36.297 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:05:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:36.299 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.351 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.352 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.376 2 INFO nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.399 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.477 2 DEBUG nova.virt.libvirt.imagebackend [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Image locations are: [{'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/2881b8cb-4cad-4124-8a6e-ae21054c9692/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/2881b8cb-4cad-4124-8a6e-ae21054c9692/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.492 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.493 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.494 2 INFO nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Creating image(s)
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.556 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.625 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:36 compute-0 ceph-mon[191910]: pgmap v1825: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Oct 02 20:05:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/982361712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.679 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:36 compute-0 nova_compute[355794]: 2025-10-02 20:05:36.688 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:37 compute-0 nova_compute[355794]: 2025-10-02 20:05:37.049 2 DEBUG nova.policy [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f962d436a03a4b70951908eb9f826d11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0db170bd1e464f2ea61c24a9079861a4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:05:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Oct 02 20:05:37 compute-0 nova_compute[355794]: 2025-10-02 20:05:37.882 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Successfully created port: 6e6c016d-9003-4a4b-92ce-11e00a91b399 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:05:38 compute-0 ceph-mon[191910]: pgmap v1826: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.039 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.149 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.part --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.151 2 DEBUG nova.virt.images [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] 2881b8cb-4cad-4124-8a6e-ae21054c9692 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.154 2 DEBUG nova.privsep.utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.155 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.part /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.501 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.part /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.converted" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.513 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 827 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.654 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79.converted --force-share --output=json" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.658 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.720 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.746 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 f8be75db-d124-4069-a573-db7410ea2b5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.784 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 3.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.787 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.841 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.853 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:39 compute-0 nova_compute[355794]: 2025-10-02 20:05:39.893 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Successfully created port: 668a7aea-bc00-4cac-b1dd-b0786e76c474 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.232 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 f8be75db-d124-4069-a573-db7410ea2b5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.386 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.446 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] resizing rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:05:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.631 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] resizing rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:05:40 compute-0 ceph-mon[191910]: pgmap v1827: 321 pgs: 321 active+clean; 118 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 827 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.848 2 DEBUG nova.objects.instance [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lazy-loading 'migration_context' on Instance uuid f8be75db-d124-4069-a573-db7410ea2b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.961 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.962 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Ensure instance console log exists: /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.963 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.964 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.965 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:40 compute-0 nova_compute[355794]: 2025-10-02 20:05:40.996 2 DEBUG nova.objects.instance [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'migration_context' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.018 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.019 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Ensure instance console log exists: /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.020 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.021 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.021 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
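The Acquiring/acquired/released triplets around "vgpu_resources" are the standard trace of oslo.concurrency's lockutils.synchronized decorator guarding LibvirtDriver._allocate_mdevs; with no vGPUs requested the body returns immediately, which is why the lock is held for only ~0.001s. A minimal sketch of the pattern using only oslo.concurrency's public API (the function name here is illustrative, not nova's):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs_example():
        # Runs with the named lock held; lockutils emits the
        # "Acquiring lock" / "acquired" / "released" DEBUG lines seen above.
        return None

    allocate_mdevs_example()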
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.082 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Successfully updated port: 6e6c016d-9003-4a4b-92ce-11e00a91b399 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.106 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.107 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:41 compute-0 nova_compute[355794]: 2025-10-02 20:05:41.108 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:05:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 149 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 12 op/s
Oct 02 20:05:41 compute-0 podman[449878]: 2025-10-02 20:05:41.721307481 +0000 UTC m=+0.134773719 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
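The health_status=healthy entry is podman's periodic healthcheck for the multipathd container, which runs the configured test command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/multipathd) inside the container. A sketch of triggering the same check by hand, assuming podman's healthcheck run subcommand, which exits 0 when the check passes:

    import subprocess

    # Run the container's configured healthcheck once, on demand.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'multipathd'],
        capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else 'unhealthy')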
Oct 02 20:05:42 compute-0 nova_compute[355794]: 2025-10-02 20:05:42.487 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:05:42 compute-0 ceph-mon[191910]: pgmap v1828: 321 pgs: 321 active+clean; 149 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 12 op/s
Oct 02 20:05:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:43.303 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 169 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 20:05:43 compute-0 nova_compute[355794]: 2025-10-02 20:05:43.706 2 DEBUG nova.compute.manager [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:43 compute-0 nova_compute[355794]: 2025-10-02 20:05:43.707 2 DEBUG nova.compute.manager [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing instance network info cache due to event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:05:43 compute-0 nova_compute[355794]: 2025-10-02 20:05:43.707 2 DEBUG oslo_concurrency.lockutils [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.082 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Successfully updated port: 668a7aea-bc00-4cac-b1dd-b0786e76c474 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.105 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.105 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquired lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.106 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.490 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.679 2 DEBUG nova.network.neutron [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
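The instance_info_cache update logs the full network_info payload as a JSON list of VIFs. A minimal sketch of pulling the port id, MAC and fixed IPs back out of such a payload; the literal below is an abridged copy of the entry logged above:

    import json

    network_info = json.loads("""
    [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399",
      "address": "fa:16:3e:04:5e:6a",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.8", "type": "fixed"}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], vif["address"], ip["address"])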
Oct 02 20:05:44 compute-0 ceph-mon[191910]: pgmap v1829: 321 pgs: 321 active+clean; 169 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.708 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.709 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Instance network_info: |[{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.711 2 DEBUG oslo_concurrency.lockutils [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.711 2 DEBUG nova.network.neutron [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.717 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Start _get_guest_xml network_info=[{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.731 2 WARNING nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.750 2 DEBUG nova.virt.libvirt.host [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.752 2 DEBUG nova.virt.libvirt.host [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.760 2 DEBUG nova.virt.libvirt.host [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.761 2 DEBUG nova.virt.libvirt.host [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.762 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.763 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.764 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.765 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.766 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.767 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.767 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.768 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.769 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.769 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.770 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.771 2 DEBUG nova.virt.hardware [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
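With no limits or preferences from flavor or image (all 0:0:0 above), the candidate topologies reduce to the factorizations of the vCPU count, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1, which is the topology the log settles on. A toy illustration of that enumeration (not nova's implementation):

    # Enumerate (sockets, cores, threads) triples whose product is the vCPU count.
    def possible_topologies(vcpus):
        return [
            (s, c, t)
            for s in range(1, vcpus + 1)
            for c in range(1, vcpus + 1)
            for t in range(1, vcpus + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log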
Oct 02 20:05:44 compute-0 nova_compute[355794]: 2025-10-02 20:05:44.776 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:05:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769781503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.360 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
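The paired "Running cmd (subprocess)" / "returned: 0 in 0.584s" lines come from oslo.concurrency's processutils.execute, which nova uses here to shell out to the ceph CLI. A minimal sketch of the same call, with the command and flags copied from the log:

    from oslo_concurrency import processutils

    # Raises ProcessExecutionError on a non-zero exit; returns (stdout, stderr).
    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout)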
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.413 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.425 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 72 op/s
Oct 02 20:05:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/769781503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.882 2 DEBUG nova.network.neutron [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.908 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Releasing lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.909 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance network_info: |[{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.912 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start _get_guest_xml network_info=[{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.921 2 WARNING nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.928 2 DEBUG nova.virt.libvirt.host [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.929 2 DEBUG nova.virt.libvirt.host [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.934 2 DEBUG nova.virt.libvirt.host [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.934 2 DEBUG nova.virt.libvirt.host [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.935 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.935 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.936 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.937 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.937 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.937 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.938 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.938 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.939 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.939 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.940 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.940 2 DEBUG nova.virt.hardware [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.944 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:45 compute-0 nova_compute[355794]: 2025-10-02 20:05:45.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:05:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2158716072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.039 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.042 2 DEBUG nova.virt.libvirt.vif [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:05:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1217822364',display_name='tempest-AttachInterfacesUnderV243Test-server-1217822364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1217822364',id=6,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBlPsRkBlqgHx/BmjRPVDlBpptxjSDWYPURGAF2R+sS2VpCCQPiKVY59JVCOUD1P0G52Bb+7sbsVkqTPymDRO6SWoHX6J6G8pwCTS8EqALGPk0PYcRh2YWFhti1jIuVIxQ==',key_name='tempest-keypair-459640716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cca17e8f28a243bcaf58d01bf55608e9',ramdisk_id='',reservation_id='r-m3pch4t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2039487239',owner_user_name='tempest-AttachInterfacesUnderV243Test-2039487239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:05:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b0d30c42cdda433ebd7d28421e967748',uuid=f8be75db-d124-4069-a573-db7410ea2b5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.043 2 DEBUG nova.network.os_vif_util [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converting VIF {"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.046 2 DEBUG nova.network.os_vif_util [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.048 2 DEBUG nova.objects.instance [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid f8be75db-d124-4069-a573-db7410ea2b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.071 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <uuid>f8be75db-d124-4069-a573-db7410ea2b5e</uuid>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <name>instance-00000006</name>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1217822364</nova:name>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:05:44</nova:creationTime>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:user uuid="b0d30c42cdda433ebd7d28421e967748">tempest-AttachInterfacesUnderV243Test-2039487239-project-member</nova:user>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:project uuid="cca17e8f28a243bcaf58d01bf55608e9">tempest-AttachInterfacesUnderV243Test-2039487239</nova:project>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <nova:port uuid="6e6c016d-9003-4a4b-92ce-11e00a91b399">
Oct 02 20:05:46 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <system>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="serial">f8be75db-d124-4069-a573-db7410ea2b5e</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="uuid">f8be75db-d124-4069-a573-db7410ea2b5e</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </system>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <os>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </os>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <features>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </features>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/f8be75db-d124-4069-a573-db7410ea2b5e_disk">
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </source>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/f8be75db-d124-4069-a573-db7410ea2b5e_disk.config">
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </source>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:05:46 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:04:5e:6a"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <target dev="tap6e6c016d-90"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/console.log" append="off"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <video>
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </video>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:05:46 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:05:46 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:05:46 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:05:46 compute-0 nova_compute[355794]: </domain>
Oct 02 20:05:46 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
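
The <domain> dump ending at the _get_guest_xml marker above is the exact XML nova hands to libvirt for this guest. A minimal sketch of that hand-off with the libvirt Python bindings, assuming libvirt-python is installed, libvirtd is reachable at qemu:///system, and "instance-domain.xml" is a hypothetical file holding a dump like the one above:

    import libvirt

    # Load a <domain> definition such as the dump logged above (hypothetical file).
    with open("instance-domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the definition, as nova does for a new guest
        dom.create()               # power the guest on
    finally:
        conn.close()
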
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.074 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Preparing to wait for external event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.075 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.076 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.076 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
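
The Acquiring/acquired/released trio above is oslo.concurrency's standard lock logging. A minimal sketch of the pattern that produces it, assuming oslo.concurrency is installed; the lock name is copied from the log and the function body is hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("f8be75db-d124-4069-a573-db7410ea2b5e-events")
    def _create_or_get_event():
        # critical section: create or fetch the pending instance event (hypothetical body)
        pass

    _create_or_get_event()  # emits "Acquiring lock" / "acquired" / "released" at DEBUG
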
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.078 2 DEBUG nova.virt.libvirt.vif [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:05:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1217822364',display_name='tempest-AttachInterfacesUnderV243Test-server-1217822364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1217822364',id=6,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBlPsRkBlqgHx/BmjRPVDlBpptxjSDWYPURGAF2R+sS2VpCCQPiKVY59JVCOUD1P0G52Bb+7sbsVkqTPymDRO6SWoHX6J6G8pwCTS8EqALGPk0PYcRh2YWFhti1jIuVIxQ==',key_name='tempest-keypair-459640716',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cca17e8f28a243bcaf58d01bf55608e9',ramdisk_id='',reservation_id='r-m3pch4t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2039487239',owner_user_name='tempest-AttachInterfacesUnderV243Test-2039487239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:05:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b0d30c42cdda433ebd7d28421e967748',uuid=f8be75db-d124-4069-a573-db7410ea2b5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.078 2 DEBUG nova.network.os_vif_util [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converting VIF {"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.080 2 DEBUG nova.network.os_vif_util [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.081 2 DEBUG os_vif [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e6c016d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e6c016d-90, col_values=(('external_ids', {'iface-id': '6e6c016d-9003-4a4b-92ce-11e00a91b399', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:5e:6a', 'vm-uuid': 'f8be75db-d124-4069-a573-db7410ea2b5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 NetworkManager[44968]: <info>  [1759435546.1042] manager: (tap6e6c016d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.118 2 INFO os_vif [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90')
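
The stretch from "Plugging vif" to "Successfully plugged vif" is nova delegating to the os-vif library, whose ovs plugin issues the OVSDB commands seen above. A minimal sketch of that call, with the VIF fields copied from the "Converted object" line; a real call also carries the network and port-profile objects, which are omitted here:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the registered plugins ('ovs' among them)
    my_vif = vif.VIFOpenVSwitch(
        id="6e6c016d-9003-4a4b-92ce-11e00a91b399",
        address="fa:16:3e:04:5e:6a",
        vif_name="tap6e6c016d-90",
        bridge_name="br-int",
        plugin="ovs")
    info = instance_info.InstanceInfo(
        uuid="f8be75db-d124-4069-a573-db7410ea2b5e",
        name="tempest-AttachInterfacesUnderV243Test-server-1217822364")
    os_vif.plug(my_vif, info)  # drives the AddBridge/AddPort/DbSet transactions above
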
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.181 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.182 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.182 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] No VIF found with MAC fa:16:3e:04:5e:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.182 2 INFO nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Using config drive
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.228 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.317 2 DEBUG nova.compute.manager [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-changed-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.317 2 DEBUG nova.compute.manager [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Refreshing instance network info cache due to event network-changed-668a7aea-bc00-4cac-b1dd-b0786e76c474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.318 2 DEBUG oslo_concurrency.lockutils [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.318 2 DEBUG oslo_concurrency.lockutils [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.318 2 DEBUG nova.network.neutron [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Refreshing network info cache for port 668a7aea-bc00-4cac-b1dd-b0786e76c474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:05:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:05:46 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787762799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.463 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.515 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.536 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:46 compute-0 ceph-mon[191910]: pgmap v1830: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 72 op/s
Oct 02 20:05:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2158716072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:46 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/787762799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.826 2 INFO nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Creating config drive at /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.834 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvrq_vrwx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:46 compute-0 nova_compute[355794]: 2025-10-02 20:05:46.980 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvrq_vrwx" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.040 2 DEBUG nova.storage.rbd_utils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] rbd image f8be75db-d124-4069-a573-db7410ea2b5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.050 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config f8be75db-d124-4069-a573-db7410ea2b5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:05:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900962875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.095 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
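
The half-second "ceph mon dump" round trips above are nova discovering the monitor list before wiring RBD-backed disks. A minimal sketch of the same call and the fields it reads, reusing the client id and conf path from the log; the JSON field names follow recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out)["mons"]:
        # e.g. "compute-0 192.168.122.100:6789"
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))
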
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.099 2 DEBUG nova.virt.libvirt.vif [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:05:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.100 2 DEBUG nova.network.os_vif_util [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.102 2 DEBUG nova.network.os_vif_util [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.104 2 DEBUG nova.objects.instance [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.130 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <uuid>cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</uuid>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <name>instance-00000007</name>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:name>tempest-ServerActionsTestJSON-server-521568053</nova:name>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:05:45</nova:creationTime>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:user uuid="f962d436a03a4b70951908eb9f826d11">tempest-ServerActionsTestJSON-872820255-project-member</nova:user>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:project uuid="0db170bd1e464f2ea61c24a9079861a4">tempest-ServerActionsTestJSON-872820255</nova:project>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <nova:port uuid="668a7aea-bc00-4cac-b1dd-b0786e76c474">
Oct 02 20:05:47 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <system>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="serial">cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="uuid">cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </system>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <os>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </os>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <features>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </features>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk">
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </source>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config">
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </source>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:05:47 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:eb:42:64"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <target dev="tap668a7aea-bc"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/console.log" append="off"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <video>
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </video>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:05:47 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:05:47 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:05:47 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:05:47 compute-0 nova_compute[355794]: </domain>
Oct 02 20:05:47 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
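
Both guest dumps embed nova-specific metadata under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace, so they are easy to mine after the fact. A minimal sketch with the standard library, assuming domain_xml is a hypothetical string holding a dump like the one above:

    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
    root = ET.fromstring(domain_xml)                    # domain_xml: hypothetical input
    print(root.findtext("name"))                        # instance-00000007
    print(root.find(".//nova:flavor", NS).get("name"))  # m1.nano
    for port in root.findall(".//nova:port", NS):
        ip = port.find("nova:ip", NS)
        print(port.get("uuid"), ip.get("address"))      # port uuid plus its fixed IP
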
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.132 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Preparing to wait for external event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.133 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.134 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.134 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.136 2 DEBUG nova.virt.libvirt.vif [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:05:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.137 2 DEBUG nova.network.os_vif_util [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.138 2 DEBUG nova.network.os_vif_util [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.139 2 DEBUG os_vif [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap668a7aea-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.149 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap668a7aea-bc, col_values=(('external_ids', {'iface-id': '668a7aea-bc00-4cac-b1dd-b0786e76c474', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:42:64', 'vm-uuid': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.1547] manager: (tap668a7aea-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.178 2 INFO os_vif [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc')
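
The AddBridgeCommand/AddPortCommand/DbSetCommand entries in both plug sequences are ovsdbapp commands committed as one transaction. A minimal sketch of the equivalent calls, assuming ovsdbapp is installed and ovsdb-server listens on the hypothetical endpoint below; the port values are copied from the logged DbSetCommand:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch"),
        timeout=3)
    api = impl_idl.OvsdbIdl(conn)

    # Mirror of the logged transaction: add the tap port, then tag its Interface row.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap668a7aea-bc", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap668a7aea-bc",
            ("external_ids", {"iface-id": "668a7aea-bc00-4cac-b1dd-b0786e76c474",
                              "iface-status": "active",
                              "attached-mac": "fa:16:3e:eb:42:64",
                              "vm-uuid": "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9"})))
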
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.305 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.306 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.307 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] No VIF found with MAC fa:16:3e:eb:42:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.308 2 INFO nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Using config drive
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.368 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.388 2 DEBUG oslo_concurrency.processutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config f8be75db-d124-4069-a573-db7410ea2b5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.389 2 INFO nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Deleting local config drive /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e/disk.config because it was imported into RBD.
Oct 02 20:05:47 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 20:05:47 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.472 2 DEBUG nova.network.neutron [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updated VIF entry in instance network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.474 2 DEBUG nova.network.neutron [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.528 2 DEBUG oslo_concurrency.lockutils [req-b08b1544-0312-49a7-add5-157deb56189c req-d03188d9-37e1-485e-8dec-d9f2d55d11ef 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.5431] manager: (tap6e6c016d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Oct 02 20:05:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 60 op/s
Oct 02 20:05:47 compute-0 kernel: tap6e6c016d-90: entered promiscuous mode
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 ovn_controller[88435]: 2025-10-02T20:05:47Z|00062|binding|INFO|Claiming lport 6e6c016d-9003-4a4b-92ce-11e00a91b399 for this chassis.
Oct 02 20:05:47 compute-0 ovn_controller[88435]: 2025-10-02T20:05:47Z|00063|binding|INFO|6e6c016d-9003-4a4b-92ce-11e00a91b399: Claiming fa:16:3e:04:5e:6a 10.100.0.8
Oct 02 20:05:47 compute-0 ovn_controller[88435]: 2025-10-02T20:05:47Z|00064|binding|INFO|Setting lport 6e6c016d-9003-4a4b-92ce-11e00a91b399 ovn-installed in OVS
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 systemd-machined[137646]: New machine qemu-6-instance-00000006.
Oct 02 20:05:47 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 02 20:05:47 compute-0 systemd-udevd[450133]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.6489] device (tap6e6c016d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.6510] device (tap6e6c016d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:05:47 compute-0 ovn_controller[88435]: 2025-10-02T20:05:47Z|00065|binding|INFO|Setting lport 6e6c016d-9003-4a4b-92ce-11e00a91b399 up in Southbound
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.655 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:5e:6a 10.100.0.8'], port_security=['fa:16:3e:04:5e:6a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f8be75db-d124-4069-a573-db7410ea2b5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f70fd72-8355-46f5-8b19-cebed2c28970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cca17e8f28a243bcaf58d01bf55608e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a85e186d-f4f6-43d0-947c-4e33f66e56e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f4ef716-4296-4970-8894-f8467917d8f9, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=6e6c016d-9003-4a4b-92ce-11e00a91b399) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.658 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 6e6c016d-9003-4a4b-92ce-11e00a91b399 in datapath 3f70fd72-8355-46f5-8b19-cebed2c28970 bound to our chassis
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.663 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f70fd72-8355-46f5-8b19-cebed2c28970
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.689 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[09081922-1dfc-4763-a124-d57cf836e5b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.690 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f70fd72-81 in ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.694 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f70fd72-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.694 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[0e137b47-79e8-452d-8bc0-94011d4989e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.696 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbeba09-3caf-4ff3-b7a0-bc4f2553f1fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.740 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[0098afba-0a60-42b6-b2c0-3db6a17e2f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3900962875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.778 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3102a745-4c2d-4b1c-8a09-bc7730d32448]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.830 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[0e627586-744b-4759-b1c8-297f35111136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.8445] manager: (tap3f70fd72-80): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Oct 02 20:05:47 compute-0 systemd-udevd[450135]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.848 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[da55bbe4-a874-4540-bc9a-64dbd6db2f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.906 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca26893-a860-41d5-9e52-3892d835ebb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.912 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[499cc99c-c1b7-4ce1-98c4-fa4a493c5893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.914 2 INFO nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Creating config drive at /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config
Oct 02 20:05:47 compute-0 nova_compute[355794]: 2025-10-02 20:05:47.926 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6whq4nc7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:47 compute-0 NetworkManager[44968]: <info>  [1759435547.9504] device (tap3f70fd72-80): carrier: link connected
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.957 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[2bab1184-920d-4849-950f-5a4855f1e13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:47 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:47.985 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9de60817-92bf-4681-86fc-7dd363ca09d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f70fd72-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671951, 'reachable_time': 20405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450171, 'error': None, 'target': 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.021 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[bce6dff7-6361-479b-8951-25daae4f1e98]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:a3a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671951, 'tstamp': 671951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450174, 'error': None, 'target': 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.050 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3443c72a-9307-4cd5-8345-e91520a6582b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f70fd72-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a3:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671951, 'reachable_time': 20405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450175, 'error': None, 'target': 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.075 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6whq4nc7" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.120 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a283683f-4dae-43ed-ba21-fb3845cfe0c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.146 2 DEBUG nova.storage.rbd_utils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] rbd image cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.165 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.237 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb35296-6d71-449f-acc5-7d936b5d2303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.238 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f70fd72-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.238 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.239 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f70fd72-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 NetworkManager[44968]: <info>  [1759435548.2438] manager: (tap3f70fd72-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 02 20:05:48 compute-0 kernel: tap3f70fd72-80: entered promiscuous mode
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.253 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f70fd72-80, col_values=(('external_ids', {'iface-id': '406bcb5b-e20c-483d-9dc9-ab2e2e75e0f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:48 compute-0 ovn_controller[88435]: 2025-10-02T20:05:48Z|00066|binding|INFO|Releasing lport 406bcb5b-e20c-483d-9dc9-ab2e2e75e0f6 from this chassis (sb_readonly=0)
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.277 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f70fd72-8355-46f5-8b19-cebed2c28970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f70fd72-8355-46f5-8b19-cebed2c28970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.279 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d5b72a-b5ad-4c70-9a52-a60f78506de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.280 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-3f70fd72-8355-46f5-8b19-cebed2c28970
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/3f70fd72-8355-46f5-8b19-cebed2c28970.pid.haproxy
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID 3f70fd72-8355-46f5-8b19-cebed2c28970
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.280 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970', 'env', 'PROCESS_TAG=haproxy-3f70fd72-8355-46f5-8b19-cebed2c28970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f70fd72-8355-46f5-8b19-cebed2c28970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.485 2 DEBUG oslo_concurrency.processutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.486 2 INFO nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Deleting local config drive /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/disk.config because it was imported into RBD.
Oct 02 20:05:48 compute-0 systemd-udevd[450157]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:05:48 compute-0 NetworkManager[44968]: <info>  [1759435548.5517] manager: (tap668a7aea-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct 02 20:05:48 compute-0 kernel: tap668a7aea-bc: entered promiscuous mode
Oct 02 20:05:48 compute-0 ovn_controller[88435]: 2025-10-02T20:05:48Z|00067|binding|INFO|Claiming lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 for this chassis.
Oct 02 20:05:48 compute-0 ovn_controller[88435]: 2025-10-02T20:05:48Z|00068|binding|INFO|668a7aea-bc00-4cac-b1dd-b0786e76c474: Claiming fa:16:3e:eb:42:64 10.100.0.13
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 NetworkManager[44968]: <info>  [1759435548.5689] device (tap668a7aea-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.569 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:42:64 10.100.0.13'], port_security=['fa:16:3e:eb:42:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59c91fb-efec-4ddf-b699-e072223ea127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0db170bd1e464f2ea61c24a9079861a4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f24334a9-c477-489f-956b-2cd2adaeee19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34cee90-562d-4e73-b869-f45c74e302ff, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=668a7aea-bc00-4cac-b1dd-b0786e76c474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:05:48 compute-0 NetworkManager[44968]: <info>  [1759435548.5741] device (tap668a7aea-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:05:48 compute-0 ovn_controller[88435]: 2025-10-02T20:05:48Z|00069|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 ovn-installed in OVS
Oct 02 20:05:48 compute-0 ovn_controller[88435]: 2025-10-02T20:05:48Z|00070|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 up in Southbound
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:48 compute-0 systemd-machined[137646]: New machine qemu-7-instance-00000007.
Oct 02 20:05:48 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.739 2 DEBUG nova.compute.manager [req-9cd08b38-7187-473c-92a5-7fbdda82bf03 req-b1662d7f-ffbc-4e93-9240-4be4515fbc7a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.740 2 DEBUG oslo_concurrency.lockutils [req-9cd08b38-7187-473c-92a5-7fbdda82bf03 req-b1662d7f-ffbc-4e93-9240-4be4515fbc7a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.740 2 DEBUG oslo_concurrency.lockutils [req-9cd08b38-7187-473c-92a5-7fbdda82bf03 req-b1662d7f-ffbc-4e93-9240-4be4515fbc7a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.740 2 DEBUG oslo_concurrency.lockutils [req-9cd08b38-7187-473c-92a5-7fbdda82bf03 req-b1662d7f-ffbc-4e93-9240-4be4515fbc7a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:48 compute-0 nova_compute[355794]: 2025-10-02 20:05:48.740 2 DEBUG nova.compute.manager [req-9cd08b38-7187-473c-92a5-7fbdda82bf03 req-b1662d7f-ffbc-4e93-9240-4be4515fbc7a 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Processing event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:05:48 compute-0 ceph-mon[191910]: pgmap v1831: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 60 op/s
Oct 02 20:05:48 compute-0 podman[450264]: 2025-10-02 20:05:48.778692545 +0000 UTC m=+0.086538386 container create c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 20:05:48 compute-0 systemd[1]: Started libpod-conmon-c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1.scope.
Oct 02 20:05:48 compute-0 podman[450264]: 2025-10-02 20:05:48.741901255 +0000 UTC m=+0.049747106 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:05:48 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d796f0aad873155a7bdc24af8977dc3f4f3488adc8db2d26b5ca31e5a9930e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:48 compute-0 podman[450264]: 2025-10-02 20:05:48.898746489 +0000 UTC m=+0.206592350 container init c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:05:48 compute-0 podman[450264]: 2025-10-02 20:05:48.91109648 +0000 UTC m=+0.218942321 container start c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:05:48 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [NOTICE]   (450324) : New worker (450327) forked
Oct 02 20:05:48 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [NOTICE]   (450324) : Loading success.
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.972 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 668a7aea-bc00-4cac-b1dd-b0786e76c474 in datapath c59c91fb-efec-4ddf-b699-e072223ea127 unbound from our chassis
Oct 02 20:05:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.975 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:48.999 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[37e16df9-5d0b-4af6-972f-f0fb46107f17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.000 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc59c91fb-e1 in ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.006 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc59c91fb-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.006 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e04e7299-0d65-4e04-9495-34437ded061a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.008 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f53d34ad-8635-4c8c-a2b8-b02544dd002e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.053 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[96223377-3c31-48fc-a8bc-74dac6bbc7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.091 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[61f480e6-73d3-4e5c-b1d6-bda59999b3fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.149 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b202f0-5d1a-488a-ac72-e4ced42ca317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 NetworkManager[44968]: <info>  [1759435549.1583] manager: (tapc59c91fb-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.157 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[0b91e883-43b3-47fa-abea-7723ce996a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.201 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8b29cd-0428-4e41-bcca-76f5950525ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.207 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f4b747-e26c-47d7-8368-50ab02131a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 NetworkManager[44968]: <info>  [1759435549.2408] device (tapc59c91fb-e0): carrier: link connected
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.249 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[f6dde9f8-5ff0-411b-9ab1-82787584c03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.278 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b6298483-dda7-45ec-830c-d92bd22c5407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59c91fb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:05:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672080, 'reachable_time': 41346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450388, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.288 2 DEBUG nova.compute.manager [req-97f05451-3300-4436-b642-54c556acda76 req-7f5414d5-474f-4e79-8a92-9c4d5ae1d0f8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.289 2 DEBUG oslo_concurrency.lockutils [req-97f05451-3300-4436-b642-54c556acda76 req-7f5414d5-474f-4e79-8a92-9c4d5ae1d0f8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.290 2 DEBUG oslo_concurrency.lockutils [req-97f05451-3300-4436-b642-54c556acda76 req-7f5414d5-474f-4e79-8a92-9c4d5ae1d0f8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.291 2 DEBUG oslo_concurrency.lockutils [req-97f05451-3300-4436-b642-54c556acda76 req-7f5414d5-474f-4e79-8a92-9c4d5ae1d0f8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.291 2 DEBUG nova.compute.manager [req-97f05451-3300-4436-b642-54c556acda76 req-7f5414d5-474f-4e79-8a92-9c4d5ae1d0f8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Processing event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.300 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6a149a-b651-4703-9164-f7d32f01e571]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:5b0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672080, 'tstamp': 672080}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450390, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.324 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[07cf982f-afb9-4625-96a1-ba1e1628c584]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59c91fb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:05:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672080, 'reachable_time': 41346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450391, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.358 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[110fb43d-cb6b-40de-98f7-a8f0871e4401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
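[annotation] The RTM_NEWADDR/RTM_NEWLINK dictionaries in the privsep replies above are pyroute2 netlink messages, fetched by the metadata agent inside the ovnmeta- namespace. A minimal sketch reproducing the same queries directly (assumptions: run as root on this compute node while that namespace still exists; pyroute2 available, as it is for the agent):

    # Query ifindex 2 inside the ovnmeta- namespace, as the privsep daemon did above.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127')
    try:
        for addr in ns.get_addr(index=2):       # RTM_NEWADDR messages, cf. reply above
            print(addr.get_attr('IFA_ADDRESS'))
        for link in ns.get_links(2):            # RTM_NEWLINK message for ifindex 2
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    finally:
        ns.close()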
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.442 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b9289cc5-4d2d-4bec-817f-2d94a93cc4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.443 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59c91fb-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.443 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.444 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc59c91fb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:05:49 compute-0 kernel: tapc59c91fb-e0: entered promiscuous mode
Oct 02 20:05:49 compute-0 NetworkManager[44968]: <info>  [1759435549.4465] manager: (tapc59c91fb-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.451 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc59c91fb-e0, col_values=(('external_ids', {'iface-id': 'b59aad26-fd1d-4c37-adbd-b18497c4c15f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
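[annotation] The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) are ovsdbapp operations moving the tap port off br-ex onto br-int and tagging it with its logical port via external_ids:iface-id. A hedged sketch of the same calls made directly with ovsdbapp, batched into a single transaction for brevity where the agent issued three (assumptions: the local ovsdb-server socket path below; bridge, port, and iface-id copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust to the deployment's endpoint.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapc59c91fb-e0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapc59c91fb-e0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc59c91fb-e0',
            ('external_ids', {'iface-id': 'b59aad26-fd1d-4c37-adbd-b18497c4c15f'})))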
Oct 02 20:05:49 compute-0 ovn_controller[88435]: 2025-10-02T20:05:49Z|00071|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.472 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.475 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3050288a-8408-4f4d-bc77-d768a68a97da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.476 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:05:49 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:05:49.477 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'env', 'PROCESS_TAG=haproxy-c59c91fb-efec-4ddf-b699-e072223ea127', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c59c91fb-efec-4ddf-b699-e072223ea127.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
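[annotation] The rootwrap command above launches haproxy on the config dumped a few lines earlier, inside the same ovnmeta- namespace. A small sketch for checking such a rendered config by hand before (or after) the agent starts it, using haproxy's check mode, which validates the file and exits without binding anything (assumptions: run as root; paths and namespace name copied from the log):

    import subprocess

    # haproxy -c -f only parses and validates; run inside the ovnmeta- namespace,
    # mirroring the rootwrap invocation above.
    cfg = '/var/lib/neutron/ovn-metadata-proxy/c59c91fb-efec-4ddf-b699-e072223ea127.conf'
    subprocess.run(['ip', 'netns', 'exec',
                    'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127',
                    'haproxy', '-c', '-f', cfg], check=True)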
Oct 02 20:05:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 20:05:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.870 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435549.8698256, f8be75db-d124-4069-a573-db7410ea2b5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.872 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] VM Started (Lifecycle Event)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.874 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.881 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.890 2 INFO nova.virt.libvirt.driver [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Instance spawned successfully.
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.891 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.898 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.906 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.919 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.920 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.920 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.921 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.922 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.922 2 DEBUG nova.virt.libvirt.driver [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.929 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.929 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435549.871243, f8be75db-d124-4069-a573-db7410ea2b5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.929 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] VM Paused (Lifecycle Event)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.962 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.968 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435549.8802016, f8be75db-d124-4069-a573-db7410ea2b5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:49 compute-0 podman[450442]: 2025-10-02 20:05:49.968450342 +0000 UTC m=+0.082760723 container create 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.969 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] VM Resumed (Lifecycle Event)
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.986 2 INFO nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Took 14.29 seconds to spawn the instance on the hypervisor.
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.987 2 DEBUG nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:49 compute-0 nova_compute[355794]: 2025-10-02 20:05:49.995 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.001 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:05:50 compute-0 podman[450442]: 2025-10-02 20:05:49.929008584 +0000 UTC m=+0.043319065 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.022 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:05:50 compute-0 systemd[1]: Started libpod-conmon-3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3.scope.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.073 2 INFO nova.compute.manager [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Took 15.45 seconds to build instance.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.096 2 DEBUG oslo_concurrency.lockutils [None req-a2b3977f-bff8-4953-91e4-87ddce4bf000 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566b20235485254baaf2d70417afc43c003f5ea98d7dc78a03785b5aa169a59e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:50 compute-0 podman[450442]: 2025-10-02 20:05:50.123846444 +0000 UTC m=+0.238156845 container init 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 20:05:50 compute-0 podman[450454]: 2025-10-02 20:05:50.1246063 +0000 UTC m=+0.111930085 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:05:50 compute-0 podman[450442]: 2025-10-02 20:05:50.145677594 +0000 UTC m=+0.259987975 container start 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 20:05:50 compute-0 podman[450455]: 2025-10-02 20:05:50.159700333 +0000 UTC m=+0.136064757 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:05:50 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [NOTICE]   (450498) : New worker (450500) forked
Oct 02 20:05:50 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [NOTICE]   (450498) : Loading success.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.247 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435550.2462523, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.247 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Started (Lifecycle Event)
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.249 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.253 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.260 2 INFO nova.virt.libvirt.driver [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance spawned successfully.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.261 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.278 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.286 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.290 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.290 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.291 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.292 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.292 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.293 2 DEBUG nova.virt.libvirt.driver [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.331 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.332 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435550.2465186, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.332 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Paused (Lifecycle Event)
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.368 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.373 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435550.2522995, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.374 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Resumed (Lifecycle Event)
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.384 2 INFO nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Took 13.89 seconds to spawn the instance on the hypervisor.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.385 2 DEBUG nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.401 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.408 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.441 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.470 2 DEBUG nova.network.neutron [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updated VIF entry in instance network info cache for port 668a7aea-bc00-4cac-b1dd-b0786e76c474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.471 2 DEBUG nova.network.neutron [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
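[annotation] The network_info blob in the cache update above is plain JSON, so the fields most often needed when tracing a port can be pulled out directly. A quick sketch (the literal below is a trimmed excerpt of that blob, keeping only the fields read):

    import json

    blob = '''[{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474",
                "network": {"subnets": [{"ips": [{"address": "10.100.0.13"}]}],
                            "meta": {"mtu": 1442}},
                "devname": "tap668a7aea-bc"}]'''
    port = json.loads(blob)[0]
    print(port["network"]["subnets"][0]["ips"][0]["address"])  # 10.100.0.13
    print(port["network"]["meta"]["mtu"])                      # 1442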
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.476 2 INFO nova.compute.manager [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Took 15.28 seconds to build instance.
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.497 2 DEBUG oslo_concurrency.lockutils [req-67aafe80-b575-426a-96ae-ff7c29409964 req-393509c8-226b-46c1-a3b8-a3b3bdec7b05 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.499 2 DEBUG oslo_concurrency.lockutils [None req-f95dd485-bba7-4f45-9c75-230eaa4a401b f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:50 compute-0 ceph-mon[191910]: pgmap v1832: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.851 2 DEBUG nova.compute.manager [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.853 2 DEBUG oslo_concurrency.lockutils [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.854 2 DEBUG oslo_concurrency.lockutils [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.855 2 DEBUG oslo_concurrency.lockutils [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.856 2 DEBUG nova.compute.manager [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] No waiting events found dispatching network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:05:50 compute-0 nova_compute[355794]: 2025-10-02 20:05:50.857 2 WARNING nova.compute.manager [req-6e589075-ac37-4086-b9b3-77089f6aaa7a req-7c912b20-a0f2-445d-b622-cb32894bc319 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received unexpected event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 for instance with vm_state active and task_state None.
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.384 2 DEBUG nova.compute.manager [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.385 2 DEBUG oslo_concurrency.lockutils [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.385 2 DEBUG oslo_concurrency.lockutils [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.386 2 DEBUG oslo_concurrency.lockutils [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.386 2 DEBUG nova.compute.manager [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:05:51 compute-0 nova_compute[355794]: 2025-10-02 20:05:51.386 2 WARNING nova.compute.manager [req-7c376bab-8198-4c89-a50e-77247f656343 req-30849e3c-1a3c-43de-8f18-9d04d8b862bb 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state None.
Oct 02 20:05:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 02 20:05:52 compute-0 nova_compute[355794]: 2025-10-02 20:05:52.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:52 compute-0 sudo[450511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:52 compute-0 sudo[450511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:52 compute-0 sudo[450511]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:52 compute-0 sudo[450536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:05:52 compute-0 sudo[450536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:52 compute-0 sudo[450536]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:52 compute-0 ceph-mon[191910]: pgmap v1833: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 02 20:05:52 compute-0 sudo[450561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:52 compute-0 sudo[450561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:52 compute-0 sudo[450561]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:52 compute-0 sudo[450586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:05:52 compute-0 sudo[450586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:52 compute-0 unix_chkpwd[450611]: password check failed for user (root)
Oct 02 20:05:52 compute-0 sshd-session[450509]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=220.154.129.88  user=root
Oct 02 20:05:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Oct 02 20:05:53 compute-0 sudo[450586]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:05:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2c95ea6c-f130-4427-ac73-3a4960da3e16 does not exist
Oct 02 20:05:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 42a28071-d9f3-44b6-b070-231bd6559521 does not exist
Oct 02 20:05:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5429e693-5f78-404c-80db-569715086388 does not exist
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:05:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:05:53 compute-0 sudo[450643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:05:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:05:53 compute-0 sudo[450643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:53 compute-0 sudo[450643]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:53 compute-0 sudo[450668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:05:53 compute-0 sudo[450668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:53 compute-0 sudo[450668]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:54 compute-0 sudo[450693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:54 compute-0 sudo[450693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:54 compute-0 sudo[450693]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:54 compute-0 sudo[450718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:05:54 compute-0 sudo[450718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.66199528 +0000 UTC m=+0.083509560 container create 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.632686915 +0000 UTC m=+0.054201205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:05:54 compute-0 systemd[1]: Started libpod-conmon-544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834.scope.
Oct 02 20:05:54 compute-0 sshd-session[450509]: Failed password for root from 220.154.129.88 port 37182 ssh2
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.758 2 DEBUG nova.compute.manager [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.760 2 DEBUG nova.compute.manager [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing instance network info cache due to event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.761 2 DEBUG oslo_concurrency.lockutils [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.761 2 DEBUG oslo_concurrency.lockutils [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:54 compute-0 nova_compute[355794]: 2025-10-02 20:05:54.762 2 DEBUG nova.network.neutron [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:05:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:54 compute-0 ceph-mon[191910]: pgmap v1834: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.834364556 +0000 UTC m=+0.255878836 container init 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.856003572 +0000 UTC m=+0.277517852 container start 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:05:54 compute-0 friendly_keldysh[450797]: 167 167
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.8649923 +0000 UTC m=+0.286506560 container attach 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:05:54 compute-0 systemd[1]: libpod-544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834.scope: Deactivated successfully.
Oct 02 20:05:54 compute-0 conmon[450797]: conmon 544eaa5deaf91622fb2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834.scope/container/memory.events
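
The conmon <nwarn> above is benign for these short-lived cephadm helpers: the container exits and systemd tears down its cgroup before conmon can read memory.events from the scope. For scopes that are still alive, the same file reads normally; a quick sketch over the cgroup v2 path family shown in the warning (needs root on a host with this layout):

    # List memory.events for live libpod container scopes (cgroup v2).
    from pathlib import Path

    for scope in Path("/sys/fs/cgroup/machine.slice").glob("libpod-*.scope"):
        f = scope / "container" / "memory.events"
        if f.exists():  # short-lived helpers vanish before this read
            print(scope.name, f.read_text().strip().replace("\n", " "))
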
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.87181777 +0000 UTC m=+0.293332060 container died 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e94ba89276434f8e0424f309fbf0cd09391e915c3e3db655f0b34119eb104c-merged.mount: Deactivated successfully.
Oct 02 20:05:54 compute-0 podman[450781]: 2025-10-02 20:05:54.932733742 +0000 UTC m=+0.354247992 container remove 544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_keldysh, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:05:54 compute-0 systemd[1]: libpod-conmon-544eaa5deaf91622fb2bd8d68456441fc7b16b51b5083439ea3ebf4d4a70c834.scope: Deactivated successfully.
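
That completes one full helper-container lifecycle for friendly_keldysh: init, start, attach, died, and remove inside one second, with systemd deactivating the libpod and libpod-conmon scopes around it. The same bursts can be watched as they happen via podman's event stream; a sketch (template field names as podman's Go-template event format defines them):

    # Stream container lifecycle events; each cephadm helper such as
    # friendly_keldysh shows up as a create/init/start/attach/died/remove
    # burst. Runs until interrupted; assumes podman is on PATH.
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format",
         "{{.Time}} {{.Type}} {{.Status}} {{.Name}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        print(line.rstrip())
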
Oct 02 20:05:55 compute-0 podman[450819]: 2025-10-02 20:05:55.249462726 +0000 UTC m=+0.093922999 container create 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:05:55 compute-0 podman[450819]: 2025-10-02 20:05:55.220911557 +0000 UTC m=+0.065371870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:05:55 compute-0 systemd[1]: Started libpod-conmon-1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7.scope.
Oct 02 20:05:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:55 compute-0 podman[450819]: 2025-10-02 20:05:55.466168218 +0000 UTC m=+0.310628551 container init 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 20:05:55 compute-0 podman[450819]: 2025-10-02 20:05:55.498282925 +0000 UTC m=+0.342743208 container start 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:05:55 compute-0 podman[450819]: 2025-10-02 20:05:55.505256778 +0000 UTC m=+0.349717131 container attach 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 02 20:05:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:05:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 135 op/s
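
The recurring pgmap lines from ceph-mon and ceph-mgr are the cluster's heartbeat summary: 321 PGs all active+clean, 343 MiB used of 60 GiB, plus instantaneous read/write rates. The format is regular enough to scrape; a throwaway parser written against the exact line above:

    import re

    line = ("pgmap v1835: 321 pgs: 321 active+clean; 211 MiB data, "
            "343 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, "
            "2.0 MiB/s wr, 135 op/s")

    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
        line,
    )
    print(m.groupdict())
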
Oct 02 20:05:56 compute-0 sshd-session[450509]: Received disconnect from 220.154.129.88 port 37182:11:  [preauth]
Oct 02 20:05:56 compute-0 sshd-session[450509]: Disconnected from authenticating user root 220.154.129.88 port 37182 [preauth]
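
These two sshd-session lines record an unauthenticated root login attempt from 220.154.129.88 that dropped before completing authentication; on an internet-reachable node this is routine scanner noise rather than a fault on compute-0. Attempts like it can be tallied per source address straight from the journal; a sketch keyed on the syslog identifier seen above:

    # Count preauth disconnects per source IP from the systemd journal.
    import collections, re, subprocess

    out = subprocess.run(
        ["journalctl", "-t", "sshd-session", "-o", "cat", "--no-pager"],
        capture_output=True, text=True,
    ).stdout

    counts = collections.Counter(
        m.group(1) for m in re.finditer(
            r"Disconnected from authenticating user \S+ (\S+) port", out)
    )
    for ip, n in counts.most_common(10):
        print(n, ip)
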
Oct 02 20:05:56 compute-0 podman[450856]: 2025-10-02 20:05:56.703716438 +0000 UTC m=+0.114919502 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, version=9.4, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=edpm)
Oct 02 20:05:56 compute-0 agitated_khorana[450833]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:05:56 compute-0 agitated_khorana[450833]: --> relative data size: 1.0
Oct 02 20:05:56 compute-0 agitated_khorana[450833]: --> All data devices are unavailable
Oct 02 20:05:56 compute-0 podman[450854]: 2025-10-02 20:05:56.720779483 +0000 UTC m=+0.138910829 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 20:05:56 compute-0 systemd[1]: libpod-1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7.scope: Deactivated successfully.
Oct 02 20:05:56 compute-0 systemd[1]: libpod-1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7.scope: Consumed 1.167s CPU time.
Oct 02 20:05:56 compute-0 podman[450819]: 2025-10-02 20:05:56.751798546 +0000 UTC m=+1.596258829 container died 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d04aac231911446f6809f9831bf90a11dca09f57b3a9db6167b7e457e33214ad-merged.mount: Deactivated successfully.
Oct 02 20:05:56 compute-0 ceph-mon[191910]: pgmap v1835: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Oct 02 20:05:56 compute-0 podman[450819]: 2025-10-02 20:05:56.823602517 +0000 UTC m=+1.668062770 container remove 1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:05:56 compute-0 systemd[1]: libpod-conmon-1c8ec079e465e3b94e5a2c8b9e6933f1e23782335b21991102fbcb51a3cdfad7.scope: Deactivated successfully.
Oct 02 20:05:56 compute-0 sudo[450718]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:56 compute-0 sudo[450906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:56 compute-0 sudo[450906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:56 compute-0 sudo[450906]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.053 2 DEBUG nova.compute.manager [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-changed-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.053 2 DEBUG nova.compute.manager [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Refreshing instance network info cache due to event network-changed-668a7aea-bc00-4cac-b1dd-b0786e76c474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.054 2 DEBUG oslo_concurrency.lockutils [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.054 2 DEBUG oslo_concurrency.lockutils [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.055 2 DEBUG nova.network.neutron [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Refreshing network info cache for port 668a7aea-bc00-4cac-b1dd-b0786e76c474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:05:57 compute-0 sudo[450931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:05:57 compute-0 sudo[450931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:57 compute-0 sudo[450931]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:57 compute-0 ovn_controller[88435]: 2025-10-02T20:05:57Z|00072|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:05:57 compute-0 ovn_controller[88435]: 2025-10-02T20:05:57Z|00073|binding|INFO|Releasing lport 406bcb5b-e20c-483d-9dc9-ab2e2e75e0f6 from this chassis (sb_readonly=0)
Oct 02 20:05:57 compute-0 ovn_controller[88435]: 2025-10-02T20:05:57Z|00074|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:05:57 compute-0 sudo[450956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:57 compute-0 sudo[450956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:57 compute-0 sudo[450956]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:57 compute-0 sudo[450981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:05:57 compute-0 sudo[450981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.467 2 DEBUG nova.network.neutron [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updated VIF entry in instance network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.468 2 DEBUG nova.network.neutron [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:57 compute-0 nova_compute[355794]: 2025-10-02 20:05:57.516 2 DEBUG oslo_concurrency.lockutils [req-d590b2c4-5bd9-442a-9c48-b0e9b09e4375 req-61dbeec5-6da7-46c9-9c6c-948e0c49f0c9 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
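
The Updating instance_info_cache entry above carries the complete network_info document for the refreshed port: an OVN-bound OVS VIF on br-int, fixed IP 10.100.0.8/28 with floating IP 192.168.122.175, MTU 1442. It is valid JSON, so the addresses fall out directly; a sketch against a trimmed copy of the logged structure:

    import json

    # Trimmed copy of the network_info JSON logged above (one VIF).
    network_info = json.loads("""
    [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.8", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.175"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
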
Oct 02 20:05:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 29 KiB/s wr, 130 op/s
Oct 02 20:05:57 compute-0 podman[451045]: 2025-10-02 20:05:57.963707871 +0000 UTC m=+0.087394556 container create fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:57.931234806 +0000 UTC m=+0.054921471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:05:58 compute-0 systemd[1]: Started libpod-conmon-fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082.scope.
Oct 02 20:05:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:58.096937505 +0000 UTC m=+0.220624220 container init fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:58.114337508 +0000 UTC m=+0.238024183 container start fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:58.122006917 +0000 UTC m=+0.245693592 container attach fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 20:05:58 compute-0 friendly_satoshi[451060]: 167 167
Oct 02 20:05:58 compute-0 systemd[1]: libpod-fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082.scope: Deactivated successfully.
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:58.12762975 +0000 UTC m=+0.251316395 container died fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0465062c62fd145d56b7d3a3ac154b11f1064bd7d70ec16614895e152c757f6e-merged.mount: Deactivated successfully.
Oct 02 20:05:58 compute-0 podman[451045]: 2025-10-02 20:05:58.186458796 +0000 UTC m=+0.310145431 container remove fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 20:05:58 compute-0 systemd[1]: libpod-conmon-fe7ef3d86d78b56877e9a01a28c2727db4b600c54e8c27857b7f82f6419ea082.scope: Deactivated successfully.
Oct 02 20:05:58 compute-0 podman[451082]: 2025-10-02 20:05:58.534738555 +0000 UTC m=+0.126867505 container create 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:05:58 compute-0 podman[451082]: 2025-10-02 20:05:58.47093453 +0000 UTC m=+0.063063540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:05:58 compute-0 ovn_controller[88435]: 2025-10-02T20:05:58Z|00075|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:05:58 compute-0 ovn_controller[88435]: 2025-10-02T20:05:58Z|00076|binding|INFO|Releasing lport 406bcb5b-e20c-483d-9dc9-ab2e2e75e0f6 from this chassis (sb_readonly=0)
Oct 02 20:05:58 compute-0 ovn_controller[88435]: 2025-10-02T20:05:58Z|00077|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:05:58 compute-0 nova_compute[355794]: 2025-10-02 20:05:58.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:58 compute-0 systemd[1]: Started libpod-conmon-8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0.scope.
Oct 02 20:05:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf890311a7b6ae147008c08dc53451eddbb491eda4a453d7351f4f1317a1006/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf890311a7b6ae147008c08dc53451eddbb491eda4a453d7351f4f1317a1006/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf890311a7b6ae147008c08dc53451eddbb491eda4a453d7351f4f1317a1006/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf890311a7b6ae147008c08dc53451eddbb491eda4a453d7351f4f1317a1006/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:05:58 compute-0 podman[451082]: 2025-10-02 20:05:58.719842811 +0000 UTC m=+0.311971731 container init 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:05:58 compute-0 podman[451082]: 2025-10-02 20:05:58.731034727 +0000 UTC m=+0.323163667 container start 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:05:58 compute-0 podman[451082]: 2025-10-02 20:05:58.737694014 +0000 UTC m=+0.329823074 container attach 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:05:58 compute-0 ceph-mon[191910]: pgmap v1836: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 29 KiB/s wr, 130 op/s
Oct 02 20:05:59 compute-0 nova_compute[355794]: 2025-10-02 20:05:59.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:59 compute-0 quirky_thompson[451098]: {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     "0": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "devices": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "/dev/loop3"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             ],
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_name": "ceph_lv0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_size": "21470642176",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "name": "ceph_lv0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "tags": {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_name": "ceph",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.crush_device_class": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.encrypted": "0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_id": "0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.vdo": "0"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             },
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "vg_name": "ceph_vg0"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         }
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     ],
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     "1": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "devices": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "/dev/loop4"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             ],
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_name": "ceph_lv1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_size": "21470642176",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "name": "ceph_lv1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "tags": {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_name": "ceph",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.crush_device_class": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.encrypted": "0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_id": "1",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.vdo": "0"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             },
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "vg_name": "ceph_vg1"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         }
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     ],
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     "2": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "devices": [
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "/dev/loop5"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             ],
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_name": "ceph_lv2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_size": "21470642176",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "name": "ceph_lv2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "tags": {
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.cluster_name": "ceph",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.crush_device_class": "",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.encrypted": "0",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osd_id": "2",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:                 "ceph.vdo": "0"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             },
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "type": "block",
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:             "vg_name": "ceph_vg2"
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:         }
Oct 02 20:05:59 compute-0 quirky_thompson[451098]:     ]
Oct 02 20:05:59 compute-0 quirky_thompson[451098]: }
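
That closing brace ends the payload of the cephadm call logged at 20:05:57: ceph-volume lvm list --format json, run inside the quirky_thompson helper. The map keys are OSD ids (0-2), each backed by one LVM logical volume on a loop device, all tagged with the cluster fsid 6019f664-a1c2-5955-8391-692cb79a59f9. A short summarizer for that exact structure (assumes the JSON was saved to lvm_list.json):

    import json

    # Summarize `ceph-volume lvm list --format json` output like the
    # quirky_thompson block above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({int(lv['lv_size']) / 2**30:.1f} GiB) "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")
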
Oct 02 20:05:59 compute-0 systemd[1]: libpod-8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0.scope: Deactivated successfully.
Oct 02 20:05:59 compute-0 conmon[451098]: conmon 8cd688a7818d55df9efd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0.scope/container/memory.events
Oct 02 20:05:59 compute-0 podman[451082]: 2025-10-02 20:05:59.5295609 +0000 UTC m=+1.121689850 container died 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:05:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 02 20:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf890311a7b6ae147008c08dc53451eddbb491eda4a453d7351f4f1317a1006-merged.mount: Deactivated successfully.
Oct 02 20:05:59 compute-0 podman[451082]: 2025-10-02 20:05:59.631772491 +0000 UTC m=+1.223901401 container remove 8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 20:05:59 compute-0 nova_compute[355794]: 2025-10-02 20:05:59.663 2 DEBUG nova.network.neutron [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updated VIF entry in instance network info cache for port 668a7aea-bc00-4cac-b1dd-b0786e76c474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:05:59 compute-0 nova_compute[355794]: 2025-10-02 20:05:59.665 2 DEBUG nova.network.neutron [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:59 compute-0 sudo[450981]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:59 compute-0 systemd[1]: libpod-conmon-8cd688a7818d55df9efd03263d07b3c0e94b8f612ca91e4c7946a1f82563e3c0.scope: Deactivated successfully.
Oct 02 20:05:59 compute-0 nova_compute[355794]: 2025-10-02 20:05:59.703 2 DEBUG oslo_concurrency.lockutils [req-c9cbfde1-b91b-447b-a00a-749e6f870dec req-44ad7e10-6cbe-448d-8912-4a0958c0f11d 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:59 compute-0 podman[451107]: 2025-10-02 20:05:59.723342517 +0000 UTC m=+0.154032433 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 20:05:59 compute-0 podman[451109]: 2025-10-02 20:05:59.742924718 +0000 UTC m=+0.164636266 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:05:59 compute-0 podman[157186]: time="2025-10-02T20:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:05:59 compute-0 sudo[451155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48732 "" "Go-http-client/1.1"
Oct 02 20:05:59 compute-0 sudo[451155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:59 compute-0 sudo[451155]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:59 compute-0 podman[451111]: 2025-10-02 20:05:59.762416657 +0000 UTC m=+0.190691379 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 20:05:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10008 "" "Go-http-client/1.1"
Oct 02 20:05:59 compute-0 sudo[451201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:05:59 compute-0 sudo[451201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:59 compute-0 sudo[451201]: pam_unix(sudo:session): session closed for user root
Oct 02 20:05:59 compute-0 sudo[451226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:05:59 compute-0 sudo[451226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:05:59 compute-0 sudo[451226]: pam_unix(sudo:session): session closed for user root
Oct 02 20:06:00 compute-0 sudo[451251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:06:00 compute-0 sudo[451251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:06:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.664280426 +0000 UTC m=+0.077394815 container create 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.632905855 +0000 UTC m=+0.046020294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:06:00 compute-0 systemd[1]: Started libpod-conmon-896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb.scope.
Oct 02 20:06:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.812370636 +0000 UTC m=+0.225523876 container init 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.824684987 +0000 UTC m=+0.237799386 container start 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.83162417 +0000 UTC m=+0.244738539 container attach 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 20:06:00 compute-0 brave_nash[451329]: 167 167
Oct 02 20:06:00 compute-0 systemd[1]: libpod-896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb.scope: Deactivated successfully.
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.835202339 +0000 UTC m=+0.248316708 container died 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:06:00 compute-0 ceph-mon[191910]: pgmap v1837: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 02 20:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f19c428f001feb09b94c940cc58c3ae8a9000edd50e9cd789444ba24784783a-merged.mount: Deactivated successfully.
Oct 02 20:06:00 compute-0 podman[451314]: 2025-10-02 20:06:00.904793841 +0000 UTC m=+0.317908200 container remove 896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 20:06:00 compute-0 podman[451330]: 2025-10-02 20:06:00.915235631 +0000 UTC m=+0.134775949 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible)
Oct 02 20:06:00 compute-0 systemd[1]: libpod-conmon-896efb55bb512f2d6eefe40b55fc3718be1e261056b78f9b7915de4211bc5bcb.scope: Deactivated successfully.
Oct 02 20:06:00 compute-0 podman[451332]: 2025-10-02 20:06:00.925069247 +0000 UTC m=+0.138261405 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:06:01 compute-0 podman[451391]: 2025-10-02 20:06:01.140959911 +0000 UTC m=+0.067281892 container create fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:06:01 compute-0 podman[451391]: 2025-10-02 20:06:01.110621433 +0000 UTC m=+0.036943504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:06:01 compute-0 systemd[1]: Started libpod-conmon-fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def.scope.
Oct 02 20:06:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c2b8e80b54c75330a19348b473e6404e5a0475c791b7c5f595e3fb54684a97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c2b8e80b54c75330a19348b473e6404e5a0475c791b7c5f595e3fb54684a97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c2b8e80b54c75330a19348b473e6404e5a0475c791b7c5f595e3fb54684a97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c2b8e80b54c75330a19348b473e6404e5a0475c791b7c5f595e3fb54684a97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:01 compute-0 podman[451391]: 2025-10-02 20:06:01.288675554 +0000 UTC m=+0.214997585 container init fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:06:01 compute-0 podman[451391]: 2025-10-02 20:06:01.308434439 +0000 UTC m=+0.234756460 container start fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 02 20:06:01 compute-0 podman[451391]: 2025-10-02 20:06:01.316236861 +0000 UTC m=+0.242558882 container attach fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 20:06:01 compute-0 nova_compute[355794]: 2025-10-02 20:06:01.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:01 compute-0 openstack_network_exporter[372736]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:01 compute-0 openstack_network_exporter[372736]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:01 compute-0 openstack_network_exporter[372736]: ERROR   20:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:06:01 compute-0 openstack_network_exporter[372736]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:06:01 compute-0 openstack_network_exporter[372736]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:06:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 129 op/s
Oct 02 20:06:01 compute-0 nova_compute[355794]: 2025-10-02 20:06:01.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:01 compute-0 anacron[143512]: Job `cron.monthly' started
Oct 02 20:06:01 compute-0 anacron[143512]: Job `cron.monthly' terminated
Oct 02 20:06:01 compute-0 anacron[143512]: Normal exit (3 jobs run)
Oct 02 20:06:01 compute-0 ceph-mon[191910]: pgmap v1838: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 129 op/s
Oct 02 20:06:02 compute-0 nova_compute[355794]: 2025-10-02 20:06:02.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:02 compute-0 festive_kalam[451406]: {
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_id": 1,
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "type": "bluestore"
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     },
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_id": 2,
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "type": "bluestore"
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     },
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_id": 0,
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:06:02 compute-0 festive_kalam[451406]:         "type": "bluestore"
Oct 02 20:06:02 compute-0 festive_kalam[451406]:     }
Oct 02 20:06:02 compute-0 festive_kalam[451406]: }
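The JSON block above is the stdout of the short-lived festive_kalam container, i.e. the "ceph-volume ... raw list --format json" call that cephadm dispatched via sudo a few entries earlier: one entry per OSD, keyed by osd_uuid, carrying the cluster fsid, the backing LVM device, the osd_id, and the store type. A minimal sketch of consuming that output, assuming exactly the shape shown (parse_raw_list and the trimmed sample are illustrative, not part of cephadm):

import json

def parse_raw_list(raw: str) -> dict[int, str]:
    """Map osd_id -> backing device from 'ceph-volume raw list --format json' output."""
    return {entry["osd_id"]: entry["device"] for entry in json.loads(raw).values()}

# Trimmed to one of the three OSDs reported above:
sample = """
{
    "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
        "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
        "type": "bluestore"
    }
}
"""
print(parse_raw_list(sample))  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}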
Oct 02 20:06:02 compute-0 nova_compute[355794]: 2025-10-02 20:06:02.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:02 compute-0 nova_compute[355794]: 2025-10-02 20:06:02.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:06:02 compute-0 systemd[1]: libpod-fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def.scope: Deactivated successfully.
Oct 02 20:06:02 compute-0 podman[451391]: 2025-10-02 20:06:02.583799702 +0000 UTC m=+1.510121693 container died fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:06:02 compute-0 systemd[1]: libpod-fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def.scope: Consumed 1.253s CPU time.
Oct 02 20:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c2b8e80b54c75330a19348b473e6404e5a0475c791b7c5f595e3fb54684a97-merged.mount: Deactivated successfully.
Oct 02 20:06:02 compute-0 podman[451391]: 2025-10-02 20:06:02.684130541 +0000 UTC m=+1.610452532 container remove fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kalam, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:06:02 compute-0 systemd[1]: libpod-conmon-fa75db4b24b7e3595467b307c206422d3f34bbe94c1cef33f7622ada2faf7def.scope: Deactivated successfully.
Oct 02 20:06:02 compute-0 sudo[451251]: pam_unix(sudo:session): session closed for user root
Oct 02 20:06:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:06:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:06:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:06:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:06:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev aaceefc7-111d-430f-984f-7973c324564c does not exist
Oct 02 20:06:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 87da1b41-7076-4f54-8319-4524b2ce1e28 does not exist
Oct 02 20:06:02 compute-0 sudo[451455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:06:02 compute-0 sudo[451455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:06:02 compute-0 sudo[451455]: pam_unix(sudo:session): session closed for user root
Oct 02 20:06:03 compute-0 sudo[451480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:06:03 compute-0 sudo[451480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:06:03 compute-0 sudo[451480]: pam_unix(sudo:session): session closed for user root
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 128 op/s
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:06:03
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.control', 'images']
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:06:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:06:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:06:04 compute-0 nova_compute[355794]: 2025-10-02 20:06:04.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:04 compute-0 nova_compute[355794]: 2025-10-02 20:06:04.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:04 compute-0 nova_compute[355794]: 2025-10-02 20:06:04.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:06:04 compute-0 nova_compute[355794]: 2025-10-02 20:06:04.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:06:04 compute-0 ceph-mon[191910]: pgmap v1839: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 128 op/s
Oct 02 20:06:05 compute-0 nova_compute[355794]: 2025-10-02 20:06:05.180 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:05 compute-0 nova_compute[355794]: 2025-10-02 20:06:05.181 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:05 compute-0 nova_compute[355794]: 2025-10-02 20:06:05.181 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:06:05 compute-0 nova_compute[355794]: 2025-10-02 20:06:05.182 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 116 op/s
Oct 02 20:06:06 compute-0 ceph-mon[191910]: pgmap v1840: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 116 op/s
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.895 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.911 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.912 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.914 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.915 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.917 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.918 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.951 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.952 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.953 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.955 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:06:07 compute-0 nova_compute[355794]: 2025-10-02 20:06:07.956 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:06:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538347355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.473 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.591 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.592 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.600 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.601 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.602 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.609 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 nova_compute[355794]: 2025-10-02 20:06:08.610 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:06:08 compute-0 ceph-mon[191910]: pgmap v1841: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Oct 02 20:06:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1538347355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.227 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.234 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3462MB free_disk=59.91336441040039GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.235 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.237 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.330 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.331 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f8be75db-d124-4069-a573-db7410ea2b5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.332 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.333 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.334 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
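The "Final resource view" is internally consistent with the three per-instance allocations logged just above: used_vcpus = 1+1+1 = 3 (hence 8-3 = 5 free), used_disk = 2+1+1 = 4 GB, and used_ram = 512 MB reserved + (512+128+128) MB allocated = 1280 MB. A quick cross-check, with the figures copied from the log and the reserved-RAM convention inferred from the MEMORY_MB inventory reported a few entries later:

# Sanity-check nova's resource view against the logged allocations.
allocations = [
    {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # instance d4e04444-...
    {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # instance f8be75db-...
    {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # instance cc92ea21-...
]
reserved_ram_mb = 512  # from the MEMORY_MB inventory ('reserved': 512)

used_vcpus = sum(a["VCPU"] for a in allocations)
used_ram_mb = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
used_disk_gb = sum(a["DISK_GB"] for a in allocations)

assert used_vcpus == 3 and 8 - used_vcpus == 5  # total_vcpus=8, used_vcpus=3
assert used_ram_mb == 1280                      # used_ram=1280MB
assert used_disk_gb == 4                        # used_disk=4GB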
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.425 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 517 KiB/s rd, 16 op/s
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:06:09 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554721388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:09 compute-0 nova_compute[355794]: 2025-10-02 20:06:09.993 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:10 compute-0 nova_compute[355794]: 2025-10-02 20:06:10.011 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:06:10 compute-0 nova_compute[355794]: 2025-10-02 20:06:10.036 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:06:10 compute-0 nova_compute[355794]: 2025-10-02 20:06:10.067 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:06:10 compute-0 nova_compute[355794]: 2025-10-02 20:06:10.069 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:10 compute-0 nova_compute[355794]: 2025-10-02 20:06:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:10 compute-0 ceph-mon[191910]: pgmap v1842: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 517 KiB/s rd, 16 op/s
Oct 02 20:06:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2554721388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:11 compute-0 nova_compute[355794]: 2025-10-02 20:06:11.729 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:11 compute-0 nova_compute[355794]: 2025-10-02 20:06:11.732 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:11 compute-0 nova_compute[355794]: 2025-10-02 20:06:11.759 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:12 compute-0 nova_compute[355794]: 2025-10-02 20:06:12.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:12 compute-0 podman[451550]: 2025-10-02 20:06:12.727752373 +0000 UTC m=+0.142012448 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 20:06:12 compute-0 ceph-mon[191910]: pgmap v1843: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001248857910887543 of space, bias 1.0, pg target 0.3746573732662629 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:06:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:14 compute-0 nova_compute[355794]: 2025-10-02 20:06:14.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:14 compute-0 ceph-mon[191910]: pgmap v1844: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:15 compute-0 nova_compute[355794]: 2025-10-02 20:06:15.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:16 compute-0 ceph-mon[191910]: pgmap v1845: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:17 compute-0 nova_compute[355794]: 2025-10-02 20:06:17.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:18 compute-0 ceph-mon[191910]: pgmap v1846: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.476 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.478 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.500 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:06:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.596 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.598 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.615 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.616 2 INFO nova.compute.claims [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:06:19 compute-0 ceph-mon[191910]: pgmap v1847: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:19 compute-0 nova_compute[355794]: 2025-10-02 20:06:19.890 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:06:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373310469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:06:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:06:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373310469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:06:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:06:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440198721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.446 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.461 2 DEBUG nova.compute.provider_tree [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.495 2 DEBUG nova.scheduler.client.report [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.517 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.518 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:06:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.575 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.576 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.599 2 INFO nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.629 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:06:20 compute-0 podman[451593]: 2025-10-02 20:06:20.719990962 +0000 UTC m=+0.125829032 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:06:20 compute-0 podman[451594]: 2025-10-02 20:06:20.733309315 +0000 UTC m=+0.135243449 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.770 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.772 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.773 2 INFO nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Creating image(s)
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.818 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.872 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1373310469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:06:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1373310469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:06:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3440198721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.933 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:20 compute-0 nova_compute[355794]: 2025-10-02 20:06:20.944 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.057 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.058 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.059 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.059 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.098 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.106 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 a6e095a0-cb58-430d-9347-4aab385c6e69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.138 2 DEBUG nova.policy [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e87db118c0374d50a374f0ceaf961159', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a7c52835a9494ea98fd26390771eb77f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.527 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 a6e095a0-cb58-430d-9347-4aab385c6e69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.658 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] resizing rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:06:21 compute-0 ceph-mon[191910]: pgmap v1848: 321 pgs: 321 active+clean; 211 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.948 2 DEBUG nova.objects.instance [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'migration_context' on Instance uuid a6e095a0-cb58-430d-9347-4aab385c6e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.975 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.975 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Ensure instance console log exists: /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.976 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.976 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:21 compute-0 nova_compute[355794]: 2025-10-02 20:06:21.977 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.749 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.750 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.773 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.841 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.842 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.850 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:06:22 compute-0 nova_compute[355794]: 2025-10-02 20:06:22.851 2 INFO nova.compute.claims [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.001 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Successfully created port: 4af10480-1bf8-4efe-bb0e-ef9ee356a470 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.084 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:23 compute-0 ovn_controller[88435]: 2025-10-02T20:06:23Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:5e:6a 10.100.0.8
Oct 02 20:06:23 compute-0 ovn_controller[88435]: 2025-10-02T20:06:23Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:5e:6a 10.100.0.8
Oct 02 20:06:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 225 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 914 KiB/s wr, 15 op/s
Oct 02 20:06:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:06:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530281952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.621 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.631 2 DEBUG nova.compute.provider_tree [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:06:23 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/530281952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.651 2 DEBUG nova.scheduler.client.report [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.681 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.682 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.742 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.743 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.768 2 INFO nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.790 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.916 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.919 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.919 2 INFO nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Creating image(s)
Oct 02 20:06:23 compute-0 nova_compute[355794]: 2025-10-02 20:06:23.969 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.036 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.124 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.145 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.191 2 DEBUG nova.policy [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2642bfaf2f5c4f468a9a9392415e6de8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5246e6335a54a93b5e530d232da3468', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.253 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.255 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.258 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.259 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.316 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.340 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:24 compute-0 ceph-mon[191910]: pgmap v1849: 321 pgs: 321 active+clean; 225 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 914 KiB/s wr, 15 op/s
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.734 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:24 compute-0 nova_compute[355794]: 2025-10-02 20:06:24.907 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] resizing rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.170 2 DEBUG nova.objects.instance [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lazy-loading 'migration_context' on Instance uuid 018136d8-a19a-40ff-a7fb-72157ab8d8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.193 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.195 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Ensure instance console log exists: /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.196 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.196 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.196 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.431 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Successfully updated port: 4af10480-1bf8-4efe-bb0e-ef9ee356a470 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.458 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.459 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquired lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.460 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:06:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.536542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585536841, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1100, "num_deletes": 258, "total_data_size": 1517975, "memory_usage": 1546128, "flush_reason": "Manual Compaction"}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585550098, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1491631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36824, "largest_seqno": 37923, "table_properties": {"data_size": 1486407, "index_size": 2684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11441, "raw_average_key_size": 19, "raw_value_size": 1475651, "raw_average_value_size": 2509, "num_data_blocks": 120, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435489, "oldest_key_time": 1759435489, "file_creation_time": 1759435585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13633 microseconds, and 7326 cpu microseconds.
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.550177) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1491631 bytes OK
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.550198) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.553297) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.553320) EVENT_LOG_v1 {"time_micros": 1759435585553313, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.553340) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1512837, prev total WAL file size 1512837, number of live WAL files 2.
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.554415) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323532' seq:72057594037927935, type:22 .. '6C6F676D0031353036' seq:0, type:0; will stop at (end)
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1456KB)], [83(8689KB)]
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585554472, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10389401, "oldest_snapshot_seqno": -1}
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.571 2 DEBUG nova.compute.manager [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.572 2 DEBUG nova.compute.manager [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing instance network info cache due to event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.572 2 DEBUG oslo_concurrency.lockutils [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 246 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5742 keys, 10285285 bytes, temperature: kUnknown
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585614048, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10285285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10245027, "index_size": 24788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 145586, "raw_average_key_size": 25, "raw_value_size": 10139395, "raw_average_value_size": 1765, "num_data_blocks": 1021, "num_entries": 5742, "num_filter_entries": 5742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.614272) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10285285 bytes
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.616128) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.2 rd, 172.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(13.9) write-amplify(6.9) OK, records in: 6274, records dropped: 532 output_compression: NoCompression
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.616143) EVENT_LOG_v1 {"time_micros": 1759435585616136, "job": 48, "event": "compaction_finished", "compaction_time_micros": 59646, "compaction_time_cpu_micros": 23032, "output_level": 6, "num_output_files": 1, "total_output_size": 10285285, "num_input_records": 6274, "num_output_records": 5742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585616478, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435585617861, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.554237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.617971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.617976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.617977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.617979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:06:25 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:06:25.617980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
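The ceph-mon RocksDB lines above interleave free-form messages with structured EVENT_LOG_v1 JSON payloads (compaction_started, table_file_creation, compaction_finished, table_file_deletion). A minimal sketch for pulling those payloads back out and pairing up job 48's start/finish events — assuming the journal excerpt has been saved to a hypothetical plain-text file `ceph-mon.log`:

```python
import json
import re

# The JSON object follows the EVENT_LOG_v1 marker; everything before it
# is the journald/rocksdb prefix.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

def iter_events(path):
    with open(path) as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

# Pair compaction_started / compaction_finished per job id and report
# bytes in vs. bytes out, mirroring the figures rocksdb logs itself
# (job 48: 10389401 B in -> 10285285 B out, 6274 records in, 5742 out).
started = {}
for ev in iter_events("ceph-mon.log"):  # hypothetical file name
    if ev.get("event") == "compaction_started":
        started[ev["job"]] = ev
    elif ev.get("event") == "compaction_finished":
        begin = started.pop(ev["job"], {})
        print(f"job {ev['job']}: {begin.get('input_data_size', '?')} B in -> "
              f"{ev['total_output_size']} B out, "
              f"{ev['num_input_records']} records in, "
              f"{ev['num_output_records']} out, "
              f"{ev['compaction_time_micros']} us")
```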
Oct 02 20:06:25 compute-0 nova_compute[355794]: 2025-10-02 20:06:25.751 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:06:26 compute-0 nova_compute[355794]: 2025-10-02 20:06:26.323 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Successfully created port: 11f0969e-82ab-4143-81be-7c01575f2855 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:06:26 compute-0 ceph-mon[191910]: pgmap v1850: 321 pgs: 321 active+clean; 246 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:27 compute-0 ovn_controller[88435]: 2025-10-02T20:06:27Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:42:64 10.100.0.13
Oct 02 20:06:27 compute-0 ovn_controller[88435]: 2025-10-02T20:06:27Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:42:64 10.100.0.13
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.304 2 DEBUG nova.network.neutron [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updating instance_info_cache with network_info: [{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.345 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Releasing lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.345 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Instance network_info: |[{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.346 2 DEBUG oslo_concurrency.lockutils [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.346 2 DEBUG nova.network.neutron [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
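The Acquiring/Acquired/Releasing lines around "refresh_cache-a6e095a0-…" come from oslo.concurrency's lockutils: Nova serializes every rewrite of an instance's network-info cache behind a named lock, so the external-event handler (req-89e94829) and the build thread (req-477adc2c) cannot update the cache concurrently. A minimal sketch of the same pattern, assuming only the oslo.concurrency package (the cache-refresh body is a stand-in):

```python
from oslo_concurrency import lockutils

INSTANCE_UUID = "a6e095a0-cb58-430d-9347-4aab385c6e69"

def refresh_network_cache(instance_uuid):
    # Same named-lock convention the log shows: any thread touching the
    # cache for this instance must hold "refresh_cache-<uuid>" first.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # ... fetch port info from Neutron and rewrite the cache ...
        pass

refresh_network_cache(INSTANCE_UUID)
```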
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.351 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Start _get_guest_xml network_info=[{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.383 2 WARNING nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.401 2 DEBUG nova.virt.libvirt.host [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.403 2 DEBUG nova.virt.libvirt.host [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.410 2 DEBUG nova.virt.libvirt.host [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.411 2 DEBUG nova.virt.libvirt.host [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.412 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.413 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.414 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.415 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.416 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.416 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.417 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.418 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.419 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.420 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.420 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.421 2 DEBUG nova.virt.hardware [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
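The nova.virt.hardware lines walk the CPU topology selection for the 1-vCPU m1.nano flavor: with no flavor or image constraints, the limits default to 65536 sockets/cores/threads, every factorization of the vCPU count is enumerated, and 1:1:1 is the only (and therefore chosen) candidate. A rough re-creation of that enumeration, not Nova's actual code:

```python
import itertools

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count and which fit inside the limits, as the log describes.
    found = []
    for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
        if s * c * t == vcpus and (s, c, t) <= (max_sockets, max_cores, max_threads):
            found.append((s, c, t))
    return found

print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
```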
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.427 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 290 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 4.5 MiB/s wr, 95 op/s
Oct 02 20:06:27 compute-0 podman[451992]: 2025-10-02 20:06:27.672772105 +0000 UTC m=+0.088993000 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git)
Oct 02 20:06:27 compute-0 podman[451991]: 2025-10-02 20:06:27.699145656 +0000 UTC m=+0.128507300 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
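The two podman lines are periodic health_status events for the kepler and ceilometer_agent_ipmi containers, both driven by the `/openstack/healthcheck` test declared in their config_data. The same status can be read back on demand; a sketch shelling out to podman, assuming the CLI is on PATH and using the container names from the log (the Go-template field path may differ on older podman releases):

```python
import subprocess

for name in ("kepler", "ceilometer_agent_ipmi"):
    out = subprocess.run(
        # On older podman the field is .State.Healthcheck.Status instead.
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    print(name, out.stdout.strip())  # e.g. "healthy", as in health_status above
```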
Oct 02 20:06:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:06:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891497527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:27 compute-0 nova_compute[355794]: 2025-10-02 20:06:27.983 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.036 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.047 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.296 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Successfully updated port: 11f0969e-82ab-4143-81be-7c01575f2855 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.314 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.315 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquired lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.316 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.420 2 DEBUG nova.compute.manager [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-changed-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.420 2 DEBUG nova.compute.manager [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Refreshing instance network info cache due to event network-changed-11f0969e-82ab-4143-81be-7c01575f2855. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.421 2 DEBUG oslo_concurrency.lockutils [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:06:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695526809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.508 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
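Before attaching the RBD disks, nova's rbd_utils shells out to `ceph mon dump --format=json` (twice here, ~0.5 s each) to learn the monitor addresses used in the libvirt disk definitions that follow. A roughly equivalent standalone call, assuming the ceph CLI and the same `--id openstack` keyring/conf shown in the log:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
dump = json.loads(out.stdout)
# The guest XML below lists each monitor as <host name=... port="6789"/>.
for mon in dump.get("mons", []):
    print(mon["name"], mon.get("addr"))
```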
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.510 2 DEBUG nova.virt.libvirt.vif [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-332031272',display_name='tempest-TestNetworkBasicOps-server-332031272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-332031272',id=8,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+gizvNNhk87DzMIKZAzdFdrNakQS09f3n8/hsElwZOR6W+1OR1WlE16FZq4XAVBI1PnHc1iNjlKAiJ6aqdGaonrPyunVFvVvPgUMCTqaVFzbO55Hz8ocdQlO2t7Ap3sQ==',key_name='tempest-TestNetworkBasicOps-1755477534',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-3x1sbpqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:06:20Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=a6e095a0-cb58-430d-9347-4aab385c6e69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.511 2 DEBUG nova.network.os_vif_util [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.512 2 DEBUG nova.network.os_vif_util [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.513 2 DEBUG nova.objects.instance [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'pci_devices' on Instance uuid a6e095a0-cb58-430d-9347-4aab385c6e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.534 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <uuid>a6e095a0-cb58-430d-9347-4aab385c6e69</uuid>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <name>instance-00000008</name>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:name>tempest-TestNetworkBasicOps-server-332031272</nova:name>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:06:27</nova:creationTime>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:user uuid="e87db118c0374d50a374f0ceaf961159">tempest-TestNetworkBasicOps-1027837101-project-member</nova:user>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:project uuid="a7c52835a9494ea98fd26390771eb77f">tempest-TestNetworkBasicOps-1027837101</nova:project>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <nova:port uuid="4af10480-1bf8-4efe-bb0e-ef9ee356a470">
Oct 02 20:06:28 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <system>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="serial">a6e095a0-cb58-430d-9347-4aab385c6e69</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="uuid">a6e095a0-cb58-430d-9347-4aab385c6e69</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </system>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <os>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </os>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <features>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </features>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/a6e095a0-cb58-430d-9347-4aab385c6e69_disk">
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </source>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config">
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </source>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:06:28 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:d8:72:8b"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <target dev="tap4af10480-1b"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/console.log" append="off"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <video>
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </video>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:06:28 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:06:28 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:06:28 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:06:28 compute-0 nova_compute[355794]: </domain>
Oct 02 20:06:28 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
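The generated domain definition above is plain libvirt XML (plus the nova metadata namespace), so the interesting pieces — the two rbd-backed disks and the tap interface — can be pulled out with the standard library. A minimal sketch, assuming the XML between `<domain type="kvm">` and `</domain>` has been saved to a hypothetical `domain.xml`:

```python
import xml.etree.ElementTree as ET

root = ET.parse("domain.xml").getroot()  # hypothetical copy of the XML above

# Both disks are type="network" with an rbd <source>, as in the log.
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    tgt = disk.find("target")
    print(disk.get("device"), src.get("protocol"), src.get("name"),
          "->", tgt.get("dev"), tgt.get("bus"))

# The single vif appears as <interface type="ethernet"> with a tap target.
for iface in root.findall("./devices/interface"):
    print(iface.get("type"),
          iface.find("mac").get("address"),
          iface.find("target").get("dev"))
```

Against the XML above this prints the vda/rbd root disk, the sda/rbd config-drive cdrom, and the tap4af10480-1b interface with MAC fa:16:3e:d8:72:8b.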
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.536 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Preparing to wait for external event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.536 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.537 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.537 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.538 2 DEBUG nova.virt.libvirt.vif [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-332031272',display_name='tempest-TestNetworkBasicOps-server-332031272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-332031272',id=8,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+gizvNNhk87DzMIKZAzdFdrNakQS09f3n8/hsElwZOR6W+1OR1WlE16FZq4XAVBI1PnHc1iNjlKAiJ6aqdGaonrPyunVFvVvPgUMCTqaVFzbO55Hz8ocdQlO2t7Ap3sQ==',key_name='tempest-TestNetworkBasicOps-1755477534',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-3x1sbpqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:06:20Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=a6e095a0-cb58-430d-9347-4aab385c6e69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.538 2 DEBUG nova.network.os_vif_util [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.539 2 DEBUG nova.network.os_vif_util [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.540 2 DEBUG os_vif [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4af10480-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4af10480-1b, col_values=(('external_ids', {'iface-id': '4af10480-1bf8-4efe-bb0e-ef9ee356a470', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:72:8b', 'vm-uuid': 'a6e095a0-cb58-430d-9347-4aab385c6e69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:28 compute-0 NetworkManager[44968]: <info>  [1759435588.5528] manager: (tap4af10480-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.563 2 INFO os_vif [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b')
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.637 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.638 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.639 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No VIF found with MAC fa:16:3e:d8:72:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:06:28 compute-0 ceph-mon[191910]: pgmap v1851: 321 pgs: 321 active+clean; 290 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 4.5 MiB/s wr, 95 op/s
Oct 02 20:06:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2891497527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2695526809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.640 2 INFO nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Using config drive
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.695 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.727 2 DEBUG nova.network.neutron [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updated VIF entry in instance network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.727 2 DEBUG nova.network.neutron [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updating instance_info_cache with network_info: [{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.746 2 DEBUG oslo_concurrency.lockutils [req-89e94829-120d-465b-aefe-f3df64f6449e req-8400db29-ab74-4f96-a0e5-dd6586f8c317 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:28 compute-0 nova_compute[355794]: 2025-10-02 20:06:28.853 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.131 2 INFO nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Creating config drive at /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.138 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvhn8t2qx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.284 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvhn8t2qx" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.328 2 DEBUG nova.storage.rbd_utils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.338 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 365 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 662 KiB/s rd, 7.8 MiB/s wr, 176 op/s
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.626 2 DEBUG oslo_concurrency.processutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config a6e095a0-cb58-430d-9347-4aab385c6e69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.627 2 INFO nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Deleting local config drive /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69/disk.config because it was imported into RBD.
Oct 02 20:06:29 compute-0 kernel: tap4af10480-1b: entered promiscuous mode
Oct 02 20:06:29 compute-0 NetworkManager[44968]: <info>  [1759435589.7079] manager: (tap4af10480-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Oct 02 20:06:29 compute-0 ovn_controller[88435]: 2025-10-02T20:06:29Z|00078|binding|INFO|Claiming lport 4af10480-1bf8-4efe-bb0e-ef9ee356a470 for this chassis.
Oct 02 20:06:29 compute-0 ovn_controller[88435]: 2025-10-02T20:06:29Z|00079|binding|INFO|4af10480-1bf8-4efe-bb0e-ef9ee356a470: Claiming fa:16:3e:d8:72:8b 10.100.0.7
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.733 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:72:8b 10.100.0.7'], port_security=['fa:16:3e:d8:72:8b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a6e095a0-cb58-430d-9347-4aab385c6e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7c52835a9494ea98fd26390771eb77f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b59875ce-2e1e-411c-9c9d-217f385a6c78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0fe3c-2477-4bd1-a279-06ccc23b46bf, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=4af10480-1bf8-4efe-bb0e-ef9ee356a470) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.735 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 4af10480-1bf8-4efe-bb0e-ef9ee356a470 in datapath aefd878a-4767-48ff-8dcb-ccb5b8fcb84b bound to our chassis
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.739 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aefd878a-4767-48ff-8dcb-ccb5b8fcb84b
Oct 02 20:06:29 compute-0 podman[157186]: time="2025-10-02T20:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:06:29 compute-0 ovn_controller[88435]: 2025-10-02T20:06:29Z|00080|binding|INFO|Setting lport 4af10480-1bf8-4efe-bb0e-ef9ee356a470 up in Southbound
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48732 "" "Go-http-client/1.1"
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.757 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8c63c589-e5d6-436d-9462-734c4b11ce37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.759 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaefd878a-41 in ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.764 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaefd878a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.764 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[81892e97-3f6f-402e-af8a-059592a46e51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_controller[88435]: 2025-10-02T20:06:29Z|00081|binding|INFO|Setting lport 4af10480-1bf8-4efe-bb0e-ef9ee356a470 ovn-installed in OVS
Oct 02 20:06:29 compute-0 nova_compute[355794]: 2025-10-02 20:06:29.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.772 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80c8ba-fd6e-4717-8e7f-6a9bccae94db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10007 "" "Go-http-client/1.1"
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.809 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[97a49dc5-ae78-4824-9830-d7d072c8356c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 systemd-udevd[452187]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:06:29 compute-0 systemd-machined[137646]: New machine qemu-8-instance-00000008.
Oct 02 20:06:29 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.827 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7f36b7-8209-461b-8575-580c1fa81362]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 NetworkManager[44968]: <info>  [1759435589.8341] device (tap4af10480-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:06:29 compute-0 NetworkManager[44968]: <info>  [1759435589.8360] device (tap4af10480-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.863 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[cc01e431-fdc8-4a96-8299-2ee6d7842e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.871 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[96c42e0c-ce18-4c06-9574-c4587e2d0f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 NetworkManager[44968]: <info>  [1759435589.8753] manager: (tapaefd878a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Oct 02 20:06:29 compute-0 podman[452159]: 2025-10-02 20:06:29.898621336 +0000 UTC m=+0.114981963 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.910 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[07d784ce-d462-4c35-b5da-0af54e330a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.913 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[662770ae-5d80-4854-b248-4ef2d9bc8983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 podman[452161]: 2025-10-02 20:06:29.936138132 +0000 UTC m=+0.151712281 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:06:29 compute-0 NetworkManager[44968]: <info>  [1759435589.9419] device (tapaefd878a-40): carrier: link connected
Oct 02 20:06:29 compute-0 podman[452164]: 2025-10-02 20:06:29.946280666 +0000 UTC m=+0.145332721 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.949 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe472a0-715f-4672-be66-566ac44cecb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.969 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[778dc7e9-9487-4135-b478-34f43cb90b97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefd878a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f4:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676151, 'reachable_time': 33888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452252, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:29 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:29.998 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4eef3087-001c-4cf5-b331-f0bb66ced9e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:f437'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676151, 'tstamp': 676151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452253, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.016 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[bd90398a-eb9d-4bb5-8619-afd04bb7b51e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefd878a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f4:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676151, 'reachable_time': 33888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452254, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.051 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fc3471-294c-4153-9bb5-2b557508681a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.120 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c0d662-b9e7-4652-a733-7b35f80b6014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.122 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefd878a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.123 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.124 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaefd878a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:30 compute-0 kernel: tapaefd878a-40: entered promiscuous mode
Oct 02 20:06:30 compute-0 NetworkManager[44968]: <info>  [1759435590.1290] manager: (tapaefd878a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.131 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaefd878a-40, col_values=(('external_ids', {'iface-id': 'cdbc9f7e-e502-4e46-9d35-398a11c2a99d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:30 compute-0 ovn_controller[88435]: 2025-10-02T20:06:30Z|00082|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.136 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aefd878a-4767-48ff-8dcb-ccb5b8fcb84b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aefd878a-4767-48ff-8dcb-ccb5b8fcb84b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.137 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[05fc430a-0766-4b17-9388-330ff6d19e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.138 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/aefd878a-4767-48ff-8dcb-ccb5b8fcb84b.pid.haproxy
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID aefd878a-4767-48ff-8dcb-ccb5b8fcb84b
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:06:30 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:30.139 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'env', 'PROCESS_TAG=haproxy-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aefd878a-4767-48ff-8dcb-ccb5b8fcb84b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:30 compute-0 ceph-mon[191910]: pgmap v1852: 321 pgs: 321 active+clean; 365 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 662 KiB/s rd, 7.8 MiB/s wr, 176 op/s
Oct 02 20:06:30 compute-0 podman[452327]: 2025-10-02 20:06:30.665738888 +0000 UTC m=+0.099393510 container create 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 20:06:30 compute-0 podman[452327]: 2025-10-02 20:06:30.61451181 +0000 UTC m=+0.048166472 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:06:30 compute-0 systemd[1]: Started libpod-conmon-4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101.scope.
Oct 02 20:06:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7299c6461d8d2fb36f8d107a1844ec410972c362cff03175c7e97346ba63b886/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:30 compute-0 podman[452327]: 2025-10-02 20:06:30.842260855 +0000 UTC m=+0.275915527 container init 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 20:06:30 compute-0 podman[452327]: 2025-10-02 20:06:30.852762316 +0000 UTC m=+0.286416938 container start 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:06:30 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [NOTICE]   (452347) : New worker (452349) forked
Oct 02 20:06:30 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [NOTICE]   (452347) : Loading success.
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.886 2 DEBUG nova.network.neutron [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updating instance_info_cache with network_info: [{"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.922 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Releasing lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.922 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Instance network_info: |[{"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.923 2 DEBUG oslo_concurrency.lockutils [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.924 2 DEBUG nova.network.neutron [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Refreshing network info cache for port 11f0969e-82ab-4143-81be-7c01575f2855 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.929 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Start _get_guest_xml network_info=[{"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.947 2 WARNING nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.963 2 DEBUG nova.virt.libvirt.host [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.964 2 DEBUG nova.virt.libvirt.host [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.972 2 DEBUG nova.virt.libvirt.host [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.973 2 DEBUG nova.virt.libvirt.host [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.974 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.974 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.976 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.976 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.977 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.977 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.978 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.978 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.979 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.979 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.980 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.980 2 DEBUG nova.virt.hardware [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
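
The eight hardware.py lines above contain the whole topology decision for this m1.nano flavor: with no flavor or image preference (0:0:0) and per-dimension limits of 65536, nova enumerates every exact factorization of the vCPU count into sockets*cores*threads and sorts the candidates by preference; for 1 vCPU the only candidate is 1:1:1. A minimal sketch of that enumeration (not nova's actual _get_possible_cpu_topologies, just the arithmetic the log reports):

    # Sketch only: enumerate (sockets, cores, threads) splits of a vCPU
    # count under per-dimension maximums, as reported in the log above
    # (limits were sockets=65536, cores=65536, threads=65536).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        out = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            per_socket = vcpus // s
            for c in range(1, min(per_socket, max_cores) + 1):
                if per_socket % c:
                    continue
                t = per_socket // c
                if t <= max_threads:
                    out.append((s, c, t))
        return out

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"
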
Oct 02 20:06:30 compute-0 nova_compute[355794]: 2025-10-02 20:06:30.989 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.203 2 DEBUG nova.compute.manager [req-753f09a5-27e4-4ec6-9ed7-abd0900d9de0 req-8c83f82a-c29d-4a4f-9959-bd597d117f37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.204 2 DEBUG oslo_concurrency.lockutils [req-753f09a5-27e4-4ec6-9ed7-abd0900d9de0 req-8c83f82a-c29d-4a4f-9959-bd597d117f37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.205 2 DEBUG oslo_concurrency.lockutils [req-753f09a5-27e4-4ec6-9ed7-abd0900d9de0 req-8c83f82a-c29d-4a4f-9959-bd597d117f37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.205 2 DEBUG oslo_concurrency.lockutils [req-753f09a5-27e4-4ec6-9ed7-abd0900d9de0 req-8c83f82a-c29d-4a4f-9959-bd597d117f37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
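
The acquire/release pair above is oslo.concurrency's standard in-process lock pattern; the "waited"/"held" timings are emitted by lockutils itself at DEBUG. A sketch of the same per-instance "-events" lock, assuming only that oslo.concurrency is installed:

    # Sketch: serialize access to per-instance event state using the same
    # "<uuid>-events" lock-name convention seen in the log above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('a6e095a0-cb58-430d-9347-4aab385c6e69-events')
    def _pop_event():
        # critical section: pop the pending network-vif-plugged event
        pass

    _pop_event()
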
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.206 2 DEBUG nova.compute.manager [req-753f09a5-27e4-4ec6-9ed7-abd0900d9de0 req-8c83f82a-c29d-4a4f-9959-bd597d117f37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Processing event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.324 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435591.3235848, a6e095a0-cb58-430d-9347-4aab385c6e69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.325 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] VM Started (Lifecycle Event)
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.327 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.332 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.342 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.356 2 INFO nova.virt.libvirt.driver [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Instance spawned successfully.
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.358 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.363 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.384 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.385 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.386 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.386 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.387 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.388 2 DEBUG nova.virt.libvirt.driver [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.394 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.394 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435591.3236694, a6e095a0-cb58-430d-9347-4aab385c6e69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.394 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] VM Paused (Lifecycle Event)
Oct 02 20:06:31 compute-0 openstack_network_exporter[372736]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:31 compute-0 openstack_network_exporter[372736]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:31 compute-0 openstack_network_exporter[372736]: ERROR   20:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:06:31 compute-0 openstack_network_exporter[372736]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:06:31 compute-0 openstack_network_exporter[372736]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.468 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.479 2 INFO nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Took 10.71 seconds to spawn the instance on the hypervisor.
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.480 2 DEBUG nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.487 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435591.3317974, a6e095a0-cb58-430d-9347-4aab385c6e69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.487 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] VM Resumed (Lifecycle Event)
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.523 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.529 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.564 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] During sync_power_state the instance has a pending task (spawning). Skip.
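
The two "Skip." lines record nova's reconciliation guard: while the instance still has a task in flight (task_state 'spawning'), the lifecycle-driven power-state sync does not overwrite the DB power_state (0, NOSTATE) with the hypervisor's report (1, RUNNING). A compressed sketch of that decision, not nova's actual handle_lifecycle_event:

    # Sketch of the guard behind "During sync_power_state the instance
    # has a pending task (spawning). Skip." -- reconciliation is deferred
    # while any task is in flight.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:            # e.g. 'spawning'
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'                # trust the hypervisor's view
        return 'in-sync'

    assert sync_power_state(0, 1, 'spawning') == 'skip'
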
Oct 02 20:06:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 02 20:06:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:06:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2222141268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.590 2 INFO nova.compute.manager [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Took 12.04 seconds to build instance.
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.629 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
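
nova's RBD backend discovers the monitor addresses by shelling out to the ceph CLI, as the paired processutils lines above show (0.640s round trip, matched by the ceph-mon audit dispatch entries). A standalone equivalent, assuming the host has the ceph CLI, /etc/ceph/ceph.conf and a client.openstack keyring:

    # Roughly the command logged above, run directly; parses the monitor
    # list out of the JSON mon dump.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    print([m['name'] for m in json.loads(out)['mons']])
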
Oct 02 20:06:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2222141268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.683 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:31 compute-0 podman[452378]: 2025-10-02 20:06:31.685712457 +0000 UTC m=+0.110379542 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 20:06:31 compute-0 podman[452379]: 2025-10-02 20:06:31.697938776 +0000 UTC m=+0.122802415 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.704 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:31 compute-0 nova_compute[355794]: 2025-10-02 20:06:31.734 2 DEBUG oslo_concurrency.lockutils [None req-477adc2c-0c5a-4462-9594-4027e2ceab0c e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:06:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649061498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.199 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.203 2 DEBUG nova.virt.libvirt.vif [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1230541754',display_name='tempest-ServersTestJSON-server-1230541754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1230541754',id=9,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSP1BH5mG+kRZ90GOfbV5vsAJW5OFEw4m1cLQc07wJkxZGGY4Q5Mt3SH5bB0Tx2C1W0WpkJ3V5xXtFwQU5PfCA5Kdbi0IyNVtFoRvkM9Gn/jLKXZS4PnM5ujNC8cwz65Q==',key_name='tempest-keypair-1833569111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5246e6335a54a93b5e530d232da3468',ramdisk_id='',reservation_id='r-5g2aotu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1308547104',owner_user_name='tempest-ServersTestJSON-1308547104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:06:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2642bfaf2f5c4f468a9a9392415e6de8',uuid=018136d8-a19a-40ff-a7fb-72157ab8d8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.204 2 DEBUG nova.network.os_vif_util [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converting VIF {"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.206 2 DEBUG nova.network.os_vif_util [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.210 2 DEBUG nova.objects.instance [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lazy-loading 'pci_devices' on Instance uuid 018136d8-a19a-40ff-a7fb-72157ab8d8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.239 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <uuid>018136d8-a19a-40ff-a7fb-72157ab8d8b5</uuid>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <name>instance-00000009</name>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:name>tempest-ServersTestJSON-server-1230541754</nova:name>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:06:30</nova:creationTime>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:user uuid="2642bfaf2f5c4f468a9a9392415e6de8">tempest-ServersTestJSON-1308547104-project-member</nova:user>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:project uuid="e5246e6335a54a93b5e530d232da3468">tempest-ServersTestJSON-1308547104</nova:project>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <nova:port uuid="11f0969e-82ab-4143-81be-7c01575f2855">
Oct 02 20:06:32 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <system>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="serial">018136d8-a19a-40ff-a7fb-72157ab8d8b5</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="uuid">018136d8-a19a-40ff-a7fb-72157ab8d8b5</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </system>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <os>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </os>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <features>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </features>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk">
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </source>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config">
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </source>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:06:32 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:c7:68:45"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <target dev="tap11f0969e-82"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/console.log" append="off"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <video>
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </video>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:06:32 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:06:32 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:06:32 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:06:32 compute-0 nova_compute[355794]: </domain>
Oct 02 20:06:32 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
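
The domain XML dumped above is the complete guest definition nova hands to libvirt: an RBD-backed virtio root disk, a SATA cdrom for the config drive, a virtio VIF targeting tap11f0969e-82, pty serial with console logging, VNC, and a q35 machine with a pcie-root plus 24 pcie-root-ports for hotplug headroom. A short sketch that pulls the storage and network attachments back out of such a dump with the standard library (assumes the XML was saved to guest.xml):

    # Sketch: extract disk sources and VIF details from a nova-generated
    # libvirt domain XML like the one above (saved to guest.xml).
    import xml.etree.ElementTree as ET

    dom = ET.parse('guest.xml').getroot()
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'),
              '->', disk.find('target').get('dev'))
    for nic in dom.findall('./devices/interface'):
        print('vif', nic.find('target').get('dev'),
              nic.find('mac').get('address'))
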
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.241 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Preparing to wait for external event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.242 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.242 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.243 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.245 2 DEBUG nova.virt.libvirt.vif [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1230541754',display_name='tempest-ServersTestJSON-server-1230541754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1230541754',id=9,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSP1BH5mG+kRZ90GOfbV5vsAJW5OFEw4m1cLQc07wJkxZGGY4Q5Mt3SH5bB0Tx2C1W0WpkJ3V5xXtFwQU5PfCA5Kdbi0IyNVtFoRvkM9Gn/jLKXZS4PnM5ujNC8cwz65Q==',key_name='tempest-keypair-1833569111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5246e6335a54a93b5e530d232da3468',ramdisk_id='',reservation_id='r-5g2aotu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1308547104',owner_user_name='tempest-ServersTestJSON-1308547104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:06:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2642bfaf2f5c4f468a9a9392415e6de8',uuid=018136d8-a19a-40ff-a7fb-72157ab8d8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.246 2 DEBUG nova.network.os_vif_util [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converting VIF {"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.246 2 DEBUG nova.network.os_vif_util [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.247 2 DEBUG os_vif [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.249 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.250 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.254 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11f0969e-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.254 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11f0969e-82, col_values=(('external_ids', {'iface-id': '11f0969e-82ab-4143-81be-7c01575f2855', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:68:45', 'vm-uuid': '018136d8-a19a-40ff-a7fb-72157ab8d8b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:32 compute-0 NetworkManager[44968]: <info>  [1759435592.2597] manager: (tap11f0969e-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.277 2 INFO os_vif [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82')
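
The ovsdbapp transactions above (an idempotent AddBridgeCommand that caused no change, then AddPortCommand plus DbSetCommand) are what "plugging" an OVN-bound port amounts to on the host: the external_ids, especially iface-id, are what ovn-controller matches to bind the logical port. The same effect from the shell, as a sketch with the values copied from the DbSetCommand line (needs root on the compute node):

    # Equivalent of the logged AddPortCommand + DbSetCommand via ovs-vsctl.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap11f0969e-82',
         '--', 'set', 'Interface', 'tap11f0969e-82',
         'external_ids:iface-id=11f0969e-82ab-4143-81be-7c01575f2855',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:c7:68:45',
         'external_ids:vm-uuid=018136d8-a19a-40ff-a7fb-72157ab8d8b5'],
        check=True)
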
Oct 02 20:06:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:32.321 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:32.322 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:32.322 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.334 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.334 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.335 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] No VIF found with MAC fa:16:3e:c7:68:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.335 2 INFO nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Using config drive
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.370 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:32 compute-0 ceph-mon[191910]: pgmap v1853: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct 02 20:06:32 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1649061498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.909 2 DEBUG nova.network.neutron [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updated VIF entry in instance network info cache for port 11f0969e-82ab-4143-81be-7c01575f2855. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.910 2 DEBUG nova.network.neutron [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updating instance_info_cache with network_info: [{"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.926 2 DEBUG oslo_concurrency.lockutils [req-8d9cfbfa-d25f-46a3-a685-3f2e7f57d791 req-ee773d11-86e4-42b8-b3b0-9b4e55fd0738 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
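[editor's note] The instance_info_cache payload logged above is plain JSON; a minimal standalone sketch (keys trimmed to the ones present in the logged entry) of pulling the port UUID, MAC, and fixed IPs out of such a record:

    import json

    # Reduced excerpt of the network_info entry logged above (one OVS VIF).
    network_info = json.loads("""
    [{"id": "11f0969e-82ab-4143-81be-7c01575f2855",
      "address": "fa:16:3e:c7:68:45",
      "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.8",
                                        "type": "fixed"}]}]}}]
    """)

    for vif in network_info:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed)  # port UUID, MAC, ['10.100.0.8']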
Oct 02 20:06:32 compute-0 nova_compute[355794]: 2025-10-02 20:06:32.991 2 INFO nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Creating config drive at /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.005 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa38fw2_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.166 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa38fw2_n" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
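[editor's note] Outside of Nova, the config-drive build logged above reduces to a single mkisofs call; a sketch using the stdlib subprocess module, with the flags copied from the logged command (output path and source directory are whatever the deployment chose):

    import subprocess

    def build_config_drive(iso_path, content_dir, publisher):
        """Build an ISO9660 config drive the way the command logged above does."""
        cmd = ["/usr/bin/mkisofs", "-o", iso_path,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-publisher", publisher, "-quiet", "-J", "-r",
               "-V", "config-2", content_dir]
        subprocess.run(cmd, check=True)  # CalledProcessError on non-zero exit

    # Mirroring the logged invocation:
    # build_config_drive(
    #     "/var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config",
    #     "/tmp/tmpa38fw2_n",
    #     "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")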
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.235 2 DEBUG nova.storage.rbd_utils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] rbd image 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.254 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.328 2 DEBUG nova.compute.manager [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.329 2 DEBUG oslo_concurrency.lockutils [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.329 2 DEBUG oslo_concurrency.lockutils [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.330 2 DEBUG oslo_concurrency.lockutils [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.330 2 DEBUG nova.compute.manager [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] No waiting events found dispatching network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.330 2 WARNING nova.compute.manager [req-c233649e-bd3e-462c-82ad-99c39432cbca req-dff9808d-c73b-451b-a0b4-e7bbcf5d0871 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received unexpected event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 for instance with vm_state active and task_state None.
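[editor's note] The acquire/pop/warn sequence above is the usual pattern for routing an external Neutron event to whichever thread registered for it; an illustrative reduction (not Nova's actual code) of why an event with no registered waiter is logged as unexpected and dropped:

    import threading

    class InstanceEvents:
        """Toy per-instance event table implied by the log lines above."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            ev = threading.Event()
            with self._lock:  # "Acquiring lock ...-events"
                self._waiters[(instance_uuid, event_name)] = ev
            return ev

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

    events = InstanceEvents()
    if events.pop("a6e095a0-cb58-430d-9347-4aab385c6e69",
                  "network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470") is None:
        print("No waiting events found; WARNING: unexpected event")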
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.528 2 DEBUG oslo_concurrency.processutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config 018136d8-a19a-40ff-a7fb-72157ab8d8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.528 2 INFO nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Deleting local config drive /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5/disk.config because it was imported into RBD.
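[editor's note] The import-then-delete step above leaves the config drive only in Ceph; a sketch of the same two operations, with the rbd CLI arguments taken verbatim from the logged command:

    import os
    import subprocess

    def import_config_drive_to_rbd(local_path, image_name, pool="vms",
                                   client="openstack",
                                   conf="/etc/ceph/ceph.conf"):
        """rbd import the local ISO, then drop the local copy (as logged)."""
        subprocess.run(["rbd", "import", "--pool", pool, local_path, image_name,
                        "--image-format=2", "--id", client, "--conf", conf],
                       check=True)
        os.unlink(local_path)  # "Deleting local config drive ... imported into RBD"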
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 705 KiB/s rd, 7.8 MiB/s wr, 186 op/s
Oct 02 20:06:33 compute-0 kernel: tap11f0969e-82: entered promiscuous mode
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:33 compute-0 ovn_controller[88435]: 2025-10-02T20:06:33Z|00083|binding|INFO|Claiming lport 11f0969e-82ab-4143-81be-7c01575f2855 for this chassis.
Oct 02 20:06:33 compute-0 ovn_controller[88435]: 2025-10-02T20:06:33Z|00084|binding|INFO|11f0969e-82ab-4143-81be-7c01575f2855: Claiming fa:16:3e:c7:68:45 10.100.0.8
Oct 02 20:06:33 compute-0 NetworkManager[44968]: <info>  [1759435593.6272] manager: (tap11f0969e-82): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.639 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:68:45 10.100.0.8'], port_security=['fa:16:3e:c7:68:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '018136d8-a19a-40ff-a7fb-72157ab8d8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5246e6335a54a93b5e530d232da3468', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c11797e-b92f-445e-bacd-87d9bc10d56c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9dfdf6a-95a9-4df5-a018-d6058f02c851, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=11f0969e-82ab-4143-81be-7c01575f2855) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.641 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 11f0969e-82ab-4143-81be-7c01575f2855 in datapath ce5f7f3e-491e-453b-954f-909a9b2b6947 bound to our chassis
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.646 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5f7f3e-491e-453b-954f-909a9b2b6947
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:33 compute-0 ovn_controller[88435]: 2025-10-02T20:06:33Z|00085|binding|INFO|Setting lport 11f0969e-82ab-4143-81be-7c01575f2855 ovn-installed in OVS
Oct 02 20:06:33 compute-0 ovn_controller[88435]: 2025-10-02T20:06:33Z|00086|binding|INFO|Setting lport 11f0969e-82ab-4143-81be-7c01575f2855 up in Southbound
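[editor's note] Once ovn-controller reports the lport claimed and up, the state it wrote can be read back from the Southbound database; a sketch of that check, assuming ovn-sbctl is installed and pointed at this chassis's SB connection:

    import subprocess

    def lport_binding(logical_port):
        """Dump the chassis and up columns for one logical port from the SB DB."""
        return subprocess.run(
            ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            check=True, capture_output=True, text=True).stdout

    print(lport_binding("11f0969e-82ab-4143-81be-7c01575f2855"))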
Oct 02 20:06:33 compute-0 nova_compute[355794]: 2025-10-02 20:06:33.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.666 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[cd061496-3017-4a50-9da0-3ddbbdf847e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.673 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce5f7f3e-41 in ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:06:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.678 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce5f7f3e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.678 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[70f9c129-fb56-4dcc-b063-e831070b229f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 systemd-udevd[452535]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.681 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f194aec9-3bb9-4b16-8fb9-edeb552bb4e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 systemd-machined[137646]: New machine qemu-9-instance-00000009.
Oct 02 20:06:33 compute-0 NetworkManager[44968]: <info>  [1759435593.6993] device (tap11f0969e-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:06:33 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 02 20:06:33 compute-0 NetworkManager[44968]: <info>  [1759435593.7017] device (tap11f0969e-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.713 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bdc023-00c5-41df-9b24-a8b17192bf74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.741 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[35ca4a4d-2d51-4c13-9877-f4babf4789d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.783 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[1206da5d-0149-4631-aa3d-8d2704f9d328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.794 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b53a3888-8ae1-420f-8baa-24009c44d56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 NetworkManager[44968]: <info>  [1759435593.7965] manager: (tapce5f7f3e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.843 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[68fc0a6c-e29d-4f7d-a315-4d7374adbe0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.849 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[82ace848-857b-466d-b9d9-f30f904d9d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 NetworkManager[44968]: <info>  [1759435593.8782] device (tapce5f7f3e-40): carrier: link connected
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.888 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[0712d774-55d8-42e3-aea8-f306c04a3810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.918 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[712ce808-1723-4dcf-9ca7-edff86f328c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5f7f3e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:8e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676544, 'reachable_time': 40119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452569, 'error': None, 'target': 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.950 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[29233e10-f248-4a6e-ada6-ff054fb9456a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:8e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676544, 'tstamp': 676544}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452570, 'error': None, 'target': 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:33.985 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[7b82bc0c-6a9d-4328-ac88-bab1a2e10766]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5f7f3e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:8e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676544, 'reachable_time': 40119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452571, 'error': None, 'target': 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.042 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[146d77da-edac-4e22-af07-e8c42c20dfb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.153 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[c19abd0a-a512-44e7-a8fe-7e16d1dcc403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.155 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5f7f3e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.155 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.157 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5f7f3e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:34 compute-0 NetworkManager[44968]: <info>  [1759435594.1617] manager: (tapce5f7f3e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 02 20:06:34 compute-0 kernel: tapce5f7f3e-40: entered promiscuous mode
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.175 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5f7f3e-40, col_values=(('external_ids', {'iface-id': '29d60311-3694-4229-8303-4c35f164491c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:34 compute-0 ovn_controller[88435]: 2025-10-02T20:06:34Z|00087|binding|INFO|Releasing lport 29d60311-3694-4229-8303-4c35f164491c from this chassis (sb_readonly=0)
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.180 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce5f7f3e-491e-453b-954f-909a9b2b6947.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce5f7f3e-491e-453b-954f-909a9b2b6947.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.183 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9100f566-76ca-4535-88d1-0a45b266df8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.184 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-ce5f7f3e-491e-453b-954f-909a9b2b6947
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/ce5f7f3e-491e-453b-954f-909a9b2b6947.pid.haproxy
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID ce5f7f3e-491e-453b-954f-909a9b2b6947
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:06:34 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:34.185 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'env', 'PROCESS_TAG=haproxy-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce5f7f3e-491e-453b-954f-909a9b2b6947.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
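[editor's note] Stripped of rootwrap, the command above runs haproxy inside the network's ovnmeta namespace against the generated config; an equivalent privileged sketch (paths as logged; needs root, and haproxy backgrounds itself per the `daemon` directive in the config above):

    import subprocess

    def spawn_metadata_proxy(network_id,
                             conf_dir="/var/lib/neutron/ovn-metadata-proxy"):
        """Start the per-network metadata haproxy inside its namespace."""
        ns = f"ovnmeta-{network_id}"
        conf = f"{conf_dir}/{network_id}.conf"
        subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", conf],
                       check=True)

    # spawn_metadata_proxy("ce5f7f3e-491e-453b-954f-909a9b2b6947")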
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.495 2 DEBUG nova.compute.manager [req-4c2e0dcb-0896-4301-b0cf-8786ee34bd87 req-9b8cc242-13ee-41cc-ac2f-aea0ab9dde89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.496 2 DEBUG oslo_concurrency.lockutils [req-4c2e0dcb-0896-4301-b0cf-8786ee34bd87 req-9b8cc242-13ee-41cc-ac2f-aea0ab9dde89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.496 2 DEBUG oslo_concurrency.lockutils [req-4c2e0dcb-0896-4301-b0cf-8786ee34bd87 req-9b8cc242-13ee-41cc-ac2f-aea0ab9dde89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.496 2 DEBUG oslo_concurrency.lockutils [req-4c2e0dcb-0896-4301-b0cf-8786ee34bd87 req-9b8cc242-13ee-41cc-ac2f-aea0ab9dde89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:34 compute-0 nova_compute[355794]: 2025-10-02 20:06:34.497 2 DEBUG nova.compute.manager [req-4c2e0dcb-0896-4301-b0cf-8786ee34bd87 req-9b8cc242-13ee-41cc-ac2f-aea0ab9dde89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Processing event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:06:34 compute-0 ceph-mon[191910]: pgmap v1854: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 705 KiB/s rd, 7.8 MiB/s wr, 186 op/s
Oct 02 20:06:34 compute-0 podman[452645]: 2025-10-02 20:06:34.718968496 +0000 UTC m=+0.093066950 container create 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:06:34 compute-0 podman[452645]: 2025-10-02 20:06:34.669778483 +0000 UTC m=+0.043877017 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:06:34 compute-0 systemd[1]: Started libpod-conmon-3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc.scope.
Oct 02 20:06:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd0f53164890a8f36c3abb0e212b4a286404688e5a48b6e3331a998d4045de9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:06:34 compute-0 podman[452645]: 2025-10-02 20:06:34.856195368 +0000 UTC m=+0.230293822 container init 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 20:06:34 compute-0 podman[452645]: 2025-10-02 20:06:34.871171218 +0000 UTC m=+0.245269682 container start 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:06:34 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [NOTICE]   (452663) : New worker (452665) forked
Oct 02 20:06:34 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [NOTICE]   (452663) : Loading success.
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.086 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435595.086019, 018136d8-a19a-40ff-a7fb-72157ab8d8b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.087 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] VM Started (Lifecycle Event)
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.090 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.102 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.111 2 INFO nova.virt.libvirt.driver [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Instance spawned successfully.
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.112 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.120 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.126 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.140 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.141 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.143 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.144 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.145 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.146 2 DEBUG nova.virt.libvirt.driver [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.153 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.154 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435595.0861435, 018136d8-a19a-40ff-a7fb-72157ab8d8b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.154 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] VM Paused (Lifecycle Event)
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.180 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.189 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435595.096054, 018136d8-a19a-40ff-a7fb-72157ab8d8b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.190 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] VM Resumed (Lifecycle Event)
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.211 2 INFO nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Took 11.29 seconds to spawn the instance on the hypervisor.
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.212 2 DEBUG nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.215 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.227 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.255 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] During sync_power_state the instance has a pending task (spawning). Skip.
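[editor's note] The "pending task ... Skip" lines above reflect a guard in power-state synchronization: while a task such as spawning is still in flight, the hypervisor's view (VM power_state 1) is not written back over the DB's (0); an illustrative reduction of that decision, with simplified state codes (not Nova's actual constants):

    NOSTATE, RUNNING = 0, 1  # DB shows 0, the running VM reports 1

    def sync_power_state(vm_state, task_state, db_power, vm_power):
        if task_state is not None:   # e.g. "spawning"
            return "skip"            # "instance has a pending task ... Skip"
        if db_power != vm_power:
            return "update-db"       # record what the hypervisor reports
        return "in-sync"

    assert sync_power_state("building", "spawning", NOSTATE, RUNNING) == "skip"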
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.299 2 INFO nova.compute.manager [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Took 12.48 seconds to build instance.
Oct 02 20:06:35 compute-0 nova_compute[355794]: 2025-10-02 20:06:35.321 2 DEBUG oslo_concurrency.lockutils [None req-0756ede2-7538-493b-afa6-faebc1cc5166 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.0 MiB/s wr, 201 op/s
Oct 02 20:06:36 compute-0 ceph-mon[191910]: pgmap v1855: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.0 MiB/s wr, 201 op/s
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.712 2 DEBUG nova.compute.manager [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.715 2 DEBUG oslo_concurrency.lockutils [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.717 2 DEBUG oslo_concurrency.lockutils [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.720 2 DEBUG oslo_concurrency.lockutils [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.721 2 DEBUG nova.compute.manager [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] No waiting events found dispatching network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.722 2 WARNING nova.compute.manager [req-3532176f-052c-4831-b032-7508f9eda73f req-075bf02d-1a88-42de-970f-e2671ac49b36 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received unexpected event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 for instance with vm_state active and task_state None.
Oct 02 20:06:36 compute-0 nova_compute[355794]: 2025-10-02 20:06:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:36.856 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:06:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:36.858 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 216 op/s
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.787 2 DEBUG nova.compute.manager [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.787 2 DEBUG nova.compute.manager [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing instance network info cache due to event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.788 2 DEBUG oslo_concurrency.lockutils [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.788 2 DEBUG oslo_concurrency.lockutils [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:37 compute-0 nova_compute[355794]: 2025-10-02 20:06:37.788 2 DEBUG nova.network.neutron [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:06:38 compute-0 ceph-mon[191910]: pgmap v1856: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 216 op/s
Oct 02 20:06:38 compute-0 nova_compute[355794]: 2025-10-02 20:06:38.807 2 DEBUG nova.compute.manager [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-changed-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:38 compute-0 nova_compute[355794]: 2025-10-02 20:06:38.808 2 DEBUG nova.compute.manager [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Refreshing instance network info cache due to event network-changed-11f0969e-82ab-4143-81be-7c01575f2855. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:06:38 compute-0 nova_compute[355794]: 2025-10-02 20:06:38.810 2 DEBUG oslo_concurrency.lockutils [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:38 compute-0 nova_compute[355794]: 2025-10-02 20:06:38.810 2 DEBUG oslo_concurrency.lockutils [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:38 compute-0 nova_compute[355794]: 2025-10-02 20:06:38.810 2 DEBUG nova.network.neutron [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Refreshing network info cache for port 11f0969e-82ab-4143-81be-7c01575f2855 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
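
[annotation] The Acquiring/Acquired/Releasing triplets above are the standard oslo.concurrency pattern Nova uses to serialize refreshes of one instance's network info cache. A minimal sketch, assuming oslo.concurrency is installed; the lock name below simply mirrors the "refresh_cache-<uuid>" names in these lines:

    # Sketch of the lock sequence logged above, using oslo.concurrency's
    # lock() context manager (enter logs "Acquired", exit logs "Releasing").
    from oslo_concurrency import lockutils

    with lockutils.lock('refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5'):
        # Refresh the instance's network info cache here; any concurrent
        # handler for the same instance blocks until this lock is released.
        pass
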
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 224 op/s
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.633 2 DEBUG nova.network.neutron [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updated VIF entry in instance network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.634 2 DEBUG nova.network.neutron [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updating instance_info_cache with network_info: [{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.679 2 DEBUG oslo_concurrency.lockutils [req-9e0191fd-c038-4b84-aa11-67fed8aa1b4e req-8cec239d-7e5a-447c-976d-30c5c5da6a79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.784 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.785 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.786 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.786 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.787 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.790 2 INFO nova.compute.manager [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Terminating instance
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.792 2 DEBUG nova.compute.manager [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
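
[annotation] At this step the libvirt driver hard-stops the guest; the systemd machine scope for qemu-9-instance-00000009 disappears a few lines below. A rough equivalent with the plain libvirt Python bindings (an assumption for illustration; Nova goes through nova.virt.libvirt.driver rather than calling libvirt directly like this):

    import libvirt

    # "instance-00000009" is the libvirt domain name from the machine
    # scope deactivated below.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000009')
    dom.destroy()   # hard power-off; logged as "Instance destroyed successfully"
    conn.close()
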
Oct 02 20:06:39 compute-0 kernel: tap11f0969e-82 (unregistering): left promiscuous mode
Oct 02 20:06:39 compute-0 NetworkManager[44968]: <info>  [1759435599.8992] device (tap11f0969e-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:06:39 compute-0 ovn_controller[88435]: 2025-10-02T20:06:39Z|00088|binding|INFO|Releasing lport 11f0969e-82ab-4143-81be-7c01575f2855 from this chassis (sb_readonly=0)
Oct 02 20:06:39 compute-0 ovn_controller[88435]: 2025-10-02T20:06:39Z|00089|binding|INFO|Setting lport 11f0969e-82ab-4143-81be-7c01575f2855 down in Southbound
Oct 02 20:06:39 compute-0 ovn_controller[88435]: 2025-10-02T20:06:39Z|00090|binding|INFO|Removing iface tap11f0969e-82 ovn-installed in OVS
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:39 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:39.941 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:68:45 10.100.0.8'], port_security=['fa:16:3e:c7:68:45 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '018136d8-a19a-40ff-a7fb-72157ab8d8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5246e6335a54a93b5e530d232da3468', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c11797e-b92f-445e-bacd-87d9bc10d56c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9dfdf6a-95a9-4df5-a018-d6058f02c851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=11f0969e-82ab-4143-81be-7c01575f2855) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:06:39 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:39.942 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 11f0969e-82ab-4143-81be-7c01575f2855 in datapath ce5f7f3e-491e-453b-954f-909a9b2b6947 unbound from our chassis
Oct 02 20:06:39 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:39.947 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce5f7f3e-491e-453b-954f-909a9b2b6947, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:06:39 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:39.948 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9a001809-8e3c-40b5-8878-0eab079629a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:39 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:39.949 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947 namespace which is not needed anymore
Oct 02 20:06:39 compute-0 nova_compute[355794]: 2025-10-02 20:06:39.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:39 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 02 20:06:39 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 6.321s CPU time.
Oct 02 20:06:39 compute-0 systemd-machined[137646]: Machine qemu-9-instance-00000009 terminated.
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.051 2 INFO nova.virt.libvirt.driver [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Instance destroyed successfully.
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.052 2 DEBUG nova.objects.instance [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lazy-loading 'resources' on Instance uuid 018136d8-a19a-40ff-a7fb-72157ab8d8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.084 2 DEBUG nova.virt.libvirt.vif [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1230541754',display_name='tempest-ServersTestJSON-server-1230541754',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1230541754',id=9,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFSP1BH5mG+kRZ90GOfbV5vsAJW5OFEw4m1cLQc07wJkxZGGY4Q5Mt3SH5bB0Tx2C1W0WpkJ3V5xXtFwQU5PfCA5Kdbi0IyNVtFoRvkM9Gn/jLKXZS4PnM5ujNC8cwz65Q==',key_name='tempest-keypair-1833569111',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:06:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5246e6335a54a93b5e530d232da3468',ramdisk_id='',reservation_id='r-5g2aotu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1308547104',owner_user_name='tempest-ServersTestJSON-1308547104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:06:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2642bfaf2f5c4f468a9a9392415e6de8',uuid=018136d8-a19a-40ff-a7fb-72157ab8d8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.084 2 DEBUG nova.network.os_vif_util [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converting VIF {"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.086 2 DEBUG nova.network.os_vif_util [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.086 2 DEBUG os_vif [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11f0969e-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.101 2 INFO os_vif [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:68:45,bridge_name='br-int',has_traffic_filtering=True,id=11f0969e-82ab-4143-81be-7c01575f2855,network=Network(ce5f7f3e-491e-453b-954f-909a9b2b6947),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f0969e-82')
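
[annotation] The unplug above is committed as a single ovsdbapp transaction, visible in the DelPortCommand line. A sketch of the same operation through ovsdbapp's Open_vSwitch API; the socket path and timeout are assumptions, and os-vif actually keeps a long-lived IDL connection rather than building one per call:

    # Delete the tap port from br-int, matching
    # DelPortCommand(port=tap11f0969e-82, bridge=br-int, if_exists=True).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    ovs.del_port('tap11f0969e-82', bridge='br-int',
                 if_exists=True).execute(check_error=True)
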
Oct 02 20:06:40 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [NOTICE]   (452663) : haproxy version is 2.8.14-c23fe91
Oct 02 20:06:40 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [NOTICE]   (452663) : path to executable is /usr/sbin/haproxy
Oct 02 20:06:40 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [WARNING]  (452663) : Exiting Master process...
Oct 02 20:06:40 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [ALERT]    (452663) : Current worker (452665) exited with code 143 (Terminated)
Oct 02 20:06:40 compute-0 neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947[452659]: [WARNING]  (452663) : All workers exited. Exiting... (0)
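
[annotation] The haproxy master exits on SIGTERM (worker code 143) because the metadata agent stops and removes its per-network proxy container; the privsep reply a few lines below shows the stop/delete script output verbatim. A hedged shell-out sketch of that teardown (the agent drives it through oslo.privsep helpers, not subprocess like this):

    import subprocess

    # Container name taken from the log lines above.
    name = 'neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM -> exit 143
    subprocess.run(['podman', 'rm', name], check=True)    # "container remove" below
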
Oct 02 20:06:40 compute-0 systemd[1]: libpod-3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc.scope: Deactivated successfully.
Oct 02 20:06:40 compute-0 podman[452705]: 2025-10-02 20:06:40.186777262 +0000 UTC m=+0.065269868 container died 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc-userdata-shm.mount: Deactivated successfully.
Oct 02 20:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cd0f53164890a8f36c3abb0e212b4a286404688e5a48b6e3331a998d4045de9-merged.mount: Deactivated successfully.
Oct 02 20:06:40 compute-0 podman[452705]: 2025-10-02 20:06:40.251424386 +0000 UTC m=+0.129917002 container cleanup 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:06:40 compute-0 systemd[1]: libpod-conmon-3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc.scope: Deactivated successfully.
Oct 02 20:06:40 compute-0 podman[452753]: 2025-10-02 20:06:40.380670302 +0000 UTC m=+0.087931737 container remove 3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.393 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ab20b568-ee57-4735-97d0-9d3e2e33ff93]: (4, ('Thu Oct  2 08:06:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947 (3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc)\n3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc\nThu Oct  2 08:06:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947 (3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc)\n3feab9901906e58d53c296ad2d70ceb80a68ec63873e51f33b1d2309a35fb8fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.396 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f291a96d-ed77-456d-925a-31acf34b6dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.397 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5f7f3e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 kernel: tapce5f7f3e-40: left promiscuous mode
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.436 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2754b2d7-7337-492d-af2c-afb25e68e163]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.457 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e973224a-6f40-4dbd-acee-d6a185eefcf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.459 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1fec94d5-bc18-4f86-b310-cb676260846f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.484 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2b684a-1736-47f3-b075-69c57bad39f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676534, 'reachable_time': 28316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452768, 'error': None, 'target': 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:06:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dce5f7f3e\x2d491e\x2d453b\x2d954f\x2d909a9b2b6947.mount: Deactivated successfully.
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.503 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.504 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e11602-2e6b-4fb8-8ede-26a20dde131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
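
[annotation] With no VIF ports left on the datapath, the agent deletes the ovnmeta namespace, which is what the remove_netns line above reports. A minimal sketch assuming pyroute2 (the library neutron's privileged ip_lib uses under oslo.privsep):

    from pyroute2 import netns

    ns = 'ovnmeta-ce5f7f3e-491e-453b-954f-909a9b2b6947'
    if ns in netns.listnetns():
        netns.remove(ns)  # corresponds to "Namespace ... deleted" above
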
Oct 02 20:06:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:40 compute-0 ceph-mon[191910]: pgmap v1857: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 224 op/s
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.781 2 INFO nova.virt.libvirt.driver [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Deleting instance files /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5_del
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.782 2 INFO nova.virt.libvirt.driver [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Deletion of /var/lib/nova/instances/018136d8-a19a-40ff-a7fb-72157ab8d8b5_del complete
Oct 02 20:06:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:06:40.861 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.885 2 INFO nova.compute.manager [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Took 1.09 seconds to destroy the instance on the hypervisor.
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.886 2 DEBUG oslo.service.loopingcall [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.887 2 DEBUG nova.compute.manager [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.887 2 DEBUG nova.network.neutron [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.895 2 DEBUG nova.network.neutron [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updated VIF entry in instance network info cache for port 11f0969e-82ab-4143-81be-7c01575f2855. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.896 2 DEBUG nova.network.neutron [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updating instance_info_cache with network_info: [{"id": "11f0969e-82ab-4143-81be-7c01575f2855", "address": "fa:16:3e:c7:68:45", "network": {"id": "ce5f7f3e-491e-453b-954f-909a9b2b6947", "bridge": "br-int", "label": "tempest-ServersTestJSON-658708010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5246e6335a54a93b5e530d232da3468", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f0969e-82", "ovs_interfaceid": "11f0969e-82ab-4143-81be-7c01575f2855", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.926 2 DEBUG oslo_concurrency.lockutils [req-1af9eb78-e40d-4dc7-9319-07f68525e1b9 req-f111c60c-0c77-499e-b510-fffe9f694e13 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-018136d8-a19a-40ff-a7fb-72157ab8d8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.968 2 DEBUG nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-unplugged-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.968 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.969 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.969 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.970 2 DEBUG nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] No waiting events found dispatching network-vif-unplugged-11f0969e-82ab-4143-81be-7c01575f2855 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.970 2 DEBUG nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-unplugged-11f0969e-82ab-4143-81be-7c01575f2855 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.970 2 DEBUG nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.971 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.971 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.971 2 DEBUG oslo_concurrency.lockutils [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.972 2 DEBUG nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] No waiting events found dispatching network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:06:40 compute-0 nova_compute[355794]: 2025-10-02 20:06:40.972 2 WARNING nova.compute.manager [req-955674a6-5373-4fd2-87b0-a14e0c9c042a req-9065fa18-9f23-45bf-a81f-43f18aacef9b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received unexpected event network-vif-plugged-11f0969e-82ab-4143-81be-7c01575f2855 for instance with vm_state active and task_state deleting.
Oct 02 20:06:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 82 KiB/s wr, 153 op/s
Oct 02 20:06:41 compute-0 nova_compute[355794]: 2025-10-02 20:06:41.711 2 DEBUG nova.network.neutron [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:41 compute-0 nova_compute[355794]: 2025-10-02 20:06:41.750 2 INFO nova.compute.manager [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Took 0.86 seconds to deallocate network for instance.
Oct 02 20:06:41 compute-0 nova_compute[355794]: 2025-10-02 20:06:41.863 2 DEBUG nova.compute.manager [req-0ec58ddf-441e-415d-9d95-7387ef93e9a5 req-bfb7cd23-355f-4010-aca8-ed684f8f6817 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Received event network-vif-deleted-11f0969e-82ab-4143-81be-7c01575f2855 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:06:41 compute-0 nova_compute[355794]: 2025-10-02 20:06:41.992 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:41 compute-0 nova_compute[355794]: 2025-10-02 20:06:41.992 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:42 compute-0 nova_compute[355794]: 2025-10-02 20:06:42.129 2 DEBUG oslo_concurrency.processutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:06:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774883291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:42 compute-0 nova_compute[355794]: 2025-10-02 20:06:42.659 2 DEBUG oslo_concurrency.processutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
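
[annotation] The resource tracker sizes the RBD-backed disk pool by shelling out to ceph df through oslo.concurrency's processutils, the exact command logged above (and echoed by ceph-mon's audit channel). A self-contained sketch; parsing out 'stats' is an assumption about which part of the JSON the caller consumes:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']  # cluster-wide usage figures
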
Oct 02 20:06:42 compute-0 nova_compute[355794]: 2025-10-02 20:06:42.672 2 DEBUG nova.compute.provider_tree [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:06:42 compute-0 nova_compute[355794]: 2025-10-02 20:06:42.705 2 DEBUG nova.scheduler.client.report [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
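
[annotation] The inventory dict above implies the capacity placement will report for this provider; per the standard placement formula, usable capacity is (total - reserved) * allocation_ratio. Worked out from the logged numbers:

    # Capacity implied by the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g}')  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
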
Oct 02 20:06:42 compute-0 ceph-mon[191910]: pgmap v1858: 321 pgs: 321 active+clean; 370 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 82 KiB/s wr, 153 op/s
Oct 02 20:06:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/774883291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:06:42 compute-0 nova_compute[355794]: 2025-10-02 20:06:42.763 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:43 compute-0 nova_compute[355794]: 2025-10-02 20:06:43.072 2 INFO nova.scheduler.client.report [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Deleted allocations for instance 018136d8-a19a-40ff-a7fb-72157ab8d8b5
Oct 02 20:06:43 compute-0 nova_compute[355794]: 2025-10-02 20:06:43.168 2 DEBUG oslo_concurrency.lockutils [None req-4dc8fb14-9842-4b9d-8516-5331a77ca49c 2642bfaf2f5c4f468a9a9392415e6de8 e5246e6335a54a93b5e530d232da3468 - - default default] Lock "018136d8-a19a-40ff-a7fb-72157ab8d8b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
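
[annotation] The terminate critical section held the per-instance lock for 3.384s end to end. A small self-contained helper for mining a journal dump like this one: it extracts every oslo.concurrency ":: held N.NNNs" figure so slow critical sections stand out:

    import re
    import sys

    # Matches lines like:
    #   Lock "..." "released" by "..." :: held 0.771s inner ...
    HELD = re.compile(r'Lock "([^"]+)" "released" by .*? :: held ([\d.]+)s')

    for line in sys.stdin:
        m = HELD.search(line)
        if m:
            print(f'{float(m.group(2)):8.3f}s  {m.group(1)}')

Feed it the journal on stdin and sort descending to surface the longest holds; the unit or file you pipe in from will vary by deployment.
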
Oct 02 20:06:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 355 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 53 KiB/s wr, 160 op/s
Oct 02 20:06:43 compute-0 podman[452792]: 2025-10-02 20:06:43.743625521 +0000 UTC m=+0.163535211 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:06:44 compute-0 nova_compute[355794]: 2025-10-02 20:06:44.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:44 compute-0 ceph-mon[191910]: pgmap v1859: 321 pgs: 321 active+clean; 355 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 53 KiB/s wr, 160 op/s
Oct 02 20:06:45 compute-0 nova_compute[355794]: 2025-10-02 20:06:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 171 op/s
Oct 02 20:06:46 compute-0 ceph-mon[191910]: pgmap v1860: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 171 op/s
Oct 02 20:06:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 KiB/s wr, 140 op/s
Oct 02 20:06:48 compute-0 ceph-mon[191910]: pgmap v1861: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 KiB/s wr, 140 op/s
Oct 02 20:06:48 compute-0 ovn_controller[88435]: 2025-10-02T20:06:48Z|00091|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:06:48 compute-0 ovn_controller[88435]: 2025-10-02T20:06:48Z|00092|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:06:48 compute-0 ovn_controller[88435]: 2025-10-02T20:06:48Z|00093|binding|INFO|Releasing lport 406bcb5b-e20c-483d-9dc9-ab2e2e75e0f6 from this chassis (sb_readonly=0)
Oct 02 20:06:48 compute-0 ovn_controller[88435]: 2025-10-02T20:06:48Z|00094|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:06:48 compute-0 nova_compute[355794]: 2025-10-02 20:06:48.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:49 compute-0 nova_compute[355794]: 2025-10-02 20:06:49.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 KiB/s wr, 104 op/s
Oct 02 20:06:50 compute-0 nova_compute[355794]: 2025-10-02 20:06:50.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:50 compute-0 nova_compute[355794]: 2025-10-02 20:06:50.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:50 compute-0 ceph-mon[191910]: pgmap v1862: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 KiB/s wr, 104 op/s
Oct 02 20:06:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 3.3 KiB/s wr, 38 op/s
Oct 02 20:06:51 compute-0 podman[452813]: 2025-10-02 20:06:51.731813455 +0000 UTC m=+0.138234515 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Oct 02 20:06:51 compute-0 podman[452812]: 2025-10-02 20:06:51.738705986 +0000 UTC m=+0.157060719 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:06:52 compute-0 ceph-mon[191910]: pgmap v1863: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 375 KiB/s rd, 3.3 KiB/s wr, 38 op/s
Oct 02 20:06:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 02 20:06:54 compute-0 nova_compute[355794]: 2025-10-02 20:06:54.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:54 compute-0 ceph-mon[191910]: pgmap v1864: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 02 20:06:55 compute-0 nova_compute[355794]: 2025-10-02 20:06:55.038 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435600.0363448, 018136d8-a19a-40ff-a7fb-72157ab8d8b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:06:55 compute-0 nova_compute[355794]: 2025-10-02 20:06:55.038 2 INFO nova.compute.manager [-] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] VM Stopped (Lifecycle Event)
Oct 02 20:06:55 compute-0 nova_compute[355794]: 2025-10-02 20:06:55.068 2 DEBUG nova.compute.manager [None req-f77ea1fa-c692-45bd-a215-f832245e0e25 - - - - - -] [instance: 018136d8-a19a-40ff-a7fb-72157ab8d8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:06:55 compute-0 nova_compute[355794]: 2025-10-02 20:06:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:55 compute-0 nova_compute[355794]: 2025-10-02 20:06:55.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:06:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 02 20:06:56 compute-0 ceph-mon[191910]: pgmap v1865: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 02 20:06:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:06:58 compute-0 podman[452853]: 2025-10-02 20:06:58.705005109 +0000 UTC m=+0.119440282 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:06:58 compute-0 podman[452852]: 2025-10-02 20:06:58.712864812 +0000 UTC m=+0.122656443 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 20:06:58 compute-0 ceph-mon[191910]: pgmap v1866: 321 pgs: 321 active+clean; 324 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:06:59 compute-0 nova_compute[355794]: 2025-10-02 20:06:59.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 02 20:06:59 compute-0 podman[157186]: time="2025-10-02T20:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:06:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49965 "" "Go-http-client/1.1"
Oct 02 20:06:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10469 "" "Go-http-client/1.1"
Oct 02 20:07:00 compute-0 nova_compute[355794]: 2025-10-02 20:07:00.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:00 compute-0 nova_compute[355794]: 2025-10-02 20:07:00.299 2 DEBUG nova.objects.instance [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lazy-loading 'flavor' on Instance uuid f8be75db-d124-4069-a573-db7410ea2b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:00 compute-0 nova_compute[355794]: 2025-10-02 20:07:00.346 2 DEBUG oslo_concurrency.lockutils [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:00 compute-0 nova_compute[355794]: 2025-10-02 20:07:00.348 2 DEBUG oslo_concurrency.lockutils [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:00 compute-0 podman[452889]: 2025-10-02 20:07:00.704227789 +0000 UTC m=+0.116311073 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:07:00 compute-0 podman[452890]: 2025-10-02 20:07:00.740602499 +0000 UTC m=+0.140936404 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:07:00 compute-0 podman[452891]: 2025-10-02 20:07:00.788108265 +0000 UTC m=+0.182128771 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 20:07:00 compute-0 ceph-mon[191910]: pgmap v1867: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 02 20:07:01 compute-0 openstack_network_exporter[372736]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:07:01 compute-0 openstack_network_exporter[372736]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:07:01 compute-0 openstack_network_exporter[372736]: ERROR   20:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:07:01 compute-0 openstack_network_exporter[372736]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:07:01 compute-0 openstack_network_exporter[372736]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
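[editor's note] These exporter errors recur throughout the log and are expected on a compute node: ovn-northd runs on the controllers, not here, and the exporter locates daemons by looking for their control socket files. A rough sketch of that lookup, assuming the conventional <daemon>.<pid>.ctl naming and the default run directories (/run/openvswitch for OVS, /run/ovn for OVN):

```python
import glob
import os

# Control sockets are created as <daemon>.<pid>.ctl in the daemon's run
# directory; both the naming and the directories are assumptions based on
# upstream defaults, not values taken from this host.
def find_ctl(daemon, rundir):
    matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
    return matches[0] if matches else None

for daemon, rundir in (
    ("ovs-vswitchd", "/run/openvswitch"),
    ("ovsdb-server", "/run/openvswitch"),
    ("ovn-northd", "/run/ovn"),
):
    ctl = find_ctl(daemon, rundir)
    print(daemon, "->", ctl or "no control socket found")
```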
Oct 02 20:07:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 3.2 KiB/s wr, 1 op/s
Oct 02 20:07:01 compute-0 nova_compute[355794]: 2025-10-02 20:07:01.811 2 DEBUG nova.network.neutron [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:07:01 compute-0 nova_compute[355794]: 2025-10-02 20:07:01.957 2 DEBUG nova.compute.manager [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:01 compute-0 nova_compute[355794]: 2025-10-02 20:07:01.958 2 DEBUG nova.compute.manager [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing instance network info cache due to event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:01 compute-0 nova_compute[355794]: 2025-10-02 20:07:01.959 2 DEBUG oslo_concurrency.lockutils [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.443 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.444 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.444 2 INFO nova.compute.manager [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Rebooting instance
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.477 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.478 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquired lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.479 2 DEBUG nova.network.neutron [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:07:02 compute-0 nova_compute[355794]: 2025-10-02 20:07:02.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:02 compute-0 podman[452949]: 2025-10-02 20:07:02.719885821 +0000 UTC m=+0.121697651 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
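[editor's note] The node_exporter container above runs with --collector.systemd constrained by a unit-include regex, so only matching systemd units are exported as metrics. A quick check of which unit names that filter accepts, with the regex copied from the config above (the shell-escaped \\. becomes \. here; fullmatch approximates the exporter's anchored matching):

```python
import re

# Pattern from --collector.systemd.unit-include in the node_exporter config.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ("openvswitch.service", "edpm_iscsid.service",
             "virtqemud.service", "sshd.service"):
    print(unit, "->", "kept" if unit_include.fullmatch(unit) else "filtered out")
```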
Oct 02 20:07:02 compute-0 podman[452948]: 2025-10-02 20:07:02.756337943 +0000 UTC m=+0.166864975 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 20:07:02 compute-0 ceph-mon[191910]: pgmap v1868: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 3.2 KiB/s wr, 1 op/s
Oct 02 20:07:03 compute-0 sudo[452987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:03 compute-0 sudo[452987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:03 compute-0 sudo[452987]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:03 compute-0 sudo[453012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:07:03 compute-0 sudo[453012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:03 compute-0 sudo[453012]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:03 compute-0 sudo[453037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:03 compute-0 sudo[453037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:03 compute-0 sudo[453037]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:03 compute-0 nova_compute[355794]: 2025-10-02 20:07:03.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:03 compute-0 nova_compute[355794]: 2025-10-02 20:07:03.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:03 compute-0 sudo[453062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
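[editor's note] The sudo entries above are cephadm's host inventory: the orchestrator copies a cephadm binary under /var/lib/ceph/<fsid>/ and runs it with `ls`, which prints a JSON array describing every Ceph daemon deployed on the host. A hedged sketch of consuming that output (plain `cephadm ls` without the copied path, --image, or --timeout arguments; the field names are the commonly seen ones, not verified against this exact build):

```python
import json
import subprocess

# Run the inventory command as root, as the log shows cephadm doing.
out = subprocess.run(
    ["sudo", "cephadm", "ls"],
    check=True, capture_output=True, text=True,
).stdout

for daemon in json.loads(out):
    print(daemon.get("name"), daemon.get("state"), daemon.get("version"))
```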
Oct 02 20:07:03 compute-0 sudo[453062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.8 KiB/s wr, 1 op/s
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:07:03
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:03 compute-0 ceph-mon[191910]: pgmap v1869: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.8 KiB/s wr, 1 op/s
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.302 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.304 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
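[editor's note] The two manager messages above describe the polling executor: every pollster registered for the [pollsters] source is submitted to a thread pool, and with only one worker thread the whole batch is effectively serialized, hence the warning about longer runs. A toy illustration of that effect, not ceilometer's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

# More tasks than workers: with max_workers=1 the pool accepts every
# submission but runs them one after another on the single thread.
pollsters = [f"pollster-{i}" for i in range(30)]
with ThreadPoolExecutor(max_workers=1) as executor:
    results = list(executor.map(str.upper, pollsters))
print(f"{len(results)} pollsters processed on 1 worker thread")
```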
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.313 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f8be75db-d124-4069-a573-db7410ea2b5e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:07:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:04.315 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f8be75db-d124-4069-a573-db7410ea2b5e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:07:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:07:04 compute-0 nova_compute[355794]: 2025-10-02 20:07:04.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:04 compute-0 podman[453162]: 2025-10-02 20:07:04.511515771 +0000 UTC m=+0.132573220 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 20:07:04 compute-0 nova_compute[355794]: 2025-10-02 20:07:04.599 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:04 compute-0 nova_compute[355794]: 2025-10-02 20:07:04.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:07:04 compute-0 podman[453162]: 2025-10-02 20:07:04.617822952 +0000 UTC m=+0.238880411 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:07:04 compute-0 nova_compute[355794]: 2025-10-02 20:07:04.753 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:07:04 compute-0 nova_compute[355794]: 2025-10-02 20:07:04.996 2 DEBUG nova.network.neutron [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.032 2 DEBUG oslo_concurrency.lockutils [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.033 2 DEBUG nova.compute.manager [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.033 2 DEBUG nova.compute.manager [None req-2199aa8c-d8aa-4876-b62d-708cbcd9a688 b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] network_info to inject: |[{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
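[editor's note] The network_info payload injected above is a JSON list of VIFs, each carrying its network's subnets, the fixed IPs, and any attached floating IPs. A parse of the payload as logged, trimmed to just the fields used here:

```python
import json

# A trimmed copy of the network_info blob from the log lines above.
blob = '''[{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399",
            "network": {"subnets": [{"cidr": "10.100.0.0/28",
              "ips": [{"address": "10.100.0.4", "floating_ips": []},
                      {"address": "10.100.0.8",
                       "floating_ips": [{"address": "192.168.122.175"}]}]}]}}]'''

for vif in json.loads(blob):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip["floating_ips"]]
            print(vif["id"], ip["address"], "floating:", floats or "-")
```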
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.035 2 DEBUG oslo_concurrency.lockutils [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.035 2 DEBUG nova.network.neutron [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.216 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Thu, 02 Oct 2025 20:07:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e3e37596-d9cf-43d6-a8ce-8d065921ffe4 x-openstack-request-id: req-e3e37596-d9cf-43d6-a8ce-8d065921ffe4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.217 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f8be75db-d124-4069-a573-db7410ea2b5e", "name": "tempest-AttachInterfacesUnderV243Test-server-1217822364", "status": "ACTIVE", "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "user_id": "b0d30c42cdda433ebd7d28421e967748", "metadata": {}, "hostId": "4acab9a5bbd23f0156714dae7305efdffd9219cc74419a5df1772aca", "image": {"id": "2881b8cb-4cad-4124-8a6e-ae21054c9692", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2881b8cb-4cad-4124-8a6e-ae21054c9692"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:05:32Z", "updated": "2025-10-02T20:05:50Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1770026923-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:5e:6a"}, {"version": 4, "addr": "192.168.122.175", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:5e:6a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f8be75db-d124-4069-a573-db7410ea2b5e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f8be75db-d124-4069-a573-db7410ea2b5e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-459640716", "OS-SRV-USG:launched_at": "2025-10-02T20:05:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1515706313"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.217 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f8be75db-d124-4069-a573-db7410ea2b5e used request id req-e3e37596-d9cf-43d6-a8ce-8d065921ffe4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.218 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f8be75db-d124-4069-a573-db7410ea2b5e', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1217822364', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cca17e8f28a243bcaf58d01bf55608e9', 'user_id': 'b0d30c42cdda433ebd7d28421e967748', 'hostId': '4acab9a5bbd23f0156714dae7305efdffd9219cc74419a5df1772aca', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.223 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.225 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:07:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:05.226 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:07:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.2 KiB/s wr, 1 op/s
Oct 02 20:07:05 compute-0 nova_compute[355794]: 2025-10-02 20:07:05.729 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:05 compute-0 sudo[453062]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:07:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:07:05 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:05 compute-0 sudo[453314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:05 compute-0 sudo[453314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:05 compute-0 sudo[453314]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:06 compute-0 sudo[453339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:07:06 compute-0 sudo[453339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:06 compute-0 sudo[453339]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:06 compute-0 sudo[453364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:06 compute-0 sudo[453364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:06 compute-0 sudo[453364]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.260 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1981 Content-Type: application/json Date: Thu, 02 Oct 2025 20:07:05 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c1becc3b-dc8f-4796-8863-0e5633e38268 x-openstack-request-id: req-c1becc3b-dc8f-4796-8863-0e5633e38268 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.261 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9", "name": "tempest-ServerActionsTestJSON-server-521568053", "status": "HARD_REBOOT", "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "user_id": "f962d436a03a4b70951908eb9f826d11", "metadata": {}, "hostId": "a7d4273d7d5652a69a2cfec1ecefc1a7232fd88fac2086ddca0d2065", "image": {"id": "2881b8cb-4cad-4124-8a6e-ae21054c9692", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2881b8cb-4cad-4124-8a6e-ae21054c9692"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:05:34Z", "updated": "2025-10-02T20:07:02Z", "addresses": {"tempest-ServerActionsTestJSON-226494039-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:eb:42:64"}, {"version": 4, "addr": "192.168.122.218", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:eb:42:64"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9"}], "OS-DCF:diskConfig": "MANUAL", "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1325070434", "OS-SRV-USG:launched_at": "2025-10-02T20:05:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--538174358"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "rebooting_hard", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.261 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 used request id req-c1becc3b-dc8f-4796-8863-0e5633e38268 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.265 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9', 'name': 'tempest-ServerActionsTestJSON-server-521568053', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0db170bd1e464f2ea61c24a9079861a4', 'user_id': 'f962d436a03a4b70951908eb9f826d11', 'hostId': 'a7d4273d7d5652a69a2cfec1ecefc1a7232fd88fac2086ddca0d2065', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.271 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a6e095a0-cb58-430d-9347-4aab385c6e69 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:07:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:06.273 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a6e095a0-cb58-430d-9347-4aab385c6e69 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.322 2 DEBUG nova.network.neutron [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.342 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Releasing lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.347 2 DEBUG nova.compute.manager [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:06 compute-0 sudo[453389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:07:06 compute-0 sudo[453389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:07:06 compute-0 kernel: tap668a7aea-bc (unregistering): left promiscuous mode
Oct 02 20:07:06 compute-0 NetworkManager[44968]: <info>  [1759435626.6107] device (tap668a7aea-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 ovn_controller[88435]: 2025-10-02T20:07:06Z|00095|binding|INFO|Releasing lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 from this chassis (sb_readonly=0)
Oct 02 20:07:06 compute-0 ovn_controller[88435]: 2025-10-02T20:07:06Z|00096|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 down in Southbound
Oct 02 20:07:06 compute-0 ovn_controller[88435]: 2025-10-02T20:07:06Z|00097|binding|INFO|Removing iface tap668a7aea-bc ovn-installed in OVS
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:06.643 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:42:64 10.100.0.13'], port_security=['fa:16:3e:eb:42:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59c91fb-efec-4ddf-b699-e072223ea127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0db170bd1e464f2ea61c24a9079861a4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f24334a9-c477-489f-956b-2cd2adaeee19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34cee90-562d-4e73-b869-f45c74e302ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=668a7aea-bc00-4cac-b1dd-b0786e76c474) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:06.644 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 668a7aea-bc00-4cac-b1dd-b0786e76c474 in datapath c59c91fb-efec-4ddf-b699-e072223ea127 unbound from our chassis
Oct 02 20:07:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:06.647 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c59c91fb-efec-4ddf-b699-e072223ea127, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:07:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:06.648 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[72611bc9-475f-45a0-9789-430a50c9e442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:06.655 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 namespace which is not needed anymore
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 02 20:07:06 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 44.814s CPU time.
Oct 02 20:07:06 compute-0 systemd-machined[137646]: Machine qemu-7-instance-00000007 terminated.
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.772 2 INFO nova.virt.libvirt.driver [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance destroyed successfully.
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.773 2 DEBUG nova.objects.instance [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'resources' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.790 2 DEBUG nova.virt.libvirt.vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:05:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.790 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.791 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.791 2 DEBUG os_vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap668a7aea-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:06 compute-0 ceph-mon[191910]: pgmap v1870: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.2 KiB/s wr, 1 op/s
Oct 02 20:07:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:06 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.811 2 INFO os_vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc')
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.823 2 DEBUG nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start _get_guest_xml network_info=[{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.833 2 WARNING nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.839 2 DEBUG nova.virt.libvirt.host [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.839 2 DEBUG nova.virt.libvirt.host [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.843 2 DEBUG nova.virt.libvirt.host [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.844 2 DEBUG nova.virt.libvirt.host [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.844 2 DEBUG nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.844 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.844 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.844 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.845 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.846 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.846 2 DEBUG nova.virt.hardware [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.846 2 DEBUG nova.objects.instance [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:06 compute-0 nova_compute[355794]: 2025-10-02 20:07:06.865 2 DEBUG oslo_concurrency.processutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:06 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [NOTICE]   (450498) : haproxy version is 2.8.14-c23fe91
Oct 02 20:07:06 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [NOTICE]   (450498) : path to executable is /usr/sbin/haproxy
Oct 02 20:07:06 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [WARNING]  (450498) : Exiting Master process...
Oct 02 20:07:06 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [ALERT]    (450498) : Current worker (450500) exited with code 143 (Terminated)
Oct 02 20:07:06 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[450474]: [WARNING]  (450498) : All workers exited. Exiting... (0)
Oct 02 20:07:06 compute-0 systemd[1]: libpod-3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3.scope: Deactivated successfully.
Oct 02 20:07:06 compute-0 podman[453461]: 2025-10-02 20:07:06.973313488 +0000 UTC m=+0.112449668 container died 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.007 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3-userdata-shm.mount: Deactivated successfully.
Oct 02 20:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-566b20235485254baaf2d70417afc43c003f5ea98d7dc78a03785b5aa169a59e-merged.mount: Deactivated successfully.
Oct 02 20:07:07 compute-0 podman[453461]: 2025-10-02 20:07:07.039125397 +0000 UTC m=+0.178261567 container cleanup 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:07:07 compute-0 systemd[1]: libpod-conmon-3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3.scope: Deactivated successfully.
Oct 02 20:07:07 compute-0 sudo[453389]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.086 2 DEBUG nova.network.neutron [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updated VIF entry in instance network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.086 2 DEBUG nova.network.neutron [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.106 2 DEBUG oslo_concurrency.lockutils [req-ec507ae7-fa4e-4a4b-9570-605a3e7320a4 req-ee735a69-7672-419b-a42e-c5057ff44a3c 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.106 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.107 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:07:07 compute-0 podman[453517]: 2025-10-02 20:07:07.150478819 +0000 UTC m=+0.077855856 container remove 3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c9818814-10bd-4fae-9958-5411ed5e5491 does not exist
Oct 02 20:07:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a54e2ecb-0dae-4682-8106-46a6ca814293 does not exist
Oct 02 20:07:07 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3c372dae-a2b1-45f3-a1e6-3427b5d375c2 does not exist
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.165 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a3485325-fe04-48c3-9cce-e7b70a389faf]: (4, ('Thu Oct  2 08:07:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 (3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3)\n3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3\nThu Oct  2 08:07:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 (3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3)\n3e08790c9152141ff885002a951013446b35450a856387bde42e8f2f2465b6c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.171 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[41b98783-14c3-4f90-a470-fcd1255c79c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.173 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59c91fb-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:07 compute-0 kernel: tapc59c91fb-e0: left promiscuous mode
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.189 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[86d84ecb-ea48-4936-bc45-0414ad3994da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.206 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c0c17c-72bc-4112-9d85-e825bb5a4ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.208 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[6003094c-6787-4237-a9ff-5199091f8299]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.232 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[dd100a1c-b8e0-41f2-ba16-7a1cf53daa94]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672070, 'reachable_time': 31314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 453536, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 systemd[1]: run-netns-ovnmeta\x2dc59c91fb\x2defec\x2d4ddf\x2db699\x2de072223ea127.mount: Deactivated successfully.
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.238 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:07:07 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:07.239 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[45fd0eb3-3278-4f60-a0ad-b9674e654980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.247 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1974 Content-Type: application/json Date: Thu, 02 Oct 2025 20:07:06 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-156a39f3-5b1f-4769-8a03-eba4c87e26f8 x-openstack-request-id: req-156a39f3-5b1f-4769-8a03-eba4c87e26f8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.248 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a6e095a0-cb58-430d-9347-4aab385c6e69", "name": "tempest-TestNetworkBasicOps-server-332031272", "status": "ACTIVE", "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "user_id": "e87db118c0374d50a374f0ceaf961159", "metadata": {}, "hostId": "267c0e61767297168b60fa0a1dd987f3002d763bbf8b1807b6543847", "image": {"id": "2881b8cb-4cad-4124-8a6e-ae21054c9692", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2881b8cb-4cad-4124-8a6e-ae21054c9692"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:06:18Z", "updated": "2025-10-02T20:06:31Z", "addresses": {"tempest-network-smoke--797126595": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:72:8b"}, {"version": 4, "addr": "192.168.122.239", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:72:8b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a6e095a0-cb58-430d-9347-4aab385c6e69"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a6e095a0-cb58-430d-9347-4aab385c6e69"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1755477534", "OS-SRV-USG:launched_at": "2025-10-02T20:06:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1585688900"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.248 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a6e095a0-cb58-430d-9347-4aab385c6e69 used request id req-156a39f3-5b1f-4769-8a03-eba4c87e26f8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
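The RESP / RESP BODY / GET lines above are ceilometer's compute discovery refreshing instance metadata through python-novaclient on a keystoneauth1 session (the session is what emits the _http_log_response debug output). A minimal sketch of that call chain; the auth URL and credentials are placeholders, only the server UUID comes from the log:

```python
# Sketch only: endpoint and credentials are assumed, not from the log.
from keystoneauth1 import identity, session
from novaclient import client

auth = identity.Password(
    auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed
    username='ceilometer', password='secret',                    # placeholders
    project_name='service',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)          # logs every request/response pair
nova = client.Client('2.1', session=sess)  # matches OpenStack-API-Version: compute 2.1
server = nova.servers.get('a6e095a0-cb58-430d-9347-4aab385c6e69')
print(server.name, server.status)          # tempest-TestNetworkBasicOps-... ACTIVE
```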
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.251 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6e095a0-cb58-430d-9347-4aab385c6e69', 'name': 'tempest-TestNetworkBasicOps-server-332031272', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a7c52835a9494ea98fd26390771eb77f', 'user_id': 'e87db118c0374d50a374f0ceaf961159', 'hostId': '267c0e61767297168b60fa0a1dd987f3002d763bbf8b1807b6543847', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.251 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.252 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.252 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.252 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:07:07.252296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 sudo[453528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:07 compute-0 sudo[453528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:07 compute-0 sudo[453528]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.307 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.requests volume: 1142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.308 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.335 2 DEBUG nova.compute.manager [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.336 2 DEBUG oslo_concurrency.lockutils [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.336 2 DEBUG oslo_concurrency.lockutils [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.337 2 DEBUG oslo_concurrency.lockutils [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.337 2 DEBUG nova.compute.manager [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.337 2 WARNING nova.compute.manager [req-4d18380e-46fa-4935-bfce-3b283c98674e req-b16a658f-348f-4f9f-af5a-dbd3b2dcb49e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state reboot_started_hard.
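The five nova-compute entries above show the external-event path: neutron reports network-vif-unplugged, nova takes the per-instance "<uuid>-events" lock to pop a matching waiter, finds none, and downgrades the event to a warning because the hard reboot (task_state reboot_started_hard) unplugged the vif with nothing waiting on it. A minimal sketch of the lock-and-pop pattern, assuming oslo_concurrency; the event table here is an illustrative stand-in, not nova's real structure:

```python
# Sketch only: waiting_events is a hypothetical {uuid: {event_name: event}} dict.
from oslo_concurrency import lockutils

def pop_instance_event(instance_uuid, event_name, waiting_events):
    # Same "<uuid>-events" lock-name shape as the Acquiring/acquired lines above.
    with lockutils.lock('%s-events' % instance_uuid):
        event = waiting_events.get(instance_uuid, {}).pop(event_name, None)
    if event is None:
        # "No waiting events found dispatching ..." -> the WARNING above.
        return None
    return event
```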
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.352 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.353 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.353 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.355 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
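instance-00000007 (cc92ea21-...) has just been destroyed on this host, so every pollster that still carries it from the last discovery pass hits libvirt error code 42; the same QEMU-driver/pollster line pair repeats below once per meter until discovery runs again. A minimal sketch of the failing lookup, assuming the libvirt-python binding:

```python
# Sketch only: requires the libvirt-python binding and a local libvirtd/QEMU.
import libvirt

def get_domain(conn, uuid):
    try:
        return conn.lookupByUUIDString(uuid)
    except libvirt.libvirtError as e:
        # VIR_ERR_NO_DOMAIN == 42, the "Domain not found" code in the log
        if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            return None  # guest vanished between discovery and this poll
        raise

conn = libvirt.openReadOnly('qemu:///system')
print(get_domain(conn, 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'))  # None here
```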
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1010957087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.399 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.requests volume: 900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.399 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.requests volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:07:07.409730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.414 2 DEBUG oslo_concurrency.processutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
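Interleaved with the polling loop, nova-compute's Ceph-backed volume code is refreshing the monmap; the ceph-mon handle_command/audit lines above show the same "mon dump" arriving at the monitor, and this entry shows it completing in 0.549s. A minimal sketch of the command and its JSON output, using plain subprocess in place of the oslo_concurrency.processutils wrapper named in the log:

```python
# Sketch only: needs the ceph CLI plus the client.openstack keyring/conf.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True).stdout
monmap = json.loads(out)
print(monmap['epoch'], [m['name'] for m in monmap['mons']])
```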
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.425 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 sudo[453556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.425 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 sudo[453556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:07 compute-0 sudo[453556]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.446 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.446 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.446 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.447 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.461 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.463 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.bytes volume: 72978432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.463 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.463 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.463 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.463 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.465 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.465 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.bytes volume: 25628672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.465 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:07:07.462907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.466 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.latency volume: 9143761405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.467 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:07:07.466707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.467 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.468 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.468 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.latency volume: 6876004974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.468 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:07:07.469943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.484 2 DEBUG nova.objects.instance [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lazy-loading 'flavor' on Instance uuid f8be75db-d124-4069-a573-db7410ea2b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.494 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.507 2 DEBUG oslo_concurrency.processutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.517 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 sudo[453598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:07 compute-0 sudo[453598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:07 compute-0 sudo[453598]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.548 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.550 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.550 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.550 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.551 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.551 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.549 2 DEBUG oslo_concurrency.lockutils [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:07:07.550073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.552 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.553 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:07:07.553195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.556 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f8be75db-d124-4069-a573-db7410ea2b5e / tap6e6c016d-90 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.556 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.560 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.561 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a6e095a0-cb58-430d-9347-4aab385c6e69 / tap4af10480-1b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
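All of the network.incoming.bytes.delta samples above are 0 because this is the first poll for these vNICs ("No delta meter predecessor"): a delta meter is the difference between consecutive cumulative counter readings, so it can only be non-zero once a predecessor is cached. A minimal sketch of that bookkeeping; the cache layout is illustrative (ceilometer keeps it inside its libvirt inspector):

```python
# Sketch only: keys and cache layout are illustrative.
_prev = {}  # (instance_uuid, device) -> last cumulative reading

def delta_sample(key, cumulative):
    last = _prev.get(key)
    _prev[key] = cumulative
    if last is None or cumulative < last:  # no predecessor, or counter reset
        return 0
    return cumulative - last

print(delta_sample(('f8be75db', 'tap6e6c016d-90'), 12345))  # 0: first poll
print(delta_sample(('f8be75db', 'tap6e6c016d-90'), 12400))  # 55 on the next poll
```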
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.564 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:07:07.564937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.566 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T20:07:07.566094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.566 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1217822364>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-521568053>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-332031272>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1217822364>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-521568053>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-332031272>]
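network.incoming.bytes.rate is one of the legacy *.rate meters the libvirt inspector no longer computes, so instead of failing on every cycle the pollster raises PollsterPermanentError (class path taken from the traceback above) and the manager blacklists those three servers for that pollster. A minimal, hypothetical sketch of the contract, assuming a ceilometer installation; the guard condition and function shape are illustrative:

```python
# Sketch only: illustrative pollster logic, not ceilometer's actual code.
from ceilometer.polling import plugin_base

def get_samples(resources, rate_data_available=False):
    if not rate_data_available:
        # "LibvirtInspector does not provide data ..." -> give up permanently;
        # the polling manager catches this and stops offering these resources.
        raise plugin_base.PollsterPermanentError(resources)
    return []
```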
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:07:07.567338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:07:07.568407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.568 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.569 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.569 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.570 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:07:07.570331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.571 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.571 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:07:07.572298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.572 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.574 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.574 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:07:07.575268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.576 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.576 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.bytes volume: 31091200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.577 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.578 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.578 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.578 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:07:07.577321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.579 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.579 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.bytes volume: 26926592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.580 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.bytes volume: 55474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:07:07.581483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.581 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.582 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.582 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.583 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:07:07.583331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.584 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.584 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.584 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.585 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.585 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.586 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1217822364>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-521568053>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-332031272>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1217822364>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-521568053>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-332031272>]
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T20:07:07.586864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:07:07.587758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.588 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/memory.usage volume: 42.86328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.588 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.589 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.589 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/memory.usage volume: 40.41015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.590 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.incoming.bytes volume: 4643 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:07:07.590491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.591 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.592 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.593 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.593 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.593 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.594 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:07:07.592824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:07:07.594730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.595 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.596 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.596 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.596 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.597 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:07:07.597486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.599 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.600 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.601 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:07:07.600710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.601 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.601 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.602 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.603 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/cpu volume: 34020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.603 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 54920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:07:07.603080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/cpu volume: 33940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.latency volume: 2691292999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:07:07.605014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 14 DEBUG ceilometer.compute.pollsters [-] f8be75db-d124-4069-a573-db7410ea2b5e/disk.device.read.latency volume: 203040318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.605 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.606 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples: Error from libvirt while looking up instance <name=instance-00000007, id=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9>: [Error Code 42] Domain not found: no domain with matching uuid 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.606 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.latency volume: 2205228177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.607 14 DEBUG ceilometer.compute.pollsters [-] a6e095a0-cb58-430d-9347-4aab385c6e69/disk.device.read.latency volume: 22709059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:07:07 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:07:07.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
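The DEBUG burst above closes one Ceilometer polling cycle on this node; each configured pollster logs a "Finished processing" line within a few milliseconds. A minimal, hypothetical Python sketch (not part of Ceilometer; the file name is an assumption) for tallying those completions from a saved journal dump:

    # Hypothetical helper: tally "Finished processing pollster [...]" lines.
    import re
    from collections import Counter

    PATTERN = re.compile(r"Finished processing pollster \[([\w.]+)\]")

    def count_pollsters(journal_lines):
        """Return a Counter mapping meter name -> completions seen."""
        counts = Counter()
        for line in journal_lines:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
        return counts

    # Usage (assumed file name): count_pollsters(open("compute-0.journal"))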
Oct 02 20:07:07 compute-0 sudo[453627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:07:07 compute-0 sudo[453627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
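The sudo line above shows cephadm (running as ceph-admin) launching a containerized ceph-volume "lvm batch" across three pre-created logical volumes. An illustrative, hypothetical parser that pulls the Ceph fsid and device list out of such a COMMAND string; the literal values are copied from the log:

    import re

    def parse_cephadm_ceph_volume(cmd):
        """Extract the Ceph fsid and the LV paths from a cephadm command line."""
        fsid = re.search(r"--fsid ([0-9a-f-]+)", cmd)
        devices = re.findall(r"/dev/\S+", cmd)
        return (fsid.group(1) if fsid else None), devices

    cmd = ("ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 "
           "--config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 "
           "/dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd")
    print(parse_cephadm_ceph_volume(cmd))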
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1010957087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2488610458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.985 2 DEBUG oslo_concurrency.processutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
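nova-compute shells out to the stock Ceph CLI here to discover monitor addresses before building the guest disk XML. A minimal sketch of the same lookup, assuming the "openstack" client keyring and /etc/ceph/ceph.conf are readable on this host:

    import json
    import subprocess

    def ceph_mon_dump():
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # The monitor addresses feed the <host name=.../> elements in the disk XML.
    monmap = ceph_mon_dump()
    print([mon.get("public_addr") for mon in monmap.get("mons", [])])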
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.986 2 DEBUG nova.virt.libvirt.vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:05:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.986 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.988 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:07 compute-0 nova_compute[355794]: 2025-10-02 20:07:07.990 2 DEBUG nova.objects.instance [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.004 2 DEBUG nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <uuid>cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</uuid>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <name>instance-00000007</name>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:name>tempest-ServerActionsTestJSON-server-521568053</nova:name>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:07:06</nova:creationTime>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:user uuid="f962d436a03a4b70951908eb9f826d11">tempest-ServerActionsTestJSON-872820255-project-member</nova:user>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:project uuid="0db170bd1e464f2ea61c24a9079861a4">tempest-ServerActionsTestJSON-872820255</nova:project>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <nova:port uuid="668a7aea-bc00-4cac-b1dd-b0786e76c474">
Oct 02 20:07:08 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <system>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="serial">cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="uuid">cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </system>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <os>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </os>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <features>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </features>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk">
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_disk.config">
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:08 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:eb:42:64"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <target dev="tap668a7aea-bc"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9/console.log" append="off"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <video>
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </video>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <input type="keyboard" bus="usb"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:07:08 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:07:08 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:07:08 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:07:08 compute-0 nova_compute[355794]: </domain>
Oct 02 20:07:08 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
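The _get_guest_xml dump above is the complete libvirt domain definition nova hands to the hypervisor for this hard reboot. A short sketch, assuming the libvirt Python bindings (python3-libvirt) are installed, that reads the same XML back from the running domain:

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName("instance-00000007")
        print(dom.XMLDesc(0))  # should match the <domain type="kvm"> dump above
    finally:
        conn.close()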
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.005 2 DEBUG nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.005 2 DEBUG nova.virt.libvirt.driver [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.006 2 DEBUG nova.virt.libvirt.vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:05:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.006 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.007 2 DEBUG nova.network.os_vif_util [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.007 2 DEBUG os_vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap668a7aea-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap668a7aea-bc, col_values=(('external_ids', {'iface-id': '668a7aea-bc00-4cac-b1dd-b0786e76c474', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:42:64', 'vm-uuid': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
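The two ovsdbapp transactions above add the tap port to br-int and stamp the Interface row with the Neutron port metadata OVN matches on. Roughly the same effect with the stock ovs-vsctl CLI, shown as an illustrative subprocess sketch with values copied from the log:

    import subprocess

    port = "tap668a7aea-bc"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=668a7aea-bc00-4cac-b1dd-b0786e76c474",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:eb:42:64",
         "external_ids:vm-uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9"],
        check=True,
    )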
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.0178] manager: (tap668a7aea-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.035 2 INFO os_vif [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc')
Oct 02 20:07:08 compute-0 kernel: tap668a7aea-bc: entered promiscuous mode
Oct 02 20:07:08 compute-0 systemd-udevd[453430]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.1429] manager: (tap668a7aea-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00098|binding|INFO|Claiming lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 for this chassis.
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00099|binding|INFO|668a7aea-bc00-4cac-b1dd-b0786e76c474: Claiming fa:16:3e:eb:42:64 10.100.0.13
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.1578] device (tap668a7aea-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.156 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:42:64 10.100.0.13'], port_security=['fa:16:3e:eb:42:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59c91fb-efec-4ddf-b699-e072223ea127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0db170bd1e464f2ea61c24a9079861a4', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f24334a9-c477-489f-956b-2cd2adaeee19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34cee90-562d-4e73-b869-f45c74e302ff, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=668a7aea-bc00-4cac-b1dd-b0786e76c474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.157 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 668a7aea-bc00-4cac-b1dd-b0786e76c474 in datapath c59c91fb-efec-4ddf-b699-e072223ea127 bound to our chassis
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.159 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.1605] device (tap668a7aea-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00100|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 ovn-installed in OVS
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00101|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 up in Southbound
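ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound database. A hedged check of that state via ovn-sbctl (output layout varies by OVN version):

    import subprocess

    lport = "668a7aea-bc00-4cac-b1dd-b0786e76c474"
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        check=True,
    )  # the chassis and up columns should reflect the claim logged above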
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.175 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6d3ccc-9429-4118-b668-2b82750522b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.176 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc59c91fb-e1 in ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.179 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc59c91fb-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.179 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[c1682e31-8836-4875-9116-a9a59906ee05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.182 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[dbec2b47-5276-4c5c-a7c9-134206287eb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.199 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[b5dca8ee-2156-4b42-b220-eae60098b231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.205215713 +0000 UTC m=+0.089497241 container create a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:07:08 compute-0 systemd-machined[137646]: New machine qemu-10-instance-00000007.
Oct 02 20:07:08 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000007.
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.229 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1e80467a-2d21-4db9-9f51-49e86d607684]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.179570999 +0000 UTC m=+0.063852517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.269 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[07517db2-1518-431e-b7dd-14092e1fa7b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 systemd[1]: Started libpod-conmon-a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6.scope.
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.2799] manager: (tapc59c91fb-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.278 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[350dc5e3-e5df-4406-96d8-dd6141f40549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.325 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[239b7ff6-0741-497b-b9c3-edfc663c8291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.33176794 +0000 UTC m=+0.216049498 container init a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.335 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe7c571-19f8-433d-9f1c-1bfee89b4282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.342817433 +0000 UTC m=+0.227098951 container start a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.347618819 +0000 UTC m=+0.231900377 container attach a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:07:08 compute-0 exciting_meninsky[453753]: 167 167
Oct 02 20:07:08 compute-0 systemd[1]: libpod-a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6.scope: Deactivated successfully.
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.353517589 +0000 UTC m=+0.237799107 container died a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.3626] device (tapc59c91fb-e0): carrier: link connected
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.371 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[54bb1a44-05f3-4db3-916a-4e11fa487582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b739f503de5bee773da2f91e93d85681de6f42e592cf949325db02fd6cfa1bc1-merged.mount: Deactivated successfully.
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.396 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5c8123-15ab-446e-892a-0296e4a8e01e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59c91fb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:05:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679993, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 453786, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 podman[453717]: 2025-10-02 20:07:08.407455706 +0000 UTC m=+0.291737224 container remove a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
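The podman lines above trace a one-shot cephadm helper container (auto-named exciting_meninsky): create, init, start, attach, died, and remove within roughly 0.2 seconds. An illustrative sketch that replays such a lifecycle from podman's event log; the name was auto-generated and the container has since been removed:

    import subprocess

    subprocess.run(
        ["podman", "events", "--filter", "container=exciting_meninsky",
         "--since", "5m", "--stream=false"],
        check=True,
    )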
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.429 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1fc837-9d58-431d-adb1-45670ba18934]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:5b0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679993, 'tstamp': 679993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 453791, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 systemd[1]: libpod-conmon-a371efa171028c0e8b088c5bcf972e3c6b986369a871aedc415f9bc189888bf6.scope: Deactivated successfully.
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.453 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[849aff8b-10a7-4985-8612-e2166b81490f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59c91fb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:05:b0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679993, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 453794, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:72:8b 10.100.0.7
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:72:8b 10.100.0.7
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.490 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bceaf0-0ed7-4a3e-adc5-38636c681330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.588 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0590a1-dd5e-4b31-a92e-927d3cd54881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.590 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59c91fb-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.591 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.592 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc59c91fb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:08 compute-0 NetworkManager[44968]: <info>  [1759435628.5957] manager: (tapc59c91fb-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 02 20:07:08 compute-0 kernel: tapc59c91fb-e0: entered promiscuous mode
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.603 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc59c91fb-e0, col_values=(('external_ids', {'iface-id': 'b59aad26-fd1d-4c37-adbd-b18497c4c15f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:08 compute-0 ovn_controller[88435]: 2025-10-02T20:07:08Z|00102|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.631 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 nova_compute[355794]: 2025-10-02 20:07:08.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.634 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[21a74caf-ce54-4bea-b75b-9101f569745a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.635 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/c59c91fb-efec-4ddf-b699-e072223ea127.pid.haproxy
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID c59c91fb-efec-4ddf-b699-e072223ea127
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:07:08 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:08.635 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'env', 'PROCESS_TAG=haproxy-c59c91fb-efec-4ddf-b699-e072223ea127', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c59c91fb-efec-4ddf-b699-e072223ea127.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 20:07:08 compute-0 podman[453839]: 2025-10-02 20:07:08.649268161 +0000 UTC m=+0.056571157 container create e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 20:07:08 compute-0 systemd[1]: Started libpod-conmon-e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2.scope.
Oct 02 20:07:08 compute-0 podman[453839]: 2025-10-02 20:07:08.623818481 +0000 UTC m=+0.031121497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:08 compute-0 ceph-mon[191910]: pgmap v1871: 321 pgs: 321 active+clean; 324 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 02 20:07:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2488610458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:08 compute-0 podman[453839]: 2025-10-02 20:07:08.814237113 +0000 UTC m=+0.221540129 container init e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:07:08 compute-0 podman[453839]: 2025-10-02 20:07:08.831324769 +0000 UTC m=+0.238627765 container start e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:07:08 compute-0 podman[453839]: 2025-10-02 20:07:08.836480012 +0000 UTC m=+0.243783058 container attach e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 20:07:09 compute-0 podman[453895]: 2025-10-02 20:07:09.146458128 +0000 UTC m=+0.088332346 container create 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:07:09 compute-0 podman[453895]: 2025-10-02 20:07:09.102432138 +0000 UTC m=+0.044306356 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:07:09 compute-0 systemd[1]: Started libpod-conmon-9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9.scope.
Oct 02 20:07:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2edea184a0937526881ae6e3c691d6e61e3b7e2e863169a47af1c8eb310de6e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:09 compute-0 podman[453895]: 2025-10-02 20:07:09.287926613 +0000 UTC m=+0.229800791 container init 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 20:07:09 compute-0 podman[453895]: 2025-10-02 20:07:09.309427026 +0000 UTC m=+0.251301214 container start 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 20:07:09 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [NOTICE]   (453914) : New worker (453916) forked
Oct 02 20:07:09 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [NOTICE]   (453914) : Loading success.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.369 2 DEBUG nova.virt.libvirt.host [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Removed pending event for cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.370 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435629.3682563, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.370 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Resumed (Lifecycle Event)
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.377 2 DEBUG nova.compute.manager [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.385 2 INFO nova.virt.libvirt.driver [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance rebooted successfully.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.386 2 DEBUG nova.compute.manager [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.415 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.430 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.465 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.466 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435629.3783302, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.466 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Started (Lifecycle Event)
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.469 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.469 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.470 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.470 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.470 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.470 2 WARNING nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.470 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.471 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.471 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.471 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.471 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.472 2 WARNING nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.472 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.472 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.472 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.472 2 DEBUG oslo_concurrency.lockutils [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.473 2 DEBUG nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.473 2 WARNING nova.compute.manager [req-b58417c6-3576-47b9-b138-c45d9e1b34e5 req-a7147e99-f6c1-4b8a-bb27-358570ab868b 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.476 2 DEBUG oslo_concurrency.lockutils [None req-79eb07df-619b-483e-9b72-9fe161839dfc f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.496 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.503 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.507 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.527 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.527 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.528 2 DEBUG oslo_concurrency.lockutils [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.530 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.555 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.556 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.556 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.556 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:07:09 compute-0 nova_compute[355794]: 2025-10-02 20:07:09.556 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Oct 02 20:07:10 compute-0 sweet_pasteur[453868]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:07:10 compute-0 sweet_pasteur[453868]: --> relative data size: 1.0
Oct 02 20:07:10 compute-0 sweet_pasteur[453868]: --> All data devices are unavailable
Oct 02 20:07:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:07:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098343543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:10 compute-0 systemd[1]: libpod-e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2.scope: Deactivated successfully.
Oct 02 20:07:10 compute-0 systemd[1]: libpod-e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2.scope: Consumed 1.165s CPU time.
Oct 02 20:07:10 compute-0 podman[453839]: 2025-10-02 20:07:10.096059277 +0000 UTC m=+1.503362273 container died e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.129 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f278f0e223f04d3e3643fc17a10d539c85bce68bf15f4ddb4a14ec5cb28c7d6f-merged.mount: Deactivated successfully.
Oct 02 20:07:10 compute-0 podman[453839]: 2025-10-02 20:07:10.1661324 +0000 UTC m=+1.573435396 container remove e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 20:07:10 compute-0 systemd[1]: libpod-conmon-e14e99e62c719e2917b4375802d6985f0e337d3824a3f3b53a8c5030cebcb1f2.scope: Deactivated successfully.
Oct 02 20:07:10 compute-0 sudo[453627]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.277 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.278 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.282 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.290 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.290 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.294 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.295 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:07:10 compute-0 sudo[453983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:10 compute-0 sudo[453983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:10 compute-0 sudo[453983]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:10 compute-0 sudo[454008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:07:10 compute-0 sudo[454008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:10 compute-0 sudo[454008]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:10 compute-0 sudo[454033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:10 compute-0 sudo[454033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:10 compute-0 sudo[454033]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:10 compute-0 sudo[454058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:07:10 compute-0 sudo[454058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:10 compute-0 ceph-mon[191910]: pgmap v1872: 321 pgs: 321 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Oct 02 20:07:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1098343543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.956 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.958 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3192MB free_disk=59.82701110839844GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.958 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:10 compute-0 nova_compute[355794]: 2025-10-02 20:07:10.959 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.153 2 DEBUG nova.network.neutron [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.302 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.302 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f8be75db-d124-4069-a573-db7410ea2b5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.302 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.302 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance a6e095a0-cb58-430d-9347-4aab385c6e69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.303 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.303 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.305921198 +0000 UTC m=+0.082120369 container create c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.277692916 +0000 UTC m=+0.053892097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:11 compute-0 systemd[1]: Started libpod-conmon-c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e.scope.
Oct 02 20:07:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.443700052 +0000 UTC m=+0.219899263 container init c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.461614976 +0000 UTC m=+0.237814147 container start c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.466724869 +0000 UTC m=+0.242924040 container attach c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:07:11 compute-0 nice_hermann[454137]: 167 167
Oct 02 20:07:11 compute-0 systemd[1]: libpod-c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e.scope: Deactivated successfully.
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.478743383 +0000 UTC m=+0.254942554 container died c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 20:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-108f49cc7a09a61697d3cce1e40ede210d7b7170f33dd88bf0d4b858d0777676-merged.mount: Deactivated successfully.
Oct 02 20:07:11 compute-0 podman[454121]: 2025-10-02 20:07:11.551604788 +0000 UTC m=+0.327803959 container remove c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.556 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:11 compute-0 systemd[1]: libpod-conmon-c5c0930577592c358e28d8ff3520085be6f9596b11dda0f0b88bd07b51b7e38e.scope: Deactivated successfully.
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.596 2 DEBUG nova.compute.manager [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.597 2 DEBUG nova.compute.manager [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing instance network info cache due to event network-changed-6e6c016d-9003-4a4b-92ce-11e00a91b399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:11 compute-0 nova_compute[355794]: 2025-10-02 20:07:11.597 2 DEBUG oslo_concurrency.lockutils [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 350 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 20:07:11 compute-0 podman[454181]: 2025-10-02 20:07:11.845881887 +0000 UTC m=+0.085813930 container create e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 20:07:11 compute-0 podman[454181]: 2025-10-02 20:07:11.800757514 +0000 UTC m=+0.040689527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:11 compute-0 systemd[1]: Started libpod-conmon-e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a.scope.
Oct 02 20:07:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c746d0513f5ae5a65d1a6102b24345ab1e57fbb00d7bf10b015c6192525d694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c746d0513f5ae5a65d1a6102b24345ab1e57fbb00d7bf10b015c6192525d694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c746d0513f5ae5a65d1a6102b24345ab1e57fbb00d7bf10b015c6192525d694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c746d0513f5ae5a65d1a6102b24345ab1e57fbb00d7bf10b015c6192525d694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:11 compute-0 podman[454181]: 2025-10-02 20:07:11.995026202 +0000 UTC m=+0.234958245 container init e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:07:12 compute-0 podman[454181]: 2025-10-02 20:07:12.019456219 +0000 UTC m=+0.259388232 container start e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 20:07:12 compute-0 podman[454181]: 2025-10-02 20:07:12.025531713 +0000 UTC m=+0.265463726 container attach e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 20:07:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:07:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862474010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.131 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
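The "ceph df" probe above is the capacity check nova-compute's resource tracker runs on each periodic pass, and it can be replayed by hand. A minimal sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf paths from the log exist on the host; the top-level JSON keys are the usual "ceph df" ones and may vary slightly across releases:

    # Sketch of the "ceph df --format=json" probe nova-compute runs via
    # oslo_concurrency.processutils (see the log lines above). Assumes the
    # client.openstack keyring and /etc/ceph/ceph.conf are present.
    import json
    import subprocess

    def ceph_df(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True, timeout=30,
        )
        return json.loads(out.stdout)

    stats = ceph_df()
    # The mon answers the same {"prefix": "df", "format": "json"} command
    # visible in the audit channel below; "stats" carries cluster totals.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
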
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.152 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.172 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.212 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.212 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.257 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:12 compute-0 nova_compute[355794]: 2025-10-02 20:07:12.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]: {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     "0": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "devices": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "/dev/loop3"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             ],
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_name": "ceph_lv0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_size": "21470642176",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "name": "ceph_lv0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "tags": {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_name": "ceph",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.crush_device_class": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.encrypted": "0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_id": "0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.vdo": "0"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             },
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "vg_name": "ceph_vg0"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         }
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     ],
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     "1": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "devices": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "/dev/loop4"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             ],
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_name": "ceph_lv1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_size": "21470642176",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "name": "ceph_lv1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "tags": {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_name": "ceph",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.crush_device_class": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.encrypted": "0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_id": "1",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.vdo": "0"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             },
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "vg_name": "ceph_vg1"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         }
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     ],
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     "2": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "devices": [
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "/dev/loop5"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             ],
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_name": "ceph_lv2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_size": "21470642176",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "name": "ceph_lv2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "tags": {
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.cluster_name": "ceph",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.crush_device_class": "",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.encrypted": "0",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osd_id": "2",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:                 "ceph.vdo": "0"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             },
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "type": "block",
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:             "vg_name": "ceph_vg2"
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:         }
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]:     ]
Oct 02 20:07:12 compute-0 quizzical_shockley[454196]: }
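The JSON block above is evidently "ceph-volume lvm list --format json" output: a map of OSD id to a list of logical-volume records whose tags carry the cluster fsid, OSD fsid, backing device, and encryption state. A minimal parse of exactly that structure, assuming the blob was captured to lvm_list.json (hypothetical filename):

    # Parse the ceph-volume lvm list JSON shown above: {osd_id: [lv_record, ...]}.
    # "lvm_list.json" is a hypothetical capture of that container's stdout.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")
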
Oct 02 20:07:12 compute-0 ceph-mon[191910]: pgmap v1873: 321 pgs: 321 active+clean; 350 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 20:07:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/862474010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:12 compute-0 systemd[1]: libpod-e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a.scope: Deactivated successfully.
Oct 02 20:07:12 compute-0 conmon[454196]: conmon e03dca8b83195b6b8aa8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a.scope/container/memory.events
Oct 02 20:07:12 compute-0 podman[454207]: 2025-10-02 20:07:12.961699856 +0000 UTC m=+0.045901962 container died e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c746d0513f5ae5a65d1a6102b24345ab1e57fbb00d7bf10b015c6192525d694-merged.mount: Deactivated successfully.
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:13 compute-0 podman[454207]: 2025-10-02 20:07:13.057270411 +0000 UTC m=+0.141472507 container remove e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:07:13 compute-0 systemd[1]: libpod-conmon-e03dca8b83195b6b8aa89731db29aefbd553fe96c814df6bb3cc869fea1e357a.scope: Deactivated successfully.
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0028248264228039333 of space, bias 1.0, pg target 0.84744792684118 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 sudo[454058]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
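The pg_autoscaler arithmetic above can be reconstructed from the logged numbers: pg target = (fraction of raw space the pool uses) x bias x (target PGs per OSD x number of OSDs). Assuming Ceph's default mon_target_pg_per_osd = 100 and the three OSDs listed earlier, the cluster PG budget is 300, which reproduces every logged figure; the target is then quantized to a power of two, and pg_num only actually moves when the quantized target is far from the current value, which is why every pool stays put here. A quick check of that assumption:

    # Reproduce the pg_autoscaler "pg target" figures logged above. Assumes the
    # Ceph default mon_target_pg_per_osd = 100 and the 3 OSDs seen earlier,
    # i.e. a cluster-wide budget of 300 PGs (before replication).
    pg_budget = 100 * 3

    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0028248264228039333, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * pg_budget)
    # -> 0.0021557249951162337, 0.84744792684118, 0.0006104707950771635,
    # matching the "pg target" values in the log lines above; the autoscaler
    # then rounds to a power of two and leaves pg_num alone unless the ideal
    # differs from the current value by a large factor.
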
Oct 02 20:07:13 compute-0 sudo[454221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:13 compute-0 sudo[454221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:13 compute-0 sudo[454221]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:13 compute-0 sudo[454246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:07:13 compute-0 sudo[454246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:13 compute-0 sudo[454246]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:13 compute-0 sudo[454271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:13 compute-0 sudo[454271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:13 compute-0 sudo[454271]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 355 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 513 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.630 2 DEBUG nova.network.neutron [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.665 2 DEBUG oslo_concurrency.lockutils [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.666 2 DEBUG nova.compute.manager [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.666 2 DEBUG nova.compute.manager [None req-f91231f4-eca7-4f5b-b400-2654029bd5cb b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] network_info to inject: |[{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
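The network_info blob being injected above is JSON-shaped, so the addresses in play (fixed 10.100.0.8 plus floating 192.168.122.175) can be pulled out of a captured copy. A minimal sketch, assuming the blob was saved to network_info.json (hypothetical filename):

    # Extract addresses from a nova network_info blob shaped like the one
    # logged above. "network_info.json" is a hypothetical capture.
    import json

    with open("network_info.json") as f:
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], fips)
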
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.669 2 DEBUG oslo_concurrency.lockutils [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:13 compute-0 nova_compute[355794]: 2025-10-02 20:07:13.670 2 DEBUG nova.network.neutron [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Refreshing network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:13 compute-0 sudo[454296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:07:13 compute-0 sudo[454296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
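The sudo command above is cephadm's host-side wrapper at work: the copied cephadm.<digest> script runs "ceph-volume raw list" inside a disposable container from the pinned ceph image, which is exactly what the short-lived podman containers in this section (nice_hermann, quizzical_shockley, charming_joliot, charming_poitras) are. A sketch of the same invocation, with every path and argument taken verbatim from the log line:

    # Replays the cephadm exec seen above. Running this spawns the same kind of
    # one-shot podman container (create/init/start/attach/died/remove) recorded
    # throughout this section.
    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout))
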
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.416901309 +0000 UTC m=+0.087777604 container create 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.380816174 +0000 UTC m=+0.051692449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:14 compute-0 systemd[1]: Started libpod-conmon-05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911.scope.
Oct 02 20:07:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.545208884 +0000 UTC m=+0.216085159 container init 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.567915304 +0000 UTC m=+0.238791559 container start 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.573428695 +0000 UTC m=+0.244304990 container attach 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:07:14 compute-0 charming_joliot[454384]: 167 167
Oct 02 20:07:14 compute-0 systemd[1]: libpod-05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911.scope: Deactivated successfully.
Oct 02 20:07:14 compute-0 conmon[454384]: conmon 05fdcf109bde50161e5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911.scope/container/memory.events
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.582311271 +0000 UTC m=+0.253187536 container died 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d33b4efd25d13b1ce59d4916bf3b0817da0d898a64f6a0dd4e4982f253a27f-merged.mount: Deactivated successfully.
Oct 02 20:07:14 compute-0 podman[454376]: 2025-10-02 20:07:14.64448699 +0000 UTC m=+0.156925526 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:07:14 compute-0 podman[454362]: 2025-10-02 20:07:14.665154375 +0000 UTC m=+0.336030640 container remove 05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.700 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.700 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.700 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.701 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.701 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.703 2 INFO nova.compute.manager [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Terminating instance
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.704 2 DEBUG nova.compute.manager [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:07:14 compute-0 systemd[1]: libpod-conmon-05fdcf109bde50161e5a79966f1a5b014d85c24063e0e4c38b813799d4b58911.scope: Deactivated successfully.
Oct 02 20:07:14 compute-0 kernel: tap6e6c016d-90 (unregistering): left promiscuous mode
Oct 02 20:07:14 compute-0 NetworkManager[44968]: <info>  [1759435634.8535] device (tap6e6c016d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:07:14 compute-0 ovn_controller[88435]: 2025-10-02T20:07:14Z|00103|binding|INFO|Releasing lport 6e6c016d-9003-4a4b-92ce-11e00a91b399 from this chassis (sb_readonly=0)
Oct 02 20:07:14 compute-0 ovn_controller[88435]: 2025-10-02T20:07:14Z|00104|binding|INFO|Setting lport 6e6c016d-9003-4a4b-92ce-11e00a91b399 down in Southbound
Oct 02 20:07:14 compute-0 ovn_controller[88435]: 2025-10-02T20:07:14Z|00105|binding|INFO|Removing iface tap6e6c016d-90 ovn-installed in OVS
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:14.876 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:5e:6a 10.100.0.8'], port_security=['fa:16:3e:04:5e:6a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f8be75db-d124-4069-a573-db7410ea2b5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f70fd72-8355-46f5-8b19-cebed2c28970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cca17e8f28a243bcaf58d01bf55608e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a85e186d-f4f6-43d0-947c-4e33f66e56e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f4ef716-4296-4970-8894-f8467917d8f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=6e6c016d-9003-4a4b-92ce-11e00a91b399) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:14 compute-0 ceph-mon[191910]: pgmap v1874: 321 pgs: 321 active+clean; 355 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 513 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Oct 02 20:07:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:14.878 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 6e6c016d-9003-4a4b-92ce-11e00a91b399 in datapath 3f70fd72-8355-46f5-8b19-cebed2c28970 unbound from our chassis
Oct 02 20:07:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:14.881 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f70fd72-8355-46f5-8b19-cebed2c28970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:07:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:14.885 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcec870-79d0-4afa-bf90-61c56839f8bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:14.886 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970 namespace which is not needed anymore
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:14 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 02 20:07:14 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 45.045s CPU time.
Oct 02 20:07:14 compute-0 systemd-machined[137646]: Machine qemu-6-instance-00000006 terminated.
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.933 2 INFO nova.compute.manager [None req-4a891147-0384-4a18-81e4-007b604eafe9 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Get console output
Oct 02 20:07:14 compute-0 nova_compute[355794]: 2025-10-02 20:07:14.939 2 INFO oslo.privsep.daemon [None req-4a891147-0384-4a18-81e4-007b604eafe9 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmptj4ou04v/privsep.sock']
Oct 02 20:07:14 compute-0 podman[454423]: 2025-10-02 20:07:14.939831943 +0000 UTC m=+0.070678177 container create cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 20:07:15 compute-0 systemd[1]: Started libpod-conmon-cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686.scope.
Oct 02 20:07:15 compute-0 podman[454423]: 2025-10-02 20:07:14.919185549 +0000 UTC m=+0.050031803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:07:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ba9e9ff249f35a321ecaa64600eebbb465253734bc0a7bd0cfb171cd425cd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ba9e9ff249f35a321ecaa64600eebbb465253734bc0a7bd0cfb171cd425cd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ba9e9ff249f35a321ecaa64600eebbb465253734bc0a7bd0cfb171cd425cd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85ba9e9ff249f35a321ecaa64600eebbb465253734bc0a7bd0cfb171cd425cd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:15 compute-0 podman[454423]: 2025-10-02 20:07:15.089893488 +0000 UTC m=+0.220739722 container init cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 20:07:15 compute-0 podman[454423]: 2025-10-02 20:07:15.103078818 +0000 UTC m=+0.233925052 container start cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:07:15 compute-0 podman[454423]: 2025-10-02 20:07:15.126412872 +0000 UTC m=+0.257259126 container attach cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [NOTICE]   (450324) : haproxy version is 2.8.14-c23fe91
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [NOTICE]   (450324) : path to executable is /usr/sbin/haproxy
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [WARNING]  (450324) : Exiting Master process...
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [WARNING]  (450324) : Exiting Master process...
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [ALERT]    (450324) : Current worker (450327) exited with code 143 (Terminated)
Oct 02 20:07:15 compute-0 neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970[450305]: [WARNING]  (450324) : All workers exited. Exiting... (0)
Oct 02 20:07:15 compute-0 systemd[1]: libpod-c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1.scope: Deactivated successfully.
Oct 02 20:07:15 compute-0 podman[454467]: 2025-10-02 20:07:15.13995254 +0000 UTC m=+0.087987249 container died c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 20:07:15 compute-0 NetworkManager[44968]: <info>  [1759435635.1429] manager: (tap6e6c016d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.171 2 INFO nova.virt.libvirt.driver [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Instance destroyed successfully.
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.172 2 DEBUG nova.objects.instance [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lazy-loading 'resources' on Instance uuid f8be75db-d124-4069-a573-db7410ea2b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.190 2 DEBUG nova.virt.libvirt.vif [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:05:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1217822364',display_name='tempest-AttachInterfacesUnderV243Test-server-1217822364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1217822364',id=6,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBlPsRkBlqgHx/BmjRPVDlBpptxjSDWYPURGAF2R+sS2VpCCQPiKVY59JVCOUD1P0G52Bb+7sbsVkqTPymDRO6SWoHX6J6G8pwCTS8EqALGPk0PYcRh2YWFhti1jIuVIxQ==',key_name='tempest-keypair-459640716',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:05:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cca17e8f28a243bcaf58d01bf55608e9',ramdisk_id='',reservation_id='r-m3pch4t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-2039487239',owner_user_name='tempest-AttachInterfacesUnderV243Test-2039487239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b0d30c42cdda433ebd7d28421e967748',uuid=f8be75db-d124-4069-a573-db7410ea2b5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.191 2 DEBUG nova.network.os_vif_util [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converting VIF {"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.191 2 DEBUG nova.network.os_vif_util [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.192 2 DEBUG os_vif [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e6c016d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.206 2 INFO os_vif [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:5e:6a,bridge_name='br-int',has_traffic_filtering=True,id=6e6c016d-9003-4a4b-92ce-11e00a91b399,network=Network(3f70fd72-8355-46f5-8b19-cebed2c28970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e6c016d-90')
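The unplug sequence above (VIF conversion in os_vif_util, an os-vif unplug, then an ovsdbapp DelPortCommand transaction against ovsdb-server) is how nova-compute detaches the tap device from br-int. A minimal sketch of the same port removal through ovsdbapp follows; the database socket path is an assumption, since the log does not show the connection string, and this is illustrative rather than Nova's actual wiring:

    # Sketch: remove a tap port from br-int, mirroring the logged
    # DelPortCommand(port=tap6e6c016d-90, bridge=br-int, if_exists=True).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed default socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone.
    ovs.del_port('tap6e6c016d-90', bridge='br-int', if_exists=True).execute(
        check_error=True)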
Oct 02 20:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1-userdata-shm.mount: Deactivated successfully.
Oct 02 20:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d796f0aad873155a7bdc24af8977dc3f4f3488adc8db2d26b5ca31e5a9930e5-merged.mount: Deactivated successfully.
Oct 02 20:07:15 compute-0 podman[454467]: 2025-10-02 20:07:15.242503818 +0000 UTC m=+0.190538527 container cleanup c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 20:07:15 compute-0 systemd[1]: libpod-conmon-c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1.scope: Deactivated successfully.
Oct 02 20:07:15 compute-0 podman[454521]: 2025-10-02 20:07:15.367194853 +0000 UTC m=+0.082205611 container remove c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.377 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8fb7df-c3a7-46ef-bd80-b2ed79b1edd5]: (4, ('Thu Oct  2 08:07:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970 (c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1)\nc14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1\nThu Oct  2 08:07:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970 (c14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1)\nc14083b0b464b20d2710d609e018a7f0a178c0046f07d821180147ea4b1b6ea1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.379 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a344ba-808e-4c28-9c8b-8766151b1006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.381 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f70fd72-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 kernel: tap3f70fd72-80: left promiscuous mode
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.408 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d58959e6-5d60-4544-b05c-8990e539d840]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.441 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5aed968f-f0ac-474f-aff9-a3f88285c66a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.443 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3b1b25-a143-46da-baf1-2749aaee895c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.470 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3964cbc2-9bf8-4b56-9391-3f22e8548e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671938, 'reachable_time': 32384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454538, 'error': None, 'target': 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
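The privsep reply above is a pyroute2 rendering of a netlink RTM_NEWLINK dump taken inside the ovnmeta- namespace (note the 'target' field in the message header); each entry carries the interface's IFLA_* attributes. A rough host-side equivalent, skipping the per-namespace plumbing that neutron's privileged ip_lib adds:

    # Sketch: enumerate links via pyroute2, the library underneath
    # neutron.privileged.agent.linux.ip_lib; namespace handling omitted.
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for msg in ipr.get_links():
            print(msg.get_attr('IFLA_IFNAME'),     # e.g. 'lo'
                  msg.get_attr('IFLA_MTU'),        # e.g. 65536
                  msg.get_attr('IFLA_OPERSTATE'))  # e.g. 'UNKNOWN'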
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.477 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:07:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f70fd72\x2d8355\x2d46f5\x2d8b19\x2dcebed2c28970.mount: Deactivated successfully.
Oct 02 20:07:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:15.477 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[55afe2ff-e381-420f-a780-40c587678952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
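With haproxy stopped and its container removed, the agent deletes the metadata namespace and systemd reaps the bind mount under /run/netns. The privileged helper named in the log, remove_netns in neutron's ip_lib, boils down to pyroute2's netns module; a sketch, with the namespace name taken from the log:

    # Sketch of the namespace removal logged by
    # neutron.privileged.agent.linux.ip_lib.remove_netns.
    from pyroute2 import netns

    NS = 'ovnmeta-3f70fd72-8355-46f5-8b19-cebed2c28970'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks /run/netns/<NS> after unmounting it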
Oct 02 20:07:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.712 2 DEBUG nova.compute.manager [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-unplugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.713 2 DEBUG oslo_concurrency.lockutils [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.713 2 DEBUG oslo_concurrency.lockutils [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.714 2 DEBUG oslo_concurrency.lockutils [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.714 2 DEBUG nova.compute.manager [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] No waiting events found dispatching network-vif-unplugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.714 2 DEBUG nova.compute.manager [req-cc685e34-d91e-499a-81cd-93204fd1c209 req-1ab08504-b569-4be4-b8c5-d886deb4e903 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-unplugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
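The Acquiring/acquired/released triple around the "...-events" lock is oslo.concurrency's standard trace logging: nova serializes access to its per-instance event table with a named in-process lock while it pops the matching network-vif-unplugged event. Reduced to a sketch, with the lock name shortened and the worker function a hypothetical stand-in for _pop_event:

    # Sketch of the oslo.concurrency pattern behind the lock lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock('f8be75db-...-events'):  # name elided, per the log
        pop_matching_event()  # hypothetical stand-in for _pop_event

    # nova also uses the decorator form, e.g. around the resource tracker:
    @lockutils.synchronized('compute_resources')
    def update_usage():
        ...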
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.820 2 INFO oslo.privsep.daemon [None req-4a891147-0384-4a18-81e4-007b604eafe9 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Spawned new privsep daemon via rootwrap
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.678 5500 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.686 5500 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.692 5500 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.692 5500 INFO oslo.privsep.daemon [-] privsep daemon running as pid 5500
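These four lines are the standard oslo.privsep startup banner: nova-compute spawned a helper via rootwrap, and the daemon reports its uid/gid and bounded capability set before serving requests. The logged capabilities match nova's sys_admin privsep context; its shape is roughly the following (compare nova/privsep/__init__.py; cited from memory, treat as illustrative, and the entrypoint below is a hypothetical example):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pw_uid=0, pw_gid=0,
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def privileged_read(path):
        # Hypothetical entrypoint: runs inside the privsep daemon with the
        # capability set listed above (cf. the console-pty read logged below).
        with open(path, 'rb') as f:
            return f.read()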
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.930 2 INFO nova.virt.libvirt.driver [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Deleting instance files /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e_del
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.931 2 INFO nova.virt.libvirt.driver [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Deletion of /var/lib/nova/instances/f8be75db-d124-4069-a573-db7410ea2b5e_del complete
Oct 02 20:07:15 compute-0 nova_compute[355794]: 2025-10-02 20:07:15.938 5500 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.215 2 INFO nova.compute.manager [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Took 1.51 seconds to destroy the instance on the hypervisor.
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.216 2 DEBUG oslo.service.loopingcall [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.216 2 DEBUG nova.compute.manager [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.216 2 DEBUG nova.network.neutron [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:07:16 compute-0 charming_poitras[454463]: {
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_id": 1,
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "type": "bluestore"
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     },
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_id": 2,
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "type": "bluestore"
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     },
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_id": 0,
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:07:16 compute-0 charming_poitras[454463]:         "type": "bluestore"
Oct 02 20:07:16 compute-0 charming_poitras[454463]:     }
Oct 02 20:07:16 compute-0 charming_poitras[454463]: }
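The JSON block printed by the short-lived charming_poitras container is a ceph-volume OSD listing keyed by OSD uuid, each entry naming the backing LV, osd_id, cluster fsid, and bluestore type; cephadm gathers this during its host device refresh (see the config-key set for host.compute-0.devices.0 just below). Parsing it is straightforward. The sketch assumes the output came from ceph-volume raw list, which the log itself does not show, and skips the containerized invocation cephadm actually uses:

    import json
    import subprocess

    # Assumption: `ceph-volume raw list` produced the JSON above.
    out = subprocess.run(['ceph-volume', 'raw', 'list'],
                         capture_output=True, text=True, check=True).stdout
    for osd_uuid, meta in sorted(json.loads(out).items(),
                                 key=lambda kv: kv[1]['osd_id']):
        print(f"osd.{meta['osd_id']}: {meta['device']} "
              f"({meta['type']}, cluster {meta['ceph_fsid']})")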
Oct 02 20:07:16 compute-0 systemd[1]: libpod-cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686.scope: Deactivated successfully.
Oct 02 20:07:16 compute-0 systemd[1]: libpod-cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686.scope: Consumed 1.224s CPU time.
Oct 02 20:07:16 compute-0 podman[454423]: 2025-10-02 20:07:16.445735991 +0000 UTC m=+1.576582255 container died cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.483 2 DEBUG nova.network.neutron [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updated VIF entry in instance network info cache for port 6e6c016d-9003-4a4b-92ce-11e00a91b399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.485 2 DEBUG nova.network.neutron [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [{"id": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "address": "fa:16:3e:04:5e:6a", "network": {"id": "3f70fd72-8355-46f5-8b19-cebed2c28970", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1770026923-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cca17e8f28a243bcaf58d01bf55608e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e6c016d-90", "ovs_interfaceid": "6e6c016d-9003-4a4b-92ce-11e00a91b399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-85ba9e9ff249f35a321ecaa64600eebbb465253734bc0a7bd0cfb171cd425cd2-merged.mount: Deactivated successfully.
Oct 02 20:07:16 compute-0 podman[454423]: 2025-10-02 20:07:16.577210036 +0000 UTC m=+1.708056280 container remove cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:07:16 compute-0 nova_compute[355794]: 2025-10-02 20:07:16.583 2 DEBUG oslo_concurrency.lockutils [req-bd62ab62-3ea9-471c-888b-6b8a5b103698 req-40dbd1b2-68e3-408c-a0cf-b63d7175014e 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-f8be75db-d124-4069-a573-db7410ea2b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:16 compute-0 systemd[1]: libpod-conmon-cdd69c97cdd4ee2ee5e3c66322345a98bba80385a7d2c6f4911bfa0642af9686.scope: Deactivated successfully.
Oct 02 20:07:16 compute-0 sudo[454296]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:07:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:07:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8f5e5289-3785-48ab-aa33-45744f2a9729 does not exist
Oct 02 20:07:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 56fec9bf-a187-40fa-a23a-db126a694880 does not exist
Oct 02 20:07:16 compute-0 sudo[454581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:07:16 compute-0 sudo[454581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:16 compute-0 sudo[454581]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:16 compute-0 ceph-mon[191910]: pgmap v1875: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 20:07:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:07:16 compute-0 sudo[454606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:07:16 compute-0 sudo[454606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:07:16 compute-0 sudo[454606]: pam_unix(sudo:session): session closed for user root
Oct 02 20:07:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 323 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.826 2 DEBUG nova.compute.manager [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.827 2 DEBUG oslo_concurrency.lockutils [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.827 2 DEBUG oslo_concurrency.lockutils [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.828 2 DEBUG oslo_concurrency.lockutils [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.828 2 DEBUG nova.compute.manager [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] No waiting events found dispatching network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:17 compute-0 nova_compute[355794]: 2025-10-02 20:07:17.828 2 WARNING nova.compute.manager [req-4f6ce0b2-7484-47c4-8f28-1b31eb072fe9 req-d8d4faf5-e927-4329-82ee-5633011d0323 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received unexpected event network-vif-plugged-6e6c016d-9003-4a4b-92ce-11e00a91b399 for instance with vm_state active and task_state deleting.
Oct 02 20:07:17 compute-0 ceph-mon[191910]: pgmap v1876: 321 pgs: 321 active+clean; 323 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 20:07:18 compute-0 nova_compute[355794]: 2025-10-02 20:07:18.485 2 DEBUG nova.network.neutron [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:18 compute-0 nova_compute[355794]: 2025-10-02 20:07:18.569 2 INFO nova.compute.manager [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Took 2.35 seconds to deallocate network for instance.
Oct 02 20:07:18 compute-0 nova_compute[355794]: 2025-10-02 20:07:18.796 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:18 compute-0 nova_compute[355794]: 2025-10-02 20:07:18.797 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:18 compute-0 nova_compute[355794]: 2025-10-02 20:07:18.950 2 DEBUG oslo_concurrency.processutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:07:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977419990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.514 2 DEBUG oslo_concurrency.processutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/977419990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
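Nova learns the RBD pool capacity by shelling out: oslo.concurrency's processutils runs ceph df --format=json as client.openstack (a 0.564s round trip here), and the monitor's audit channel records the dispatch on its side. The call reduces to roughly this, with the flags taken verbatim from the logged command line; the parsed keys are standard ceph df JSON fields:

    import json

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a nonzero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])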
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.535 2 DEBUG nova.compute.provider_tree [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:07:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 277 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.695 2 DEBUG nova.scheduler.client.report [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.951 2 DEBUG nova.compute.manager [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Received event network-vif-deleted-6e6c016d-9003-4a4b-92ce-11e00a91b399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.951 2 DEBUG nova.compute.manager [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.952 2 DEBUG nova.compute.manager [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing instance network info cache due to event network-changed-4af10480-1bf8-4efe-bb0e-ef9ee356a470. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.952 2 DEBUG oslo_concurrency.lockutils [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.953 2 DEBUG oslo_concurrency.lockutils [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.953 2 DEBUG nova.network.neutron [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Refreshing network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:19 compute-0 nova_compute[355794]: 2025-10-02 20:07:19.995 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:07:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556395792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:07:20 compute-0 nova_compute[355794]: 2025-10-02 20:07:20.158 2 INFO nova.scheduler.client.report [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Deleted allocations for instance f8be75db-d124-4069-a573-db7410ea2b5e
Oct 02 20:07:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:07:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/556395792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:07:20 compute-0 nova_compute[355794]: 2025-10-02 20:07:20.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:20 compute-0 ceph-mon[191910]: pgmap v1877: 321 pgs: 321 active+clean; 277 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 02 20:07:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/556395792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:07:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/556395792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:07:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:20 compute-0 nova_compute[355794]: 2025-10-02 20:07:20.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:07:20 compute-0 nova_compute[355794]: 2025-10-02 20:07:20.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:07:20 compute-0 nova_compute[355794]: 2025-10-02 20:07:20.898 2 DEBUG oslo_concurrency.lockutils [None req-a614bb79-0c27-4ace-89a7-4ddf62e6424c b0d30c42cdda433ebd7d28421e967748 cca17e8f28a243bcaf58d01bf55608e9 - - default default] Lock "f8be75db-d124-4069-a573-db7410ea2b5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 777 KiB/s wr, 137 op/s
Oct 02 20:07:22 compute-0 ceph-mon[191910]: pgmap v1878: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 777 KiB/s wr, 137 op/s
Oct 02 20:07:22 compute-0 podman[454654]: 2025-10-02 20:07:22.699533845 +0000 UTC m=+0.106427304 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:07:22 compute-0 podman[454655]: 2025-10-02 20:07:22.724441144 +0000 UTC m=+0.130323921 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
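The two health_status=healthy events are podman's healthcheck timers firing for podman_exporter and ceilometer_agent_compute: each runs the configured 'test' command inside its container and records the failing streak. The same check can be driven by hand; a sketch, with the container names taken from the log:

    import subprocess

    # `podman healthcheck run <name>` exits 0 when the check passes.
    for name in ('podman_exporter', 'ceilometer_agent_compute'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')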
Oct 02 20:07:23 compute-0 nova_compute[355794]: 2025-10-02 20:07:23.119 2 DEBUG nova.network.neutron [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updated VIF entry in instance network info cache for port 4af10480-1bf8-4efe-bb0e-ef9ee356a470. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:23 compute-0 nova_compute[355794]: 2025-10-02 20:07:23.119 2 DEBUG nova.network.neutron [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updating instance_info_cache with network_info: [{"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:23 compute-0 nova_compute[355794]: 2025-10-02 20:07:23.168 2 DEBUG oslo_concurrency.lockutils [req-a96ed89a-5a09-42f8-9284-e06ee0d07b5b req-238be036-fc4d-4690-ab4a-e6cecbc8da53 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-a6e095a0-cb58-430d-9347-4aab385c6e69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 94 op/s
Oct 02 20:07:24 compute-0 nova_compute[355794]: 2025-10-02 20:07:24.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:24 compute-0 ceph-mon[191910]: pgmap v1879: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 94 op/s
Oct 02 20:07:25 compute-0 nova_compute[355794]: 2025-10-02 20:07:25.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 02 20:07:26 compute-0 ceph-mon[191910]: pgmap v1880: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 02 20:07:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 17 KiB/s wr, 43 op/s
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.794 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.795 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.819 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.937 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.938 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.954 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:07:27 compute-0 nova_compute[355794]: 2025-10-02 20:07:27.954 2 INFO nova.compute.claims [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.142 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:07:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761383771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:28 compute-0 ceph-mon[191910]: pgmap v1881: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 17 KiB/s wr, 43 op/s
Oct 02 20:07:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3761383771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.726 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.739 2 DEBUG nova.compute.provider_tree [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.758 2 DEBUG nova.scheduler.client.report [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.785 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.786 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.847 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.848 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.870 2 INFO nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:07:28 compute-0 nova_compute[355794]: 2025-10-02 20:07:28.895 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.014 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.017 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.017 2 INFO nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Creating image(s)
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.088 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.162 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.220 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.230 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.266 2 DEBUG nova.policy [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56d9dae393d64f4b925b7c0827ad71e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '945fe5265b6446a2a61f775a8f3466f2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.320 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.321 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.322 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.323 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.407 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.422 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 af875636-eb00-48b8-b1f4-589898eafecb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 KiB/s wr, 26 op/s
Oct 02 20:07:29 compute-0 podman[454811]: 2025-10-02 20:07:29.689258703 +0000 UTC m=+0.114812409 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:07:29 compute-0 podman[454812]: 2025-10-02 20:07:29.716649566 +0000 UTC m=+0.140478754 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Oct 02 20:07:29 compute-0 podman[157186]: time="2025-10-02T20:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:07:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48732 "" "Go-http-client/1.1"
Oct 02 20:07:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10007 "" "Go-http-client/1.1"
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.851 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 af875636-eb00-48b8-b1f4-589898eafecb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:29 compute-0 nova_compute[355794]: 2025-10-02 20:07:29.977 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] resizing rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.183 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435635.166056, f8be75db-d124-4069-a573-db7410ea2b5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.184 2 INFO nova.compute.manager [-] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] VM Stopped (Lifecycle Event)
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.197 2 DEBUG nova.objects.instance [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lazy-loading 'migration_context' on Instance uuid af875636-eb00-48b8-b1f4-589898eafecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.303 2 DEBUG nova.compute.manager [None req-aef1ae76-b39e-4e84-b997-aedc331730c1 - - - - - -] [instance: f8be75db-d124-4069-a573-db7410ea2b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.304 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.305 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Ensure instance console log exists: /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.307 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.308 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.309 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:30 compute-0 ceph-mon[191910]: pgmap v1882: 321 pgs: 321 active+clean; 277 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 KiB/s wr, 26 op/s
Oct 02 20:07:30 compute-0 nova_compute[355794]: 2025-10-02 20:07:30.978 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Successfully created port: e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:07:31 compute-0 openstack_network_exporter[372736]: ERROR   20:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:07:31 compute-0 openstack_network_exporter[372736]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:07:31 compute-0 openstack_network_exporter[372736]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:07:31 compute-0 openstack_network_exporter[372736]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:07:31 compute-0 openstack_network_exporter[372736]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:07:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 289 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 330 KiB/s wr, 23 op/s
Oct 02 20:07:31 compute-0 podman[454927]: 2025-10-02 20:07:31.721645984 +0000 UTC m=+0.147145871 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 20:07:31 compute-0 podman[454926]: 2025-10-02 20:07:31.723246459 +0000 UTC m=+0.149255587 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:07:31 compute-0 podman[454928]: 2025-10-02 20:07:31.778820863 +0000 UTC m=+0.192721855 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 20:07:31 compute-0 nova_compute[355794]: 2025-10-02 20:07:31.990 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:31 compute-0 nova_compute[355794]: 2025-10-02 20:07:31.990 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.012 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.078 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.079 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.089 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.089 2 INFO nova.compute.claims [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.228 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Successfully updated port: e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.244 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.244 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquired lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.245 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:07:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:32.321 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:32.322 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:32.324 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.333 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:32 compute-0 ovn_controller[88435]: 2025-10-02T20:07:32Z|00106|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:07:32 compute-0 ovn_controller[88435]: 2025-10-02T20:07:32Z|00107|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:07:32 compute-0 ovn_controller[88435]: 2025-10-02T20:07:32Z|00108|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.369 2 DEBUG nova.compute.manager [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-changed-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.369 2 DEBUG nova.compute.manager [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Refreshing instance network info cache due to event network-changed-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.370 2 DEBUG oslo_concurrency.lockutils [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.462 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:07:32 compute-0 ceph-mon[191910]: pgmap v1883: 321 pgs: 321 active+clean; 289 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 330 KiB/s wr, 23 op/s
Oct 02 20:07:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:07:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915679237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.823 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.835 2 DEBUG nova.compute.provider_tree [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.865 2 DEBUG nova.scheduler.client.report [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.894 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.895 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.956 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.957 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.980 2 INFO nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:07:32 compute-0 nova_compute[355794]: 2025-10-02 20:07:32.998 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.123 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.126 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.126 2 INFO nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Creating image(s)
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.178 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.228 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.272 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.296 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.343 2 DEBUG nova.policy [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e87db118c0374d50a374f0ceaf961159', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a7c52835a9494ea98fd26390771eb77f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.378 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.380 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.381 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.381 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.435 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.450 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 c942a9bd-3760-43df-964d-8aa0e8710a3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 304 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 845 KiB/s wr, 26 op/s
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:07:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:07:33 compute-0 podman[455091]: 2025-10-02 20:07:33.695771673 +0000 UTC m=+0.119999203 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:07:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/915679237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:07:33 compute-0 podman[455092]: 2025-10-02 20:07:33.769619218 +0000 UTC m=+0.179109484 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.912 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 c942a9bd-3760-43df-964d-8aa0e8710a3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:33 compute-0 nova_compute[355794]: 2025-10-02 20:07:33.986 2 DEBUG nova.network.neutron [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updating instance_info_cache with network_info: [{"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.075 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Releasing lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.076 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Instance network_info: |[{"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.077 2 DEBUG oslo_concurrency.lockutils [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.077 2 DEBUG nova.network.neutron [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Refreshing network info cache for port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.081 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Start _get_guest_xml network_info=[{"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.100 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] resizing rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.193 2 WARNING nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.204 2 DEBUG nova.virt.libvirt.host [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.205 2 DEBUG nova.virt.libvirt.host [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.211 2 DEBUG nova.virt.libvirt.host [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.212 2 DEBUG nova.virt.libvirt.host [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.213 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.214 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.215 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.216 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.216 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.217 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.218 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.218 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.219 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.220 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.220 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.220 2 DEBUG nova.virt.hardware [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.227 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.408 2 DEBUG nova.objects.instance [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'migration_context' on Instance uuid c942a9bd-3760-43df-964d-8aa0e8710a3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.423 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.424 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Ensure instance console log exists: /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.424 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.425 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.425 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:34 compute-0 ceph-mon[191910]: pgmap v1884: 321 pgs: 321 active+clean; 304 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 845 KiB/s wr, 26 op/s
Oct 02 20:07:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438238697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.817 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.866 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:34 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.878 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:34.999 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Successfully created port: 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479352356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.339 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.342 2 DEBUG nova.virt.libvirt.vif [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2122533072',display_name='tempest-TestServerBasicOps-server-2122533072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2122533072',id=10,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxPI0CmzQm3wR8m2Vq0zGE2hiiNGt34W7En7pAqLuzoJ7ysyl6XPe7tkRbaBW5GW92Ce/Yooxvj5tcD36c/D4W8bSyhnpmezx4ELw/4LYg6y2osPt0fZXFT30f+OZ2jeQ==',key_name='tempest-TestServerBasicOps-1679027111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='945fe5265b6446a2a61f775a8f3466f2',ramdisk_id='',reservation_id='r-e6sqh4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-644988398',owner_user_name='tempest-TestServerBasicOps-644988398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:07:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56d9dae393d64f4b925b7c0827ad71e0',uuid=af875636-eb00-48b8-b1f4-589898eafecb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.343 2 DEBUG nova.network.os_vif_util [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converting VIF {"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.345 2 DEBUG nova.network.os_vif_util [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.347 2 DEBUG nova.objects.instance [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid af875636-eb00-48b8-b1f4-589898eafecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.363 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <uuid>af875636-eb00-48b8-b1f4-589898eafecb</uuid>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <name>instance-0000000a</name>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:name>tempest-TestServerBasicOps-server-2122533072</nova:name>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:07:34</nova:creationTime>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:user uuid="56d9dae393d64f4b925b7c0827ad71e0">tempest-TestServerBasicOps-644988398-project-member</nova:user>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:project uuid="945fe5265b6446a2a61f775a8f3466f2">tempest-TestServerBasicOps-644988398</nova:project>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <nova:port uuid="e50ea0ec-56a1-4e06-bd8b-531ca4d11a04">
Oct 02 20:07:35 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <system>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="serial">af875636-eb00-48b8-b1f4-589898eafecb</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="uuid">af875636-eb00-48b8-b1f4-589898eafecb</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </system>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <os>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </os>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <features>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </features>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/af875636-eb00-48b8-b1f4-589898eafecb_disk">
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/af875636-eb00-48b8-b1f4-589898eafecb_disk.config">
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:35 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:b0:ca:09"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <target dev="tape50ea0ec-56"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/console.log" append="off"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <video>
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </video>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:07:35 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:07:35 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:07:35 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:07:35 compute-0 nova_compute[355794]: </domain>
Oct 02 20:07:35 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.364 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Preparing to wait for external event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.364 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.365 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.365 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.366 2 DEBUG nova.virt.libvirt.vif [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2122533072',display_name='tempest-TestServerBasicOps-server-2122533072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2122533072',id=10,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxPI0CmzQm3wR8m2Vq0zGE2hiiNGt34W7En7pAqLuzoJ7ysyl6XPe7tkRbaBW5GW92Ce/Yooxvj5tcD36c/D4W8bSyhnpmezx4ELw/4LYg6y2osPt0fZXFT30f+OZ2jeQ==',key_name='tempest-TestServerBasicOps-1679027111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='945fe5265b6446a2a61f775a8f3466f2',ramdisk_id='',reservation_id='r-e6sqh4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-644988398',owner_user_name='tempest-TestServerBasicOps-644988398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:07:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56d9dae393d64f4b925b7c0827ad71e0',uuid=af875636-eb00-48b8-b1f4-589898eafecb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.366 2 DEBUG nova.network.os_vif_util [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converting VIF {"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.366 2 DEBUG nova.network.os_vif_util [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.367 2 DEBUG os_vif [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50ea0ec-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50ea0ec-56, col_values=(('external_ids', {'iface-id': 'e50ea0ec-56a1-4e06-bd8b-531ca4d11a04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:ca:09', 'vm-uuid': 'af875636-eb00-48b8-b1f4-589898eafecb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 NetworkManager[44968]: <info>  [1759435655.3765] manager: (tape50ea0ec-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.385 2 INFO os_vif [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56')
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.418 2 DEBUG nova.network.neutron [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updated VIF entry in instance network info cache for port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.419 2 DEBUG nova.network.neutron [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updating instance_info_cache with network_info: [{"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.441 2 DEBUG oslo_concurrency.lockutils [req-5609aa52-441b-4c23-bd2a-c61384b4820f req-efdb6237-e755-451a-89b3-f99ad5f35fbe 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.462 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.462 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.462 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] No VIF found with MAC fa:16:3e:b0:ca:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.463 2 INFO nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Using config drive
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.512 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 345 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.4 MiB/s wr, 50 op/s
Oct 02 20:07:35 compute-0 nova_compute[355794]: 2025-10-02 20:07:35.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/438238697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2479352356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.084 2 INFO nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Creating config drive at /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.090 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_1tkdwq3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.240 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_1tkdwq3" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.285 2 DEBUG nova.storage.rbd_utils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] rbd image af875636-eb00-48b8-b1f4-589898eafecb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.295 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config af875636-eb00-48b8-b1f4-589898eafecb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.343 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Successfully updated port: 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.365 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.365 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquired lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.366 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.429 2 DEBUG nova.compute.manager [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-changed-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.430 2 DEBUG nova.compute.manager [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Refreshing instance network info cache due to event network-changed-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.430 2 DEBUG oslo_concurrency.lockutils [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.502 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.592 2 DEBUG oslo_concurrency.processutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config af875636-eb00-48b8-b1f4-589898eafecb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.593 2 INFO nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Deleting local config drive /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb/disk.config because it was imported into RBD.
Oct 02 20:07:36 compute-0 kernel: tape50ea0ec-56: entered promiscuous mode
Oct 02 20:07:36 compute-0 ovn_controller[88435]: 2025-10-02T20:07:36Z|00109|binding|INFO|Claiming lport e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 for this chassis.
Oct 02 20:07:36 compute-0 ovn_controller[88435]: 2025-10-02T20:07:36Z|00110|binding|INFO|e50ea0ec-56a1-4e06-bd8b-531ca4d11a04: Claiming fa:16:3e:b0:ca:09 10.100.0.6
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:36 compute-0 NetworkManager[44968]: <info>  [1759435656.6755] manager: (tape50ea0ec-56): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.675 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:ca:09 10.100.0.6'], port_security=['fa:16:3e:b0:ca:09 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af875636-eb00-48b8-b1f4-589898eafecb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '945fe5265b6446a2a61f775a8f3466f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08af3841-27c3-4295-9ad6-4be383e6b700 5b734220-98e1-4240-8eeb-85c0c90ff8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a072168a-e212-49f4-ae2d-55929dd9a988, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.676 285790 INFO neutron.agent.ovn.metadata.agent [-] Port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 in datapath 3f5a3a36-f114-4439-a81a-9e4ddc58a44b bound to our chassis
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.678 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f5a3a36-f114-4439-a81a-9e4ddc58a44b
Oct 02 20:07:36 compute-0 ovn_controller[88435]: 2025-10-02T20:07:36Z|00111|binding|INFO|Setting lport e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 up in Southbound
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.698 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d9665413-5061-411b-b8d7-375996600bc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.699 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f5a3a36-f1 in ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:36 compute-0 ovn_controller[88435]: 2025-10-02T20:07:36Z|00112|binding|INFO|Setting lport e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 ovn-installed in OVS
Oct 02 20:07:36 compute-0 nova_compute[355794]: 2025-10-02 20:07:36.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.705 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f5a3a36-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.705 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2a98f972-34c2-408c-abe6-323eafb06a17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.708 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2b785fb3-7697-422d-ad26-25e299d57e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.721 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[616b6658-b3b7-4bdf-bcd9-a13dc640119d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 systemd-machined[137646]: New machine qemu-11-instance-0000000a.
Oct 02 20:07:36 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Oct 02 20:07:36 compute-0 systemd-udevd[455354]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.767 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f60a5b43-52a0-4d60-9bc9-dbd93c784a21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ceph-mon[191910]: pgmap v1885: 321 pgs: 321 active+clean; 345 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.4 MiB/s wr, 50 op/s
Oct 02 20:07:36 compute-0 NetworkManager[44968]: <info>  [1759435656.7875] device (tape50ea0ec-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:07:36 compute-0 NetworkManager[44968]: <info>  [1759435656.7885] device (tape50ea0ec-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.819 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9ea62e-5ba7-4b19-9430-1f8871b0ff86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 NetworkManager[44968]: <info>  [1759435656.8272] manager: (tap3f5a3a36-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Oct 02 20:07:36 compute-0 systemd-udevd[455357]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.826 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1825803a-823a-4cc0-bea2-edd748e4913d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.879 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd387a0-dd94-4703-9afa-275fc8ee23e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.881 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[613dcb87-5558-465d-baa6-0aed1afde7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 NetworkManager[44968]: <info>  [1759435656.9160] device (tap3f5a3a36-f0): carrier: link connected
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.926 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[2162afa8-82a7-4256-b815-22d9632028c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.963 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[01ef4de9-f023-4ea6-bccc-751199f1f9f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f5a3a36-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:88:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682848, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 455384, 'error': None, 'target': 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:36 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:36.986 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfab9a4-618a-4caf-a265-32b29dac9f0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:88eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682848, 'tstamp': 682848}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 455385, 'error': None, 'target': 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.014 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9640c3-870b-404a-a2fd-b8de25e4738a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f5a3a36-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:88:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682848, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 455386, 'error': None, 'target': 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.063 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3dcfb3-3234-4093-bb32-de1eb5ab03c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.164 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[29e4168e-4304-40b8-9399-0ccd75fa5c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.166 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f5a3a36-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.166 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.167 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f5a3a36-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:37 compute-0 kernel: tap3f5a3a36-f0: entered promiscuous mode
Oct 02 20:07:37 compute-0 NetworkManager[44968]: <info>  [1759435657.1703] manager: (tap3f5a3a36-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 02 20:07:37 compute-0 nova_compute[355794]: 2025-10-02 20:07:37.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.178 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f5a3a36-f0, col_values=(('external_ids', {'iface-id': 'f39d21c1-fcb9-4571-ab80-c736abbfc93d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:37 compute-0 ovn_controller[88435]: 2025-10-02T20:07:37Z|00113|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:07:37 compute-0 nova_compute[355794]: 2025-10-02 20:07:37.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:37 compute-0 nova_compute[355794]: 2025-10-02 20:07:37.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.206 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f5a3a36-f114-4439-a81a-9e4ddc58a44b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f5a3a36-f114-4439-a81a-9e4ddc58a44b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.207 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[969185d2-c2a4-4442-83c1-70a1eef67765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.208 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-3f5a3a36-f114-4439-a81a-9e4ddc58a44b
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/3f5a3a36-f114-4439-a81a-9e4ddc58a44b.pid.haproxy
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID 3f5a3a36-f114-4439-a81a-9e4ddc58a44b
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.210 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'env', 'PROCESS_TAG=haproxy-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f5a3a36-f114-4439-a81a-9e4ddc58a44b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 20:07:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 359 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.0 MiB/s wr, 51 op/s
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.670 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:37 compute-0 nova_compute[355794]: 2025-10-02 20:07:37.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:37 compute-0 podman[455459]: 2025-10-02 20:07:37.741897054 +0000 UTC m=+0.087417526 container create 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:07:37 compute-0 podman[455459]: 2025-10-02 20:07:37.70353262 +0000 UTC m=+0.049053132 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:07:37 compute-0 systemd[1]: Started libpod-conmon-6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59.scope.
Oct 02 20:07:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74340ec75506114577e967eb1c82ba276a08fd21e6065a0f80ec42fbcebabed0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:07:37 compute-0 podman[455459]: 2025-10-02 20:07:37.855717984 +0000 UTC m=+0.201238476 container init 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 20:07:37 compute-0 podman[455459]: 2025-10-02 20:07:37.865329247 +0000 UTC m=+0.210849729 container start 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 20:07:37 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [NOTICE]   (455478) : New worker (455480) forked
Oct 02 20:07:37 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [NOTICE]   (455478) : Loading success.
Oct 02 20:07:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:37.930 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:07:37 compute-0 nova_compute[355794]: 2025-10-02 20:07:37.992 2 DEBUG nova.network.neutron [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updating instance_info_cache with network_info: [{"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.029 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Releasing lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.029 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Instance network_info: |[{"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.030 2 DEBUG oslo_concurrency.lockutils [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.030 2 DEBUG nova.network.neutron [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Refreshing network info cache for port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.033 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Start _get_guest_xml network_info=[{"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.042 2 WARNING nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.049 2 DEBUG nova.virt.libvirt.host [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.050 2 DEBUG nova.virt.libvirt.host [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.058 2 DEBUG nova.virt.libvirt.host [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.058 2 DEBUG nova.virt.libvirt.host [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.059 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.059 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.060 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.060 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.060 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.060 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.060 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.061 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.061 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.062 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.062 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.062 2 DEBUG nova.virt.hardware [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.067 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.110 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435658.0903473, af875636-eb00-48b8-b1f4-589898eafecb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.111 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] VM Started (Lifecycle Event)
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.143 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.151 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435658.0907726, af875636-eb00-48b8-b1f4-589898eafecb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.152 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] VM Paused (Lifecycle Event)
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.181 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.189 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.223 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:07:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714084191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.585 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.650 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.684 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.719 2 DEBUG nova.compute.manager [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.720 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.720 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.720 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.721 2 DEBUG nova.compute.manager [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Processing event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.721 2 DEBUG nova.compute.manager [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.721 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.722 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.722 2 DEBUG oslo_concurrency.lockutils [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.722 2 DEBUG nova.compute.manager [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] No waiting events found dispatching network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.722 2 WARNING nova.compute.manager [req-ae28593f-4e72-4110-9e00-063e9fb17ab2 req-ed012464-7583-41c9-a068-8c27a42ecb40 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received unexpected event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 for instance with vm_state building and task_state spawning.
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.724 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
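[annotation] The req-ae28593f entries trace nova's external-event plumbing: a per-instance "-events" lock guards a map of named events, pop_instance_event dispatches network-vif-plugged to any registered waiter, and the warning fires when no waiter is registered for that name. A minimal sketch of that keyed-event pattern, not nova's actual implementation:

import threading

class InstanceEvents:
    """Sketch of the prepare/pop pattern traced in the lock lines above."""

    def __init__(self):
        self._events = {}               # event name -> threading.Event
        self._lock = threading.Lock()   # plays the role of the "-events" lock

    def prepare(self, name):
        # Called before starting the action that will emit the event.
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def pop(self, name):
        # Called when the external event arrives (e.g. from neutron).
        with self._lock:
            event = self._events.pop(name, None)
        if event is None:
            # No waiter registered: this is the case where nova logs the
            # "Received unexpected event" warning seen above.
            return False
        event.set()
        return True

A waiter registers first with ev = events.prepare("network-vif-plugged-<port-id>") and then blocks in ev.wait(timeout=...); the dispatch path calls events.pop() with the same name, which is why the wait above completes in 0 seconds.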
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.730 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435658.7295291, af875636-eb00-48b8-b1f4-589898eafecb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.730 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] VM Resumed (Lifecycle Event)
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.737 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.755 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.758 2 INFO nova.virt.libvirt.driver [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Instance spawned successfully.
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.758 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.766 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.791 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.792 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 ceph-mon[191910]: pgmap v1886: 321 pgs: 321 active+clean; 359 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.0 MiB/s wr, 51 op/s
Oct 02 20:07:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1714084191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.792 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.793 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.793 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.794 2 DEBUG nova.virt.libvirt.driver [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.806 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] During sync_power_state the instance has a pending task (spawning). Skip.
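[annotation] The handle_lifecycle_event entries show the power-state sync deliberately backing off: the DB says 0 (NOSTATE), the hypervisor reports 3 (PAUSED) and then 1 (RUNNING), but because task_state is still "spawning" the sync is skipped rather than racing the in-flight build. A sketch of that guard, using the power-state codes visible in the log (names match nova's power_state constants):

NOSTATE, RUNNING, PAUSED = 0, 1, 3

def sync_power_state(instance, vm_power_state, log):
    # Sketch of the skip logic above, not nova's implementation.
    if instance.task_state is not None:
        # An in-flight task (here "spawning") owns the instance; syncing now
        # could clobber the state the task is about to write.
        log.info("During sync_power_state the instance has a pending task "
                 "(%s). Skip.", instance.task_state)
        return
    if instance.power_state != vm_power_state:
        instance.power_state = vm_power_state
        instance.save()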
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.865 2 INFO nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Took 9.85 seconds to spawn the instance on the hypervisor.
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.866 2 DEBUG nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.960 2 INFO nova.compute.manager [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Took 11.08 seconds to build instance.
Oct 02 20:07:38 compute-0 nova_compute[355794]: 2025-10-02 20:07:38.978 2 DEBUG oslo_concurrency.lockutils [None req-c48db461-7d42-448a-a219-8fad77dd8e5a 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
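[annotation] The release line shows the whole build ran under a per-instance lock held for 11.183s, which is what serializes concurrent operations on the same instance UUID. oslo.concurrency exposes that primitive directly; a minimal sketch (function names here are illustrative):

from oslo_concurrency import lockutils

def locked_build(instance_uuid, do_build):
    # Per-instance serialization, as in the
    # "_locked_do_build_and_run_instance" lock above (sketch only).
    with lockutils.lock(instance_uuid):
        do_build()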
Oct 02 20:07:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:07:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018215264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.259 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.261 2 DEBUG nova.virt.libvirt.vif [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1118171358',display_name='tempest-TestNetworkBasicOps-server-1118171358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1118171358',id=11,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAk35KZ7CQK6+sXlwHKd132rUriO0VfU5GtRYC/4ZLUBOEnyPkd6bJxXv81TUMHDJORzY0bjQglnRzFjcurkWs8ue5nit6tRiThY/8NrD3xM1QdaVcCnCUr0kLKeT79Z0g==',key_name='tempest-TestNetworkBasicOps-584519399',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-hfnhxdbw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:07:33Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=c942a9bd-3760-43df-964d-8aa0e8710a3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.261 2 DEBUG nova.network.os_vif_util [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.263 2 DEBUG nova.network.os_vif_util [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.264 2 DEBUG nova.objects.instance [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'pci_devices' on Instance uuid c942a9bd-3760-43df-964d-8aa0e8710a3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.280 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <uuid>c942a9bd-3760-43df-964d-8aa0e8710a3d</uuid>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <name>instance-0000000b</name>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:name>tempest-TestNetworkBasicOps-server-1118171358</nova:name>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:07:38</nova:creationTime>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:user uuid="e87db118c0374d50a374f0ceaf961159">tempest-TestNetworkBasicOps-1027837101-project-member</nova:user>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:project uuid="a7c52835a9494ea98fd26390771eb77f">tempest-TestNetworkBasicOps-1027837101</nova:project>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <nova:port uuid="4adcdafc-fb12-4e7c-9f7a-f2e6d691970d">
Oct 02 20:07:39 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <system>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="serial">c942a9bd-3760-43df-964d-8aa0e8710a3d</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="uuid">c942a9bd-3760-43df-964d-8aa0e8710a3d</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </system>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <os>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </os>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <features>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </features>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/c942a9bd-3760-43df-964d-8aa0e8710a3d_disk">
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config">
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </source>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:07:39 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:93:51:26"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <target dev="tap4adcdafc-fb"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/console.log" append="off"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <video>
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </video>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:07:39 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:07:39 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:07:39 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:07:39 compute-0 nova_compute[355794]: </domain>
Oct 02 20:07:39 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
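[annotation] With the full domain XML logged above, the operationally interesting pieces are easy to pull out programmatically: both disks are RBD-backed network disks pointing at the 192.168.122.100:6789 monitor, and the interface is a tap device wired for OVS. A short stdlib-only sketch that extracts those fields from a <domain> document like this one (e.g. saved from the log or from virsh dumpxml):

import sys
import xml.etree.ElementTree as ET

dom = ET.parse(sys.argv[1]).getroot()   # path to the saved <domain> XML

for disk in dom.findall("./devices/disk"):
    src = disk.find("source")
    # e.g. "disk rbd vms/c942a9bd-..._disk" and "cdrom rbd vms/..._disk.config"
    print(disk.get("device"), src.get("protocol"), src.get("name"))

for iface in dom.findall("./devices/interface"):
    mac = iface.find("mac").get("address")      # fa:16:3e:93:51:26
    tap = iface.find("target").get("dev")       # tap4adcdafc-fb
    print(mac, tap)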
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.281 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Preparing to wait for external event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.281 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.282 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.282 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.283 2 DEBUG nova.virt.libvirt.vif [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1118171358',display_name='tempest-TestNetworkBasicOps-server-1118171358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1118171358',id=11,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAk35KZ7CQK6+sXlwHKd132rUriO0VfU5GtRYC/4ZLUBOEnyPkd6bJxXv81TUMHDJORzY0bjQglnRzFjcurkWs8ue5nit6tRiThY/8NrD3xM1QdaVcCnCUr0kLKeT79Z0g==',key_name='tempest-TestNetworkBasicOps-584519399',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-hfnhxdbw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:07:33Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=c942a9bd-3760-43df-964d-8aa0e8710a3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.283 2 DEBUG nova.network.os_vif_util [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.284 2 DEBUG nova.network.os_vif_util [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.285 2 DEBUG os_vif [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4adcdafc-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4adcdafc-fb, col_values=(('external_ids', {'iface-id': '4adcdafc-fb12-4e7c-9f7a-f2e6d691970d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:51:26', 'vm-uuid': 'c942a9bd-3760-43df-964d-8aa0e8710a3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:39 compute-0 NetworkManager[44968]: <info>  [1759435659.2953] manager: (tap4adcdafc-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.307 2 INFO os_vif [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb')
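[annotation] The plug itself is two OVSDB operations, visible in the AddPortCommand/DbSetCommand transaction above: an idempotent add-port on br-int, plus external_ids on the Interface row so OVN can match the port to its neutron port (iface-id) and instance (vm-uuid). os-vif talks OVSDB directly through ovsdbapp; the ovs-vsctl equivalent of that transaction, useful for reproducing it by hand, looks like this (values taken verbatim from the log):

import subprocess

subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap4adcdafc-fb",
    "--", "set", "Interface", "tap4adcdafc-fb",
    "external_ids:iface-id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:93:51:26",
    "external_ids:vm-uuid=c942a9bd-3760-43df-964d-8aa0e8710a3d",
])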
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.384 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.384 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.384 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] No VIF found with MAC fa:16:3e:93:51:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.385 2 INFO nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Using config drive
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.420 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:39 compute-0 nova_compute[355794]: 2025-10-02 20:07:39.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Oct 02 20:07:39 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1018215264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.483 2 DEBUG nova.network.neutron [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updated VIF entry in instance network info cache for port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.484 2 DEBUG nova.network.neutron [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updating instance_info_cache with network_info: [{"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.515 2 DEBUG oslo_concurrency.lockutils [req-a6d5eced-eccd-489a-bd7c-440fd2ac82b5 req-0148f477-3c91-4f53-bf21-e7a609a84bdf 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.600 2 INFO nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Creating config drive at /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.614 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfej6e5me execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.775 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfej6e5me" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
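[annotation] The config drive is just an ISO9660 image built from a staging directory; the volume label "config-2" is what cloud-init and similar guest agents probe for. The same mkisofs invocation as above, wrapped for reuse (the function name and paths are illustrative):

import subprocess

def build_config_drive(iso_path, staging_dir):
    # Identical flags to the nova invocation above; -J/-r add Joliet and
    # Rock Ridge extensions, -V config-2 sets the label guests look for.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        staging_dir,
    ])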
Oct 02 20:07:40 compute-0 ceph-mon[191910]: pgmap v1887: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.844 2 DEBUG nova.storage.rbd_utils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] rbd image c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:07:40 compute-0 nova_compute[355794]: 2025-10-02 20:07:40.859 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.116 2 DEBUG oslo_concurrency.processutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config c942a9bd-3760-43df-964d-8aa0e8710a3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.117 2 INFO nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Deleting local config drive /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d/disk.config because it was imported into RBD.
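[annotation] Because the instance's disks live in RBD, the freshly built ISO is immediately imported into the vms pool and the local copy deleted, so the hypervisor keeps no disk state of its own. A sketch of that import-then-clean-up step, mirroring the command in the log:

import os
import subprocess

def import_config_drive(local_iso, image_name, pool="vms",
                        user="openstack", conf="/etc/ceph/ceph.conf"):
    # Mirrors the "rbd import" above; image format 2 is the modern RBD
    # format (required for features such as layering).
    subprocess.check_call([
        "rbd", "import", "--pool", pool, local_iso, image_name,
        "--image-format=2", "--id", user, "--conf", conf,
    ])
    os.unlink(local_iso)   # nova deletes the local ISO once it is in RBD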
Oct 02 20:07:41 compute-0 kernel: tap4adcdafc-fb: entered promiscuous mode
Oct 02 20:07:41 compute-0 NetworkManager[44968]: <info>  [1759435661.1948] manager: (tap4adcdafc-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 02 20:07:41 compute-0 ovn_controller[88435]: 2025-10-02T20:07:41Z|00114|binding|INFO|Claiming lport 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d for this chassis.
Oct 02 20:07:41 compute-0 ovn_controller[88435]: 2025-10-02T20:07:41Z|00115|binding|INFO|4adcdafc-fb12-4e7c-9f7a-f2e6d691970d: Claiming fa:16:3e:93:51:26 10.100.0.5
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.211 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:51:26 10.100.0.5'], port_security=['fa:16:3e:93:51:26 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c942a9bd-3760-43df-964d-8aa0e8710a3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7c52835a9494ea98fd26390771eb77f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad588f62-678c-4208-b626-55393ac900c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0fe3c-2477-4bd1-a279-06ccc23b46bf, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.213 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d in datapath aefd878a-4767-48ff-8dcb-ccb5b8fcb84b bound to our chassis
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.217 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aefd878a-4767-48ff-8dcb-ccb5b8fcb84b
Oct 02 20:07:41 compute-0 ovn_controller[88435]: 2025-10-02T20:07:41Z|00116|binding|INFO|Setting lport 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d ovn-installed in OVS
Oct 02 20:07:41 compute-0 ovn_controller[88435]: 2025-10-02T20:07:41Z|00117|binding|INFO|Setting lport 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d up in Southbound
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.243 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[10777c71-df20-4c3b-b9ff-826150b4336a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 systemd-udevd[455626]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:07:41 compute-0 systemd-machined[137646]: New machine qemu-12-instance-0000000b.
Oct 02 20:07:41 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Oct 02 20:07:41 compute-0 NetworkManager[44968]: <info>  [1759435661.2717] device (tap4adcdafc-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:07:41 compute-0 NetworkManager[44968]: <info>  [1759435661.2727] device (tap4adcdafc-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.294 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[5d56f923-6903-4b92-b590-948df8348c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.301 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[81fe026c-0613-4691-bc66-568c2739b5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.337 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[3e704edc-ef63-44e5-925d-3820b6c68da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.364 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbbcf50-d62b-43f7-85fa-1eddee54ef11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefd878a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f4:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676151, 'reachable_time': 33888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 455638, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.388 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d0c6b3-5c52-4838-8861-6ae69009f2e3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaefd878a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676164, 'tstamp': 676164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 455640, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaefd878a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676168, 'tstamp': 676168}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 455640, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.390 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefd878a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:41 compute-0 nova_compute[355794]: 2025-10-02 20:07:41.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.395 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaefd878a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.395 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.396 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaefd878a-40, col_values=(('external_ids', {'iface-id': 'cdbc9f7e-e502-4e46-9d35-398a11c2a99d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:41.396 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:07:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.484 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435662.483464, c942a9bd-3760-43df-964d-8aa0e8710a3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.485 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] VM Started (Lifecycle Event)
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.524 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.529 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435662.484193, c942a9bd-3760-43df-964d-8aa0e8710a3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.530 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] VM Paused (Lifecycle Event)
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.561 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.569 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:42 compute-0 nova_compute[355794]: 2025-10-02 20:07:42.600 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:07:42 compute-0 ceph-mon[191910]: pgmap v1888: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 02 20:07:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:07:42.932 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:07:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 751 KiB/s rd, 3.2 MiB/s wr, 66 op/s
Oct 02 20:07:43 compute-0 nova_compute[355794]: 2025-10-02 20:07:43.951 2 DEBUG nova.compute.manager [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-changed-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:43 compute-0 nova_compute[355794]: 2025-10-02 20:07:43.952 2 DEBUG nova.compute.manager [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Refreshing instance network info cache due to event network-changed-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:43 compute-0 nova_compute[355794]: 2025-10-02 20:07:43.952 2 DEBUG oslo_concurrency.lockutils [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:43 compute-0 nova_compute[355794]: 2025-10-02 20:07:43.952 2 DEBUG oslo_concurrency.lockutils [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:43 compute-0 nova_compute[355794]: 2025-10-02 20:07:43.952 2 DEBUG nova.network.neutron [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Refreshing network info cache for port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:44 compute-0 nova_compute[355794]: 2025-10-02 20:07:44.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:44 compute-0 nova_compute[355794]: 2025-10-02 20:07:44.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:44 compute-0 ceph-mon[191910]: pgmap v1889: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 751 KiB/s rd, 3.2 MiB/s wr, 66 op/s
Oct 02 20:07:44 compute-0 podman[455683]: 2025-10-02 20:07:44.887994334 +0000 UTC m=+0.154947930 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:07:45 compute-0 nova_compute[355794]: 2025-10-02 20:07:45.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 97 op/s
Oct 02 20:07:45 compute-0 nova_compute[355794]: 2025-10-02 20:07:45.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.334 2 DEBUG nova.compute.manager [req-2327f1e6-a060-4a51-a145-13acb4a75ae9 req-101c5818-a54c-4c49-848f-c0c9aa4ff058 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.335 2 DEBUG oslo_concurrency.lockutils [req-2327f1e6-a060-4a51-a145-13acb4a75ae9 req-101c5818-a54c-4c49-848f-c0c9aa4ff058 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.335 2 DEBUG oslo_concurrency.lockutils [req-2327f1e6-a060-4a51-a145-13acb4a75ae9 req-101c5818-a54c-4c49-848f-c0c9aa4ff058 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.336 2 DEBUG oslo_concurrency.lockutils [req-2327f1e6-a060-4a51-a145-13acb4a75ae9 req-101c5818-a54c-4c49-848f-c0c9aa4ff058 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.336 2 DEBUG nova.compute.manager [req-2327f1e6-a060-4a51-a145-13acb4a75ae9 req-101c5818-a54c-4c49-848f-c0c9aa4ff058 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Processing event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.338 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.344 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435666.3445835, c942a9bd-3760-43df-964d-8aa0e8710a3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.345 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] VM Resumed (Lifecycle Event)
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.347 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.354 2 INFO nova.virt.libvirt.driver [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Instance spawned successfully.
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.354 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.392 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.403 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.413 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.414 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.415 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.415 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.416 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.417 2 DEBUG nova.virt.libvirt.driver [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.422 2 DEBUG nova.network.neutron [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updated VIF entry in instance network info cache for port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.423 2 DEBUG nova.network.neutron [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updating instance_info_cache with network_info: [{"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.679 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.781 2 DEBUG oslo_concurrency.lockutils [req-ce24ccff-37ad-477f-ba2e-9048161b7b89 req-cade1464-322c-412e-9fb5-6c2a62cb3647 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-af875636-eb00-48b8-b1f4-589898eafecb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.822 2 INFO nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Took 13.70 seconds to spawn the instance on the hypervisor.
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.822 2 DEBUG nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:07:46 compute-0 ceph-mon[191910]: pgmap v1890: 321 pgs: 321 active+clean; 370 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 97 op/s
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.910 2 INFO nova.compute.manager [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Took 14.85 seconds to build instance.
Oct 02 20:07:46 compute-0 nova_compute[355794]: 2025-10-02 20:07:46.935 2 DEBUG oslo_concurrency.lockutils [None req-b6093acf-02fd-4b2a-95ac-6c23098acd21 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:47 compute-0 ovn_controller[88435]: 2025-10-02T20:07:47Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:42:64 10.100.0.13
Oct 02 20:07:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.524 2 DEBUG nova.compute.manager [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.525 2 DEBUG oslo_concurrency.lockutils [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.527 2 DEBUG oslo_concurrency.lockutils [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.527 2 DEBUG oslo_concurrency.lockutils [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.528 2 DEBUG nova.compute.manager [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] No waiting events found dispatching network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:07:48 compute-0 nova_compute[355794]: 2025-10-02 20:07:48.529 2 WARNING nova.compute.manager [req-4578de6c-214a-4020-8679-75d034579e36 req-dfd6b9d2-95a0-4ddf-b790-b6dd448baffd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received unexpected event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d for instance with vm_state active and task_state None.
Oct 02 20:07:48 compute-0 ceph-mon[191910]: pgmap v1891: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Oct 02 20:07:49 compute-0 nova_compute[355794]: 2025-10-02 20:07:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:49 compute-0 nova_compute[355794]: 2025-10-02 20:07:49.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 611 KiB/s wr, 134 op/s
Oct 02 20:07:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:50 compute-0 nova_compute[355794]: 2025-10-02 20:07:50.646 2 DEBUG nova.compute.manager [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-changed-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:07:50 compute-0 nova_compute[355794]: 2025-10-02 20:07:50.646 2 DEBUG nova.compute.manager [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Refreshing instance network info cache due to event network-changed-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:07:50 compute-0 nova_compute[355794]: 2025-10-02 20:07:50.647 2 DEBUG oslo_concurrency.lockutils [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:07:50 compute-0 nova_compute[355794]: 2025-10-02 20:07:50.647 2 DEBUG oslo_concurrency.lockutils [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:07:50 compute-0 nova_compute[355794]: 2025-10-02 20:07:50.647 2 DEBUG nova.network.neutron [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Refreshing network info cache for port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:07:50 compute-0 ceph-mon[191910]: pgmap v1892: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 611 KiB/s wr, 134 op/s
Oct 02 20:07:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 27 KiB/s wr, 159 op/s
Oct 02 20:07:52 compute-0 nova_compute[355794]: 2025-10-02 20:07:52.639 2 DEBUG nova.network.neutron [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updated VIF entry in instance network info cache for port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:07:52 compute-0 nova_compute[355794]: 2025-10-02 20:07:52.640 2 DEBUG nova.network.neutron [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updating instance_info_cache with network_info: [{"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:07:52 compute-0 nova_compute[355794]: 2025-10-02 20:07:52.690 2 DEBUG oslo_concurrency.lockutils [req-a9c622aa-d456-4f3c-aac8-828eac3f923c req-d5159163-f18a-48a4-b970-ecaa2823fda7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-c942a9bd-3760-43df-964d-8aa0e8710a3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:07:52 compute-0 ceph-mon[191910]: pgmap v1893: 321 pgs: 321 active+clean; 370 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 27 KiB/s wr, 159 op/s
Oct 02 20:07:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 27 KiB/s wr, 172 op/s
Oct 02 20:07:53 compute-0 podman[455704]: 2025-10-02 20:07:53.659078918 +0000 UTC m=+0.091835425 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 20:07:53 compute-0 podman[455703]: 2025-10-02 20:07:53.681672284 +0000 UTC m=+0.107543789 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:07:54 compute-0 nova_compute[355794]: 2025-10-02 20:07:54.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:54 compute-0 nova_compute[355794]: 2025-10-02 20:07:54.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:54 compute-0 ceph-mon[191910]: pgmap v1894: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 27 KiB/s wr, 172 op/s
Oct 02 20:07:54 compute-0 nova_compute[355794]: 2025-10-02 20:07:54.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:07:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 27 KiB/s wr, 158 op/s
Oct 02 20:07:55 compute-0 ceph-mon[191910]: pgmap v1895: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 27 KiB/s wr, 158 op/s
Oct 02 20:07:56 compute-0 ovn_controller[88435]: 2025-10-02T20:07:56Z|00118|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:07:56 compute-0 ovn_controller[88435]: 2025-10-02T20:07:56Z|00119|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:07:56 compute-0 ovn_controller[88435]: 2025-10-02T20:07:56Z|00120|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:07:56 compute-0 ovn_controller[88435]: 2025-10-02T20:07:56Z|00121|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:07:56 compute-0 nova_compute[355794]: 2025-10-02 20:07:56.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 35 KiB/s wr, 124 op/s
Oct 02 20:07:58 compute-0 ceph-mon[191910]: pgmap v1896: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 35 KiB/s wr, 124 op/s
Oct 02 20:07:59 compute-0 nova_compute[355794]: 2025-10-02 20:07:59.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:59 compute-0 nova_compute[355794]: 2025-10-02 20:07:59.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:07:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 103 op/s
Oct 02 20:07:59 compute-0 podman[157186]: time="2025-10-02T20:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:07:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49965 "" "Go-http-client/1.1"
Oct 02 20:07:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10463 "" "Go-http-client/1.1"
Oct 02 20:08:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.564116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680564147, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1021, "num_deletes": 250, "total_data_size": 1475119, "memory_usage": 1495464, "flush_reason": "Manual Compaction"}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680571269, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 898046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37924, "largest_seqno": 38944, "table_properties": {"data_size": 894103, "index_size": 1595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10551, "raw_average_key_size": 20, "raw_value_size": 885604, "raw_average_value_size": 1739, "num_data_blocks": 72, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435586, "oldest_key_time": 1759435586, "file_creation_time": 1759435680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7210 microseconds, and 3303 cpu microseconds.
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.571326) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 898046 bytes OK
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.571341) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.573864) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.573878) EVENT_LOG_v1 {"time_micros": 1759435680573873, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.573897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1470290, prev total WAL file size 1470290, number of live WAL files 2.
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.575772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(876KB)], [86(10044KB)]
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680575858, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11183331, "oldest_snapshot_seqno": -1}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5781 keys, 8435802 bytes, temperature: kUnknown
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680646359, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8435802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8398854, "index_size": 21414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146584, "raw_average_key_size": 25, "raw_value_size": 8296065, "raw_average_value_size": 1435, "num_data_blocks": 884, "num_entries": 5781, "num_filter_entries": 5781, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.646682) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8435802 bytes
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.648924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 119.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.8 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(21.8) write-amplify(9.4) OK, records in: 6251, records dropped: 470 output_compression: NoCompression
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.648976) EVENT_LOG_v1 {"time_micros": 1759435680648966, "job": 50, "event": "compaction_finished", "compaction_time_micros": 70646, "compaction_time_cpu_micros": 43324, "output_level": 6, "num_output_files": 1, "total_output_size": 8435802, "num_input_records": 6251, "num_output_records": 5781, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680649319, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435680651278, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.574807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.651562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.651567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.651571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.651574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:00.651577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:00 compute-0 podman[455743]: 2025-10-02 20:08:00.69446822 +0000 UTC m=+0.109956533 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 20:08:00 compute-0 ceph-mon[191910]: pgmap v1897: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 103 op/s
Oct 02 20:08:00 compute-0 podman[455742]: 2025-10-02 20:08:00.719048718 +0000 UTC m=+0.130280569 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:08:01 compute-0 openstack_network_exporter[372736]: ERROR   20:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:08:01 compute-0 openstack_network_exporter[372736]: ERROR   20:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:08:01 compute-0 openstack_network_exporter[372736]: ERROR   20:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:08:01 compute-0 openstack_network_exporter[372736]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:08:01 compute-0 openstack_network_exporter[372736]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:08:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 63 op/s
Oct 02 20:08:02 compute-0 podman[455775]: 2025-10-02 20:08:02.6795803 +0000 UTC m=+0.101694244 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:02 compute-0 ceph-mon[191910]: pgmap v1898: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 63 op/s
Oct 02 20:08:02 compute-0 podman[455776]: 2025-10-02 20:08:02.721670071 +0000 UTC m=+0.127168227 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 20:08:02 compute-0 podman[455777]: 2025-10-02 20:08:02.790130267 +0000 UTC m=+0.186030290 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 20:08:03 compute-0 ovn_controller[88435]: 2025-10-02T20:08:03Z|00122|binding|INFO|Releasing lport b59aad26-fd1d-4c37-adbd-b18497c4c15f from this chassis (sb_readonly=0)
Oct 02 20:08:03 compute-0 ovn_controller[88435]: 2025-10-02T20:08:03Z|00123|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:08:03 compute-0 ovn_controller[88435]: 2025-10-02T20:08:03Z|00124|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:08:03 compute-0 ovn_controller[88435]: 2025-10-02T20:08:03Z|00125|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.193 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.194 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.230 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.349 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.351 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.370 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.372 2 INFO nova.compute.claims [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 804 KiB/s rd, 13 KiB/s wr, 27 op/s
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:08:03
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta']
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.665 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.759 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:03 compute-0 nova_compute[355794]: 2025-10-02 20:08:03.761 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:08:04 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:04 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269874151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.244 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.258 2 DEBUG nova.compute.provider_tree [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.284 2 DEBUG nova.scheduler.client.report [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.338 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.340 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:08:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.403 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.404 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.436 2 INFO nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.455 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.570 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.573 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.574 2 INFO nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Creating image(s)
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.651 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:04 compute-0 podman[455857]: 2025-10-02 20:08:04.708295972 +0000 UTC m=+0.129645602 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.721 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:04 compute-0 ceph-mon[191910]: pgmap v1899: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 804 KiB/s rd, 13 KiB/s wr, 27 op/s
Oct 02 20:08:04 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3269874151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:04 compute-0 podman[455858]: 2025-10-02 20:08:04.746237203 +0000 UTC m=+0.159610432 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.760 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.780 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.819 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.868 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.872 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.874 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.875 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.928 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:04 compute-0 nova_compute[355794]: 2025-10-02 20:08:04.938 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 ba7aef8d-a028-428d-97bd-508631983393_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.342 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 ba7aef8d-a028-428d-97bd-508631983393_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.461 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] resizing rbd image ba7aef8d-a028-428d-97bd-508631983393_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:08:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.673 2 DEBUG nova.objects.instance [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lazy-loading 'migration_context' on Instance uuid ba7aef8d-a028-428d-97bd-508631983393 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.678 2 DEBUG nova.policy [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8fb0356d35d4034be5df2acf0c1b9b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1115b32054db477a9f511992d206db4d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.697 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.698 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Ensure instance console log exists: /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.698 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.699 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:05 compute-0 nova_compute[355794]: 2025-10-02 20:08:05.699 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:06 compute-0 ceph-mon[191910]: pgmap v1900: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 20:08:07 compute-0 nova_compute[355794]: 2025-10-02 20:08:07.586 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Successfully created port: d9c79914-e94a-4a4b-908a-c70b53a1a20f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:08:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 391 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 745 KiB/s wr, 23 op/s
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.517 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Successfully updated port: d9c79914-e94a-4a4b-908a-c70b53a1a20f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.541 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.542 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquired lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.543 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.702 2 DEBUG nova.compute.manager [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received event network-changed-d9c79914-e94a-4a4b-908a-c70b53a1a20f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.703 2 DEBUG nova.compute.manager [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Refreshing instance network info cache due to event network-changed-d9c79914-e94a-4a4b-908a-c70b53a1a20f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:08:08 compute-0 nova_compute[355794]: 2025-10-02 20:08:08.703 2 DEBUG oslo_concurrency.lockutils [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:08 compute-0 ceph-mon[191910]: pgmap v1901: 321 pgs: 321 active+clean; 391 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 745 KiB/s wr, 23 op/s
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.174 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.175 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.175 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.240 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:09 compute-0 nova_compute[355794]: 2025-10-02 20:08:09.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 418 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 20:08:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.729 2 DEBUG nova.network.neutron [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updating instance_info_cache with network_info: [{"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.762 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Releasing lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.763 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Instance network_info: |[{"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.764 2 DEBUG oslo_concurrency.lockutils [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.765 2 DEBUG nova.network.neutron [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Refreshing network info cache for port d9c79914-e94a-4a4b-908a-c70b53a1a20f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.770 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Start _get_guest_xml network_info=[{"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:08:10 compute-0 ceph-mon[191910]: pgmap v1902: 321 pgs: 321 active+clean; 418 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.783 2 WARNING nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.798 2 DEBUG nova.virt.libvirt.host [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.800 2 DEBUG nova.virt.libvirt.host [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.808 2 DEBUG nova.virt.libvirt.host [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.810 2 DEBUG nova.virt.libvirt.host [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.812 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.812 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.813 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.814 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.814 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.814 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.815 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.815 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.815 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.816 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.816 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.816 2 DEBUG nova.virt.hardware [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:08:10 compute-0 nova_compute[355794]: 2025-10-02 20:08:10.821 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:08:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741234371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.392 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.453 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.472 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.756 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [{"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.776 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.776 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.777 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.778 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.778 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.779 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1741234371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.814 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.815 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.815 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.816 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:08:11 compute-0 nova_compute[355794]: 2025-10-02 20:08:11.816 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:08:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817963392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.034 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.037 2 DEBUG nova.virt.libvirt.vif [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1522053352',display_name='tempest-ServersTestManualDisk-server-1522053352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1522053352',id=12,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlQFSF4JLGxUNfpcWEs3v88IQ3MYtOfPgcyY6C3M82987GeHPrUblgTRdFc77f2xNa0PKburvm4ibYOH9NMNz1TlDQ8SsNHGTHZNV7ZZVJs98GuS8aopMWtaRUvDSxNxg==',key_name='tempest-keypair-271875864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1115b32054db477a9f511992d206db4d',ramdisk_id='',reservation_id='r-8vodv0cp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1985524993',owner_user_name='tempest-ServersTestManualDisk-1985524993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8fb0356d35d4034be5df2acf0c1b9b8',uuid=ba7aef8d-a028-428d-97bd-508631983393,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.038 2 DEBUG nova.network.os_vif_util [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converting VIF {"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.039 2 DEBUG nova.network.os_vif_util [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.041 2 DEBUG nova.objects.instance [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lazy-loading 'pci_devices' on Instance uuid ba7aef8d-a028-428d-97bd-508631983393 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.094 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <uuid>ba7aef8d-a028-428d-97bd-508631983393</uuid>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <name>instance-0000000c</name>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:name>tempest-ServersTestManualDisk-server-1522053352</nova:name>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:08:10</nova:creationTime>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:user uuid="e8fb0356d35d4034be5df2acf0c1b9b8">tempest-ServersTestManualDisk-1985524993-project-member</nova:user>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:project uuid="1115b32054db477a9f511992d206db4d">tempest-ServersTestManualDisk-1985524993</nova:project>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <nova:port uuid="d9c79914-e94a-4a4b-908a-c70b53a1a20f">
Oct 02 20:08:12 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <system>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="serial">ba7aef8d-a028-428d-97bd-508631983393</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="uuid">ba7aef8d-a028-428d-97bd-508631983393</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </system>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <os>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </os>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <features>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </features>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/ba7aef8d-a028-428d-97bd-508631983393_disk">
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </source>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/ba7aef8d-a028-428d-97bd-508631983393_disk.config">
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </source>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:08:12 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:1f:6a:85"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <target dev="tapd9c79914-e9"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/console.log" append="off"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <video>
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </video>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:08:12 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:08:12 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:08:12 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:08:12 compute-0 nova_compute[355794]: </domain>
Oct 02 20:08:12 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.095 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Preparing to wait for external event network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.095 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.096 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.096 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.098 2 DEBUG nova.virt.libvirt.vif [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1522053352',display_name='tempest-ServersTestManualDisk-server-1522053352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1522053352',id=12,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlQFSF4JLGxUNfpcWEs3v88IQ3MYtOfPgcyY6C3M82987GeHPrUblgTRdFc77f2xNa0PKburvm4ibYOH9NMNz1TlDQ8SsNHGTHZNV7ZZVJs98GuS8aopMWtaRUvDSxNxg==',key_name='tempest-keypair-271875864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1115b32054db477a9f511992d206db4d',ramdisk_id='',reservation_id='r-8vodv0cp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1985524993',owner_user_name='tempest-ServersTestManualDisk-1985524993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8fb0356d35d4034be5df2acf0c1b9b8',uuid=ba7aef8d-a028-428d-97bd-508631983393,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.098 2 DEBUG nova.network.os_vif_util [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converting VIF {"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.099 2 DEBUG nova.network.os_vif_util [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.100 2 DEBUG os_vif [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.103 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9c79914-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9c79914-e9, col_values=(('external_ids', {'iface-id': 'd9c79914-e94a-4a4b-908a-c70b53a1a20f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:6a:85', 'vm-uuid': 'ba7aef8d-a028-428d-97bd-508631983393'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:12 compute-0 NetworkManager[44968]: <info>  [1759435692.1190] manager: (tapd9c79914-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.140 2 INFO os_vif [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9')
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.235 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.236 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.236 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] No VIF found with MAC fa:16:3e:1f:6a:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.237 2 INFO nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Using config drive
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.311 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805507713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.392 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.506 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.506 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.513 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.513 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.521 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.521 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.522 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.527 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.527 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.532 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.533 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.538 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 nova_compute[355794]: 2025-10-02 20:08:12.538 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:08:12 compute-0 ceph-mon[191910]: pgmap v1903: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 20:08:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3817963392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2805507713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031146728690499446 of space, bias 1.0, pg target 0.9344018607149834 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.248 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.249 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2952MB free_disk=59.80143737792969GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.250 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.251 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.372 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.372 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.373 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance a6e095a0-cb58-430d-9347-4aab385c6e69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.374 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance af875636-eb00-48b8-b1f4-589898eafecb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.374 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance c942a9bd-3760-43df-964d-8aa0e8710a3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.374 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance ba7aef8d-a028-428d-97bd-508631983393 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.375 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.375 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1664MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.534 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.599 2 INFO nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Creating config drive at /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.615 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16j99u78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.760 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16j99u78" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.823 2 DEBUG nova.storage.rbd_utils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] rbd image ba7aef8d-a028-428d-97bd-508631983393_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:13 compute-0 nova_compute[355794]: 2025-10-02 20:08:13.837 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config ba7aef8d-a028-428d-97bd-508631983393_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:14 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579187382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.042 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
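
This is the pool-capacity probe behind the resource tracker's disk numbers: nova's RBD backend shells out to ceph df and reads the cluster totals from the JSON. A minimal sketch of the same call (field names as in current Ceph releases; assumes the client.openstack keyring is in place):

import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)['stats']
# Cluster-wide totals in bytes; nova converts these to GiB for DISK_GB.
print(stats['total_bytes'], stats['total_avail_bytes'])
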
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.052 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.078 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
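
Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, which is why the 6 allocated vCPUs reported a moment earlier fit comfortably on an 8-vCPU host. Worked out with the exact values logged above:

# capacity = (total - reserved) * allocation_ratio, per resource class
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
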
Oct 02 20:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:08:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Cumulative writes: 8632 writes, 38K keys, 8632 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                            Cumulative WAL: 8632 writes, 8632 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1353 writes, 6381 keys, 1353 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s
                                            Interval WAL: 1353 writes, 1353 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.5      0.48              0.23        25    0.019       0      0       0.0       0.0
                                              L6      1/0    8.05 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    138.8    113.7      1.59              0.83        24    0.066    122K    13K       0.0       0.0
                                             Sum      1/0    8.05 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8    106.5    110.4      2.07              1.06        49    0.042    122K    13K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    130.6    129.6      0.45              0.22        12    0.038     36K   3059       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    138.8    113.7      1.59              0.83        24    0.066    122K    13K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.3      0.48              0.23        24    0.020       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.047, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.22 GB write, 0.06 MB/s write, 0.21 GB read, 0.06 MB/s read, 2.1 seconds
                                            Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 304.00 MB usage: 25.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000273 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1622,24.65 MB,8.10781%) FilterBlock(50,346.67 KB,0.111364%) IndexBlock(50,599.27 KB,0.192507%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
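
The monitor's embedded RocksDB emits this stats dump on a 600 s interval (the "Uptime(secs): ... interval" lines above). If you need the counters programmatically, the text is regular enough to scrape from a captured journal; a purely illustrative sketch against the cumulative-writes line:

import re

line = ("Cumulative writes: 8632 writes, 38K keys, 8632 commit groups, "
        "1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s")
m = re.search(r'Cumulative writes: (\d+) writes, (\S+) keys.*ingest: ([\d.]+) GB', line)
writes, keys, ingest_gb = int(m.group(1)), m.group(2), float(m.group(3))
print(writes, keys, ingest_gb)  # 8632 38K 0.05
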
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.089 2 DEBUG oslo_concurrency.processutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config ba7aef8d-a028-428d-97bd-508631983393_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.090 2 INFO nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Deleting local config drive /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config because it was imported into RBD.
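
With RBD-backed storage the config drive only transits the local disk: build locally, rbd import into the vms pool, delete the source, exactly as the two driver messages above describe. The same sequence as a sketch (pool, image name, and path copied from this log):

import os
import subprocess

src = '/var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393/disk.config'
dst = 'ba7aef8d-a028-428d-97bd-508631983393_disk.config'  # image name in the 'vms' pool

subprocess.run(
    ['rbd', 'import', '--pool', 'vms', src, dst,
     '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True,
)
os.unlink(src)  # the "Deleting local config drive" step
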
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.100 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.101 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:14 compute-0 kernel: tapd9c79914-e9: entered promiscuous mode
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.1521] manager: (tapd9c79914-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ovn_controller[88435]: 2025-10-02T20:08:14Z|00126|binding|INFO|Claiming lport d9c79914-e94a-4a4b-908a-c70b53a1a20f for this chassis.
Oct 02 20:08:14 compute-0 ovn_controller[88435]: 2025-10-02T20:08:14Z|00127|binding|INFO|d9c79914-e94a-4a4b-908a-c70b53a1a20f: Claiming fa:16:3e:1f:6a:85 10.100.0.14
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.162 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:6a:85 10.100.0.14'], port_security=['fa:16:3e:1f:6a:85 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ba7aef8d-a028-428d-97bd-508631983393', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1115b32054db477a9f511992d206db4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5621b4be-c82f-4f5a-8cfd-388d865a6477', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2763136a-7895-4d99-b111-862304d5703e, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=d9c79914-e94a-4a4b-908a-c70b53a1a20f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.163 285790 INFO neutron.agent.ovn.metadata.agent [-] Port d9c79914-e94a-4a4b-908a-c70b53a1a20f in datapath e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 bound to our chassis
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.165 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e2b83389-7f6e-4c8a-aff7-0b0ca66e1004
Oct 02 20:08:14 compute-0 ovn_controller[88435]: 2025-10-02T20:08:14Z|00128|binding|INFO|Setting lport d9c79914-e94a-4a4b-908a-c70b53a1a20f ovn-installed in OVS
Oct 02 20:08:14 compute-0 ovn_controller[88435]: 2025-10-02T20:08:14Z|00129|binding|INFO|Setting lport d9c79914-e94a-4a4b-908a-c70b53a1a20f up in Southbound
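
Messages 00126-00129 are the standard OVN vif-plug handshake: ovn-controller claims the lport for its chassis, marks the OVS interface ovn-installed, then sets up=true in the Southbound DB; neutron turns that flip into the network-vif-plugged event nova is waiting on (seen at 20:08:15.624 below). The resulting binding can be inspected wherever the Southbound DB is reachable, e.g.:

import subprocess

# Prints the Port_Binding row ovn-controller just claimed.
subprocess.run(
    ['ovn-sbctl', 'find', 'Port_Binding',
     'logical_port=d9c79914-e94a-4a4b-908a-c70b53a1a20f'],
    check=True,
)
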
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.191 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ed408fad-a6ba-4144-99fd-198d69a001dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.192 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape2b83389-71 in ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.196 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape2b83389-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.196 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ff95812b-de34-4a06-93a1-0a77c2274104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.197 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a83b175a-b7b4-40c2-b2f0-a9d8043a5d25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.219 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[195b82e6-0509-41e5-add5-56360935febd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 systemd-machined[137646]: New machine qemu-13-instance-0000000c.
Oct 02 20:08:14 compute-0 systemd-udevd[456250]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:08:14 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.239 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[09874515-4649-4141-97d0-16d1bb0eb1e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.2492] device (tapd9c79914-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.2507] device (tapd9c79914-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.276 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[6f649322-a041-40ca-8582-abfeb9fcd703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.2858] manager: (tape2b83389-70): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.286 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[674c0013-958f-4d38-8758-73c0e064e40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.322 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[44f6943b-7785-4650-934c-8402b753d175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.326 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a11251-5ff1-4432-b2cc-f1aeebdd111f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.3511] device (tape2b83389-70): carrier: link connected
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.358 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[981d1861-c3c1-4f9c-8b71-9891758a1c1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.383 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5f43cb37-eae3-421e-81e1-337a578bb069]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2b83389-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:73:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686592, 'reachable_time': 41373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 456280, 'error': None, 'target': 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.403 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[391d25be-2e0e-4093-9c15-ad553aa1b467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:73d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686592, 'tstamp': 686592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 456281, 'error': None, 'target': 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.426 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef2f378-c544-4325-86bf-8afc8a09b322]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2b83389-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:73:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686592, 'reachable_time': 41373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 456282, 'error': None, 'target': 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.486 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b63854-2e19-40bf-8f8b-0cce987b7b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.623 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8349ad94-da1a-490d-a7d0-058a368fc1b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.626 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2b83389-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.626 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.627 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2b83389-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:14 compute-0 kernel: tape2b83389-70: entered promiscuous mode
Oct 02 20:08:14 compute-0 NetworkManager[44968]: <info>  [1759435694.6300] manager: (tape2b83389-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.633 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape2b83389-70, col_values=(('external_ids', {'iface-id': '3a74431e-a9db-47f7-bc49-a75791f8a78f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:14 compute-0 ovn_controller[88435]: 2025-10-02T20:08:14Z|00130|binding|INFO|Releasing lport 3a74431e-a9db-47f7-bc49-a75791f8a78f from this chassis (sb_readonly=0)
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.638 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e2b83389-7f6e-4c8a-aff7-0b0ca66e1004.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e2b83389-7f6e-4c8a-aff7-0b0ca66e1004.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.640 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[ae43f990-16d6-4617-969a-13f454f46b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.641 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/e2b83389-7f6e-4c8a-aff7-0b0ca66e1004.pid.haproxy
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID e2b83389-7f6e-4c8a-aff7-0b0ca66e1004
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:08:14 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:14.642 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'env', 'PROCESS_TAG=haproxy-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e2b83389-7f6e-4c8a-aff7-0b0ca66e1004.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
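
The rendered haproxy config above binds 169.254.169.254:80 and forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, tagging each with the network ID header; it can own that link-local address only because the process is launched inside the per-network ovnmeta- namespace, which is what the rootwrapped command on the previous line does. Stripped of rootwrap and the PROCESS_TAG env, the equivalent is (requires root):

import subprocess

ns = 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004'
cfg = '/var/lib/neutron/ovn-metadata-proxy/e2b83389-7f6e-4c8a-aff7-0b0ca66e1004.conf'
subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg], check=True)
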
Oct 02 20:08:14 compute-0 nova_compute[355794]: 2025-10-02 20:08:14.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:14 compute-0 ceph-mon[191910]: pgmap v1904: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 02 20:08:14 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2579187382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.096 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:15 compute-0 podman[456355]: 2025-10-02 20:08:15.269773757 +0000 UTC m=+0.103514323 container create baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 20:08:15 compute-0 podman[456355]: 2025-10-02 20:08:15.209973179 +0000 UTC m=+0.043713765 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.317 2 DEBUG nova.network.neutron [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updated VIF entry in instance network info cache for port d9c79914-e94a-4a4b-908a-c70b53a1a20f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.317 2 DEBUG nova.network.neutron [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updating instance_info_cache with network_info: [{"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
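
The instance_info_cache payload above is plain nested JSON; extracting addresses is just a walk over network.subnets[].ips[]. A sketch that yields the fixed IPs from such an entry:

def fixed_ips(network_info):
    """Yield (devname, address) for every fixed IP in a nova network_info cache entry."""
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    yield vif['devname'], ip['address']

# Against the entry logged above this yields: ('tapd9c79914-e9', '10.100.0.14')
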
Oct 02 20:08:15 compute-0 systemd[1]: Started libpod-conmon-baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437.scope.
Oct 02 20:08:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a05be34e70030b4fdce32a658e8bc0273fedc12e432a0dbdd2c40eb1fb4e5a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:15 compute-0 podman[456367]: 2025-10-02 20:08:15.387872653 +0000 UTC m=+0.081983524 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:08:15 compute-0 podman[456355]: 2025-10-02 20:08:15.38815131 +0000 UTC m=+0.221891886 container init baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 20:08:15 compute-0 podman[456355]: 2025-10-02 20:08:15.397190599 +0000 UTC m=+0.230931165 container start baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:15 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [NOTICE]   (456391) : New worker (456394) forked
Oct 02 20:08:15 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [NOTICE]   (456391) : Loading success.
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.427 2 DEBUG oslo_concurrency.lockutils [req-ddb969ba-03de-48d3-b4f6-3b8d9a0745c0 req-ab7aa7fa-1716-47d2-865b-d377450711f0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.522 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435695.521869, ba7aef8d-a028-428d-97bd-508631983393 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.522 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] VM Started (Lifecycle Event)
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.558 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.562 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435695.5219982, ba7aef8d-a028-428d-97bd-508631983393 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.563 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] VM Paused (Lifecycle Event)
Oct 02 20:08:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.584 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.588 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.605 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] During sync_power_state the instance has a pending task (spawning). Skip.
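
The "current DB power_state: 0, VM power_state: 3" pair decodes via nova's power-state constants (nova/compute/power_state.py): 0 is NOSTATE and 3 is PAUSED, so this is the transient libvirt pause during spawn, and the sync correctly skips it while the task_state is still spawning. For reference:

# Power-state constants as defined in nova/compute/power_state.py
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
print(POWER_STATES[0], '->', POWER_STATES[3])  # NOSTATE -> PAUSED
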
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.624 2 DEBUG nova.compute.manager [req-43070bb5-e996-4efe-9146-b7944259aabc req-1807ddab-e89e-48b9-89de-40ad557b2df6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received event network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.625 2 DEBUG oslo_concurrency.lockutils [req-43070bb5-e996-4efe-9146-b7944259aabc req-1807ddab-e89e-48b9-89de-40ad557b2df6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.625 2 DEBUG oslo_concurrency.lockutils [req-43070bb5-e996-4efe-9146-b7944259aabc req-1807ddab-e89e-48b9-89de-40ad557b2df6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.625 2 DEBUG oslo_concurrency.lockutils [req-43070bb5-e996-4efe-9146-b7944259aabc req-1807ddab-e89e-48b9-89de-40ad557b2df6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.625 2 DEBUG nova.compute.manager [req-43070bb5-e996-4efe-9146-b7944259aabc req-1807ddab-e89e-48b9-89de-40ad557b2df6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Processing event network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.626 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.631 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435695.63147, ba7aef8d-a028-428d-97bd-508631983393 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.632 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] VM Resumed (Lifecycle Event)
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.634 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.639 2 INFO nova.virt.libvirt.driver [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] Instance spawned successfully.
Oct 02 20:08:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.640 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.658 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.669 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.675 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.676 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.676 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.677 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.677 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.678 2 DEBUG nova.virt.libvirt.driver [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.712 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.762 2 INFO nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Took 11.19 seconds to spawn the instance on the hypervisor.
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.762 2 DEBUG nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.815 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.816 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.816 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.817 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.818 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.820 2 INFO nova.compute.manager [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Terminating instance
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.822 2 DEBUG nova.compute.manager [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.874 2 INFO nova.compute.manager [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Took 12.57 seconds to build instance.
Oct 02 20:08:15 compute-0 kernel: tap668a7aea-bc (unregistering): left promiscuous mode
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.903 2 DEBUG oslo_concurrency.lockutils [None req-85e398af-5f50-4e9f-9fbb-88da4f38385e e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:15 compute-0 NetworkManager[44968]: <info>  [1759435695.9067] device (tap668a7aea-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:08:15 compute-0 ovn_controller[88435]: 2025-10-02T20:08:15Z|00131|binding|INFO|Releasing lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 from this chassis (sb_readonly=0)
Oct 02 20:08:15 compute-0 ovn_controller[88435]: 2025-10-02T20:08:15Z|00132|binding|INFO|Setting lport 668a7aea-bc00-4cac-b1dd-b0786e76c474 down in Southbound
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:15 compute-0 ovn_controller[88435]: 2025-10-02T20:08:15Z|00133|binding|INFO|Removing iface tap668a7aea-bc ovn-installed in OVS
Oct 02 20:08:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:15.933 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:42:64 10.100.0.13'], port_security=['fa:16:3e:eb:42:64 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59c91fb-efec-4ddf-b699-e072223ea127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0db170bd1e464f2ea61c24a9079861a4', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f24334a9-c477-489f-956b-2cd2adaeee19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.218', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34cee90-562d-4e73-b869-f45c74e302ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=668a7aea-bc00-4cac-b1dd-b0786e76c474) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:15.934 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 668a7aea-bc00-4cac-b1dd-b0786e76c474 in datapath c59c91fb-efec-4ddf-b699-e072223ea127 unbound from our chassis
Oct 02 20:08:15 compute-0 nova_compute[355794]: 2025-10-02 20:08:15.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:15.938 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c59c91fb-efec-4ddf-b699-e072223ea127, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:08:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:15.939 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4117e00e-192a-455e-afe8-9fe435c09fe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:15.940 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 namespace which is not needed anymore
Oct 02 20:08:15 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 02 20:08:15 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000007.scope: Consumed 46.961s CPU time.
Oct 02 20:08:15 compute-0 systemd-machined[137646]: Machine qemu-10-instance-00000007 terminated.
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.062 2 INFO nova.virt.libvirt.driver [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Instance destroyed successfully.
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.063 2 DEBUG nova.objects.instance [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lazy-loading 'resources' on Instance uuid cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.080 2 DEBUG nova.virt.libvirt.vif [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-521568053',display_name='tempest-ServerActionsTestJSON-server-521568053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-521568053',id=7,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3+Ml3MDFRwjAbHlLVelOVaJFw9wMrAgiM3K5ECgv/8gOHV2c2HY+qo3hzkkazGL3NjANBQg+Uxykp7yYUSzraCqdB1dpSHggRBXiV5RbTjrArOXyRLYbqWS943JQegug==',key_name='tempest-keypair-1325070434',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:05:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0db170bd1e464f2ea61c24a9079861a4',ramdisk_id='',reservation_id='r-w750d1sy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-872820255',owner_user_name='tempest-ServerActionsTestJSON-872820255-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f962d436a03a4b70951908eb9f826d11',uuid=cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.080 2 DEBUG nova.network.os_vif_util [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converting VIF {"id": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "address": "fa:16:3e:eb:42:64", "network": {"id": "c59c91fb-efec-4ddf-b699-e072223ea127", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-226494039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0db170bd1e464f2ea61c24a9079861a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap668a7aea-bc", "ovs_interfaceid": "668a7aea-bc00-4cac-b1dd-b0786e76c474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.081 2 DEBUG nova.network.os_vif_util [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.082 2 DEBUG os_vif [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap668a7aea-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.089 2 INFO os_vif [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:42:64,bridge_name='br-int',has_traffic_filtering=True,id=668a7aea-bc00-4cac-b1dd-b0786e76c474,network=Network(c59c91fb-efec-4ddf-b699-e072223ea127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap668a7aea-bc')
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [NOTICE]   (453914) : haproxy version is 2.8.14-c23fe91
Oct 02 20:08:16 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [NOTICE]   (453914) : path to executable is /usr/sbin/haproxy
Oct 02 20:08:16 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [WARNING]  (453914) : Exiting Master process...
Oct 02 20:08:16 compute-0 ovn_controller[88435]: 2025-10-02T20:08:16Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:ca:09 10.100.0.6
Oct 02 20:08:16 compute-0 ovn_controller[88435]: 2025-10-02T20:08:16Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:ca:09 10.100.0.6
Oct 02 20:08:16 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [ALERT]    (453914) : Current worker (453916) exited with code 143 (Terminated)
Oct 02 20:08:16 compute-0 neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127[453910]: [WARNING]  (453914) : All workers exited. Exiting... (0)
Oct 02 20:08:16 compute-0 systemd[1]: libpod-9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9.scope: Deactivated successfully.
Oct 02 20:08:16 compute-0 podman[456439]: 2025-10-02 20:08:16.211957948 +0000 UTC m=+0.087873230 container died 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.228 2 DEBUG nova.compute.manager [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.228 2 DEBUG oslo_concurrency.lockutils [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.229 2 DEBUG oslo_concurrency.lockutils [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.229 2 DEBUG oslo_concurrency.lockutils [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.229 2 DEBUG nova.compute.manager [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.229 2 DEBUG nova.compute.manager [req-b77caddc-368d-46da-81bb-a3b19b36f6e8 req-fe88c249-537f-47f2-aff5-0470e729f201 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-unplugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9-userdata-shm.mount: Deactivated successfully.
Oct 02 20:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2edea184a0937526881ae6e3c691d6e61e3b7e2e863169a47af1c8eb310de6e-merged.mount: Deactivated successfully.
Oct 02 20:08:16 compute-0 podman[456439]: 2025-10-02 20:08:16.291351923 +0000 UTC m=+0.167267195 container cleanup 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:08:16 compute-0 systemd[1]: libpod-conmon-9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9.scope: Deactivated successfully.
Oct 02 20:08:16 compute-0 podman[456475]: 2025-10-02 20:08:16.396499388 +0000 UTC m=+0.072886455 container remove 9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.417 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d1085f34-71f5-4031-b055-ab49c79e8c80]: (4, ('Thu Oct  2 08:08:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 (9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9)\n9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9\nThu Oct  2 08:08:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 (9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9)\n9908f6a690995d1b3a4306bf11fedafa55777080a88920bf8ddd0987a02251c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.421 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[849064b7-759d-4825-9d25-9440fe1d2078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.422 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59c91fb-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 kernel: tapc59c91fb-e0: left promiscuous mode
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.441 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[063f30e0-974f-4426-be53-1b40cfbc21e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.460 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[278b0580-6e27-4e2c-a8fb-39fbd4954701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.462 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1dacd867-d4e5-4490-9eac-ffacbbc170c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.491 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[239c0067-7f2f-4dbf-8a84-fedddeef1f88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679983, 'reachable_time': 27537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 456490, 'error': None, 'target': 'ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dc59c91fb\x2defec\x2d4ddf\x2db699\x2de072223ea127.mount: Deactivated successfully.
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.498 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c59c91fb-efec-4ddf-b699-e072223ea127 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:08:16 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:16.498 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[24ec8e16-4483-43ca-a01e-4048b82f686a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:16 compute-0 ceph-mon[191910]: pgmap v1905: 321 pgs: 321 active+clean; 418 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.828 2 INFO nova.virt.libvirt.driver [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Deleting instance files /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_del
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.829 2 INFO nova.virt.libvirt.driver [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Deletion of /var/lib/nova/instances/cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9_del complete
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.897 2 INFO nova.compute.manager [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Took 1.07 seconds to destroy the instance on the hypervisor.
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.898 2 DEBUG oslo.service.loopingcall [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.898 2 DEBUG nova.compute.manager [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:08:16 compute-0 nova_compute[355794]: 2025-10-02 20:08:16.899 2 DEBUG nova.network.neutron [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:08:17 compute-0 sudo[456491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:17 compute-0 sudo[456491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:17 compute-0 sudo[456491]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:17 compute-0 sudo[456516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:08:17 compute-0 sudo[456516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:17 compute-0 sudo[456516]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:17 compute-0 sudo[456541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:17 compute-0 sudo[456541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:17 compute-0 sudo[456541]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:17 compute-0 sudo[456566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:08:17 compute-0 sudo[456566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 419 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 46 op/s
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.874 2 DEBUG nova.compute.manager [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received event network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.875 2 DEBUG oslo_concurrency.lockutils [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.876 2 DEBUG oslo_concurrency.lockutils [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.876 2 DEBUG oslo_concurrency.lockutils [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.876 2 DEBUG nova.compute.manager [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] No waiting events found dispatching network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:17 compute-0 nova_compute[355794]: 2025-10-02 20:08:17.877 2 WARNING nova.compute.manager [req-1f88efe3-aeb9-483c-b9dc-d5bf521663b9 req-c80aa381-369d-4e6a-9738-daf43ef8da54 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received unexpected event network-vif-plugged-d9c79914-e94a-4a4b-908a-c70b53a1a20f for instance with vm_state active and task_state None.
Oct 02 20:08:18 compute-0 sudo[456566]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 277e0f39-5ad6-462b-9033-fff6a0c24fed does not exist
Oct 02 20:08:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e62e8ddb-364c-4a70-802c-623da9b93136 does not exist
Oct 02 20:08:18 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d1e80303-2293-4154-9b63-0f6b3cebcd3a does not exist
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:08:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:08:18 compute-0 sudo[456622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:18 compute-0 sudo[456622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:18 compute-0 sudo[456622]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:18 compute-0 sudo[456647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:08:18 compute-0 sudo[456647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:18 compute-0 sudo[456647]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.523 2 DEBUG nova.compute.manager [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.523 2 DEBUG oslo_concurrency.lockutils [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.523 2 DEBUG oslo_concurrency.lockutils [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.523 2 DEBUG oslo_concurrency.lockutils [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.524 2 DEBUG nova.compute.manager [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] No waiting events found dispatching network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.524 2 WARNING nova.compute.manager [req-4ccc8bca-67a6-4e8c-9cce-7abae5a83765 req-f6169d84-50da-46cf-9259-2198f6e47e21 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received unexpected event network-vif-plugged-668a7aea-bc00-4cac-b1dd-b0786e76c474 for instance with vm_state active and task_state deleting.
Oct 02 20:08:18 compute-0 sudo[456672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:18 compute-0 sudo[456672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:18 compute-0 sudo[456672]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:18 compute-0 sudo[456697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:08:18 compute-0 sudo[456697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:18 compute-0 ceph-mon[191910]: pgmap v1906: 321 pgs: 321 active+clean; 419 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 46 op/s
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:08:18 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:08:18 compute-0 nova_compute[355794]: 2025-10-02 20:08:18.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.031 2 DEBUG nova.network.neutron [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.049 2 INFO nova.compute.manager [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Took 2.15 seconds to deallocate network for instance.
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.114 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.115 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.265 2 DEBUG oslo_concurrency.processutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.309289017 +0000 UTC m=+0.076179011 container create 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.280909558 +0000 UTC m=+0.047799572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:19 compute-0 systemd[1]: Started libpod-conmon-29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765.scope.
Oct 02 20:08:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.438031424 +0000 UTC m=+0.204921448 container init 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.453487242 +0000 UTC m=+0.220377236 container start 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.458827623 +0000 UTC m=+0.225717617 container attach 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:08:19 compute-0 serene_margulis[456777]: 167 167
Oct 02 20:08:19 compute-0 systemd[1]: libpod-29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765.scope: Deactivated successfully.
Oct 02 20:08:19 compute-0 conmon[456777]: conmon 29998f2f32748822fe4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765.scope/container/memory.events
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.472932385 +0000 UTC m=+0.239822379 container died 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bcff517202820bc0e3779ede0e4e859d4f472dac0cc8e21dd115d04adfa1618-merged.mount: Deactivated successfully.
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:19 compute-0 podman[456760]: 2025-10-02 20:08:19.525899833 +0000 UTC m=+0.292789827 container remove 29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:19 compute-0 systemd[1]: libpod-conmon-29998f2f32748822fe4c96c1e88c930f4a2b3b355301ea97340c0533546f2765.scope: Deactivated successfully.
Oct 02 20:08:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 393 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 797 KiB/s rd, 3.2 MiB/s wr, 112 op/s
Oct 02 20:08:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125154973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:19 compute-0 podman[456819]: 2025-10-02 20:08:19.830205983 +0000 UTC m=+0.089684428 container create bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:08:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4125154973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.850 2 DEBUG oslo_concurrency.processutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.861 2 DEBUG nova.compute.provider_tree [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:19 compute-0 podman[456819]: 2025-10-02 20:08:19.791187513 +0000 UTC m=+0.050665988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:19 compute-0 systemd[1]: Started libpod-conmon-bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d.scope.
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.891 2 DEBUG nova.scheduler.client.report [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.935 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:19 compute-0 podman[456819]: 2025-10-02 20:08:19.96729053 +0000 UTC m=+0.226768975 container init bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:08:19 compute-0 nova_compute[355794]: 2025-10-02 20:08:19.971 2 INFO nova.scheduler.client.report [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Deleted allocations for instance cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9
Oct 02 20:08:19 compute-0 podman[456819]: 2025-10-02 20:08:19.97944248 +0000 UTC m=+0.238920925 container start bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 02 20:08:19 compute-0 podman[456819]: 2025-10-02 20:08:19.989408523 +0000 UTC m=+0.248886968 container attach bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.093 2 DEBUG oslo_concurrency.lockutils [None req-28df379a-58fd-4b88-9341-5266b6f1f245 f962d436a03a4b70951908eb9f826d11 0db170bd1e464f2ea61c24a9079861a4 - - default default] Lock "cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:08:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982708695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:08:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:08:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1982708695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:08:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.640 2 DEBUG nova.compute.manager [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Received event network-vif-deleted-668a7aea-bc00-4cac-b1dd-b0786e76c474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.640 2 DEBUG nova.compute.manager [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received event network-changed-d9c79914-e94a-4a4b-908a-c70b53a1a20f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.641 2 DEBUG nova.compute.manager [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Refreshing instance network info cache due to event network-changed-d9c79914-e94a-4a4b-908a-c70b53a1a20f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.642 2 DEBUG oslo_concurrency.lockutils [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.642 2 DEBUG oslo_concurrency.lockutils [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.642 2 DEBUG nova.network.neutron [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Refreshing network info cache for port d9c79914-e94a-4a4b-908a-c70b53a1a20f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.830 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.831 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.831 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "ba7aef8d-a028-428d-97bd-508631983393-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.832 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.832 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.835 2 INFO nova.compute.manager [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Terminating instance
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.837 2 DEBUG nova.compute.manager [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:08:20 compute-0 ceph-mon[191910]: pgmap v1907: 321 pgs: 321 active+clean; 393 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 797 KiB/s rd, 3.2 MiB/s wr, 112 op/s
Oct 02 20:08:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1982708695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:08:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1982708695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:08:20 compute-0 kernel: tapd9c79914-e9 (unregistering): left promiscuous mode
Oct 02 20:08:20 compute-0 NetworkManager[44968]: <info>  [1759435700.9377] device (tapd9c79914-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:08:20 compute-0 ovn_controller[88435]: 2025-10-02T20:08:20Z|00134|binding|INFO|Releasing lport d9c79914-e94a-4a4b-908a-c70b53a1a20f from this chassis (sb_readonly=0)
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:20 compute-0 ovn_controller[88435]: 2025-10-02T20:08:20Z|00135|binding|INFO|Setting lport d9c79914-e94a-4a4b-908a-c70b53a1a20f down in Southbound
Oct 02 20:08:20 compute-0 ovn_controller[88435]: 2025-10-02T20:08:20Z|00136|binding|INFO|Removing iface tapd9c79914-e9 ovn-installed in OVS
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:20 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:20.971 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:6a:85 10.100.0.14'], port_security=['fa:16:3e:1f:6a:85 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ba7aef8d-a028-428d-97bd-508631983393', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1115b32054db477a9f511992d206db4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5621b4be-c82f-4f5a-8cfd-388d865a6477', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2763136a-7895-4d99-b111-862304d5703e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=d9c79914-e94a-4a4b-908a-c70b53a1a20f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:20 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:20.973 285790 INFO neutron.agent.ovn.metadata.agent [-] Port d9c79914-e94a-4a4b-908a-c70b53a1a20f in datapath e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 unbound from our chassis
Oct 02 20:08:20 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:20.977 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:08:20 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:20.978 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[0b09086f-0b04-4b72-9a17-8d3f7ec7f6a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:20 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:20.979 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 namespace which is not needed anymore
Oct 02 20:08:20 compute-0 nova_compute[355794]: 2025-10-02 20:08:20.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 02 20:08:21 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 6.450s CPU time.
Oct 02 20:08:21 compute-0 systemd-machined[137646]: Machine qemu-13-instance-0000000c terminated.
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.090 2 INFO nova.virt.libvirt.driver [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] Instance destroyed successfully.
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.090 2 DEBUG nova.objects.instance [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lazy-loading 'resources' on Instance uuid ba7aef8d-a028-428d-97bd-508631983393 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.104 2 DEBUG nova.virt.libvirt.vif [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:08:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1522053352',display_name='tempest-ServersTestManualDisk-server-1522053352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1522053352',id=12,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlQFSF4JLGxUNfpcWEs3v88IQ3MYtOfPgcyY6C3M82987GeHPrUblgTRdFc77f2xNa0PKburvm4ibYOH9NMNz1TlDQ8SsNHGTHZNV7ZZVJs98GuS8aopMWtaRUvDSxNxg==',key_name='tempest-keypair-271875864',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:08:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1115b32054db477a9f511992d206db4d',ramdisk_id='',reservation_id='r-8vodv0cp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1985524993',owner_user_name='tempest-ServersTestManualDisk-1985524993-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:08:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8fb0356d35d4034be5df2acf0c1b9b8',uuid=ba7aef8d-a028-428d-97bd-508631983393,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.104 2 DEBUG nova.network.os_vif_util [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converting VIF {"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.105 2 DEBUG nova.network.os_vif_util [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.105 2 DEBUG os_vif [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9c79914-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.119 2 INFO os_vif [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:6a:85,bridge_name='br-int',has_traffic_filtering=True,id=d9c79914-e94a-4a4b-908a-c70b53a1a20f,network=Network(e2b83389-7f6e-4c8a-aff7-0b0ca66e1004),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9c79914-e9')
Oct 02 20:08:21 compute-0 strange_bhabha[456837]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:08:21 compute-0 strange_bhabha[456837]: --> relative data size: 1.0
Oct 02 20:08:21 compute-0 strange_bhabha[456837]: --> All data devices are unavailable
Oct 02 20:08:21 compute-0 systemd[1]: libpod-bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d.scope: Deactivated successfully.
Oct 02 20:08:21 compute-0 systemd[1]: libpod-bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d.scope: Consumed 1.118s CPU time.
Oct 02 20:08:21 compute-0 podman[456819]: 2025-10-02 20:08:21.214773017 +0000 UTC m=+1.474251432 container died bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 20:08:21 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [NOTICE]   (456391) : haproxy version is 2.8.14-c23fe91
Oct 02 20:08:21 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [NOTICE]   (456391) : path to executable is /usr/sbin/haproxy
Oct 02 20:08:21 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [WARNING]  (456391) : Exiting Master process...
Oct 02 20:08:21 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [ALERT]    (456391) : Current worker (456394) exited with code 143 (Terminated)
Oct 02 20:08:21 compute-0 neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004[456377]: [WARNING]  (456391) : All workers exited. Exiting... (0)
Oct 02 20:08:21 compute-0 systemd[1]: libpod-baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437.scope: Deactivated successfully.
Oct 02 20:08:21 compute-0 podman[456912]: 2025-10-02 20:08:21.239104749 +0000 UTC m=+0.079993212 container died baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bee77ca0117f110d0e1a2542c7e9662d47b517c83abc4cc3738e40260a2fcf3-merged.mount: Deactivated successfully.
Oct 02 20:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437-userdata-shm.mount: Deactivated successfully.
Oct 02 20:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a05be34e70030b4fdce32a658e8bc0273fedc12e432a0dbdd2c40eb1fb4e5a-merged.mount: Deactivated successfully.
Oct 02 20:08:21 compute-0 podman[456819]: 2025-10-02 20:08:21.301750252 +0000 UTC m=+1.561228667 container remove bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 20:08:21 compute-0 podman[456912]: 2025-10-02 20:08:21.319205103 +0000 UTC m=+0.160093556 container cleanup baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 20:08:21 compute-0 systemd[1]: libpod-conmon-bc4069e357f2d2ee8ca9bbd319b3bbf9a2e8fde10f96c533958e77e8371b6f2d.scope: Deactivated successfully.
Oct 02 20:08:21 compute-0 sudo[456697]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:21 compute-0 systemd[1]: libpod-conmon-baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437.scope: Deactivated successfully.
Oct 02 20:08:21 compute-0 podman[456959]: 2025-10-02 20:08:21.427338826 +0000 UTC m=+0.067091961 container remove baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.440 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce36f01-8f70-4094-8c20-7b86bf127469]: (4, ('Thu Oct  2 08:08:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 (baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437)\nbaa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437\nThu Oct  2 08:08:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 (baa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437)\nbaa5deeccd27a974a175725b1af9a5de81e0218fb8d1fa2e2144e90b3f710437\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.443 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1241ddd8-ff1c-4a87-ae20-b0d757e34cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.444 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2b83389-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:21 compute-0 kernel: tape2b83389-70: left promiscuous mode
Oct 02 20:08:21 compute-0 sudo[456965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:21 compute-0 sudo[456965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:21 compute-0 sudo[456965]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.478 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3727a24f-6241-4c8c-8996-802a60627755]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.503 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b4401f64-a031-4890-beef-97c6ff711ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.505 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[542723f0-8c73-4947-bd52-d0c3182c8773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.521 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e94cb9-664d-4c89-b4a6-2c434e7b7c37]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686583, 'reachable_time': 40095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 457016, 'error': None, 'target': 'ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.525 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e2b83389-7f6e-4c8a-aff7-0b0ca66e1004 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:08:21 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:21.525 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0cbd4e-d512-434b-8314-8967ecc30aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:21 compute-0 systemd[1]: run-netns-ovnmeta\x2de2b83389\x2d7f6e\x2d4c8a\x2daff7\x2d0b0ca66e1004.mount: Deactivated successfully.
Oct 02 20:08:21 compute-0 sudo[456994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:08:21 compute-0 sudo[456994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:21 compute-0 sudo[456994]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 370 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 145 op/s
Oct 02 20:08:21 compute-0 sudo[457023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:21 compute-0 sudo[457023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:21 compute-0 sudo[457023]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:21 compute-0 sudo[457048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:08:21 compute-0 sudo[457048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.907 2 INFO nova.virt.libvirt.driver [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Deleting instance files /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393_del
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.909 2 INFO nova.virt.libvirt.driver [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Deletion of /var/lib/nova/instances/ba7aef8d-a028-428d-97bd-508631983393_del complete
Oct 02 20:08:21 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.997 2 INFO nova.compute.manager [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Took 1.16 seconds to destroy the instance on the hypervisor.
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.998 2 DEBUG oslo.service.loopingcall [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.998 2 DEBUG nova.compute.manager [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:21.998 2 DEBUG nova.network.neutron [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.31191081 +0000 UTC m=+0.067772029 container create b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.282456863 +0000 UTC m=+0.038318082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:22 compute-0 systemd[1]: Started libpod-conmon-b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3.scope.
Oct 02 20:08:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.428872846 +0000 UTC m=+0.184734065 container init b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.440294157 +0000 UTC m=+0.196155366 container start b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.446492441 +0000 UTC m=+0.202353650 container attach b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 20:08:22 compute-0 thirsty_euclid[457128]: 167 167
Oct 02 20:08:22 compute-0 systemd[1]: libpod-b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3.scope: Deactivated successfully.
Oct 02 20:08:22 compute-0 conmon[457128]: conmon b9dba8a6bdbce9b853b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3.scope/container/memory.events
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.451212365 +0000 UTC m=+0.207073564 container died b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 20:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-461e0fcf8be83076fc7a4fb66a7d107220776255c76e68a0dc7bd2c20ee1de7b-merged.mount: Deactivated successfully.
Oct 02 20:08:22 compute-0 podman[457111]: 2025-10-02 20:08:22.511088926 +0000 UTC m=+0.266950105 container remove b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_euclid, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 20:08:22 compute-0 systemd[1]: libpod-conmon-b9dba8a6bdbce9b853b9a84f33f2e228d253e36d4cfc9e626a989e2829e877b3.scope: Deactivated successfully.
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:22.581 2 DEBUG nova.network.neutron [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updated VIF entry in instance network info cache for port d9c79914-e94a-4a4b-908a-c70b53a1a20f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:22.582 2 DEBUG nova.network.neutron [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updating instance_info_cache with network_info: [{"id": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "address": "fa:16:3e:1f:6a:85", "network": {"id": "e2b83389-7f6e-4c8a-aff7-0b0ca66e1004", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1531328621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1115b32054db477a9f511992d206db4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9c79914-e9", "ovs_interfaceid": "d9c79914-e94a-4a4b-908a-c70b53a1a20f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:22 compute-0 nova_compute[355794]: 2025-10-02 20:08:22.618 2 DEBUG oslo_concurrency.lockutils [req-5da094a0-96c1-4882-a53a-7fb083a64afe req-67684540-2ebe-4211-a90f-e158b68b0a24 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-ba7aef8d-a028-428d-97bd-508631983393" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:22 compute-0 podman[457151]: 2025-10-02 20:08:22.769976077 +0000 UTC m=+0.067187494 container create f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:08:22 compute-0 podman[457151]: 2025-10-02 20:08:22.744987138 +0000 UTC m=+0.042198525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:22 compute-0 systemd[1]: Started libpod-conmon-f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542.scope.
Oct 02 20:08:22 compute-0 ceph-mon[191910]: pgmap v1908: 321 pgs: 321 active+clean; 370 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 145 op/s
Oct 02 20:08:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151f9540ff18ddd727cf4ec1aa57eb0a6e617a9b41b8002ea14072f1b7b9db3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151f9540ff18ddd727cf4ec1aa57eb0a6e617a9b41b8002ea14072f1b7b9db3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151f9540ff18ddd727cf4ec1aa57eb0a6e617a9b41b8002ea14072f1b7b9db3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151f9540ff18ddd727cf4ec1aa57eb0a6e617a9b41b8002ea14072f1b7b9db3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:22 compute-0 podman[457151]: 2025-10-02 20:08:22.948315693 +0000 UTC m=+0.245527110 container init f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 20:08:22 compute-0 podman[457151]: 2025-10-02 20:08:22.975966953 +0000 UTC m=+0.273178340 container start f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:08:22 compute-0 podman[457151]: 2025-10-02 20:08:22.989996853 +0000 UTC m=+0.287208270 container attach f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.338 2 DEBUG nova.network.neutron [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.365 2 INFO nova.compute.manager [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] Took 1.37 seconds to deallocate network for instance.
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.404 2 DEBUG nova.compute.manager [req-d00c2e01-ad4c-4064-8eea-23f8a17e5f91 req-90523825-07f2-459c-be02-d1821ec36237 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: ba7aef8d-a028-428d-97bd-508631983393] Received event network-vif-deleted-d9c79914-e94a-4a4b-908a-c70b53a1a20f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.416 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.416 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
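
The Acquiring/acquired pair above is oslo.concurrency's named-lock pattern: the resource tracker serializes all bookkeeping on the lock "compute_resources", and the waited/held times printed by these DEBUG lines are its accounting. A sketch of the same serialization with the public decorator (assumes oslo.concurrency is installed; the lock name is the one from the log):

    from oslo_concurrency import lockutils

    # Concurrent callers of update_usage() block until the named lock is
    # free, which is what the waited 0.000s / held 0.799s lines measure.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass  # resource bookkeeping would happen here

    update_usage()
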
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:23 compute-0 nova_compute[355794]: 2025-10-02 20:08:23.588 2 DEBUG oslo_concurrency.processutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
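
Here, mid-way through the resource-tracker update, Nova shells out to the ceph CLI to size its storage backend. A standalone reproduction of the same query with the stdlib (assumes the ceph client, /etc/ceph/ceph.conf, and the client.openstack keyring are present; the field names are the standard `ceph df --format=json` layout):

    import json
    import subprocess

    # Same command line the DEBUG line above shows Nova running.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
          f"{stats['total_bytes'] / 2**30:.1f} GiB")
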
Oct 02 20:08:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 364 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 02 20:08:23 compute-0 intelligent_wright[457167]: {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     "0": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "devices": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "/dev/loop3"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             ],
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_name": "ceph_lv0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_size": "21470642176",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "name": "ceph_lv0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "tags": {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_name": "ceph",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.crush_device_class": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.encrypted": "0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_id": "0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.vdo": "0"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             },
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "vg_name": "ceph_vg0"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         }
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     ],
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     "1": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "devices": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "/dev/loop4"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             ],
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_name": "ceph_lv1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_size": "21470642176",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "name": "ceph_lv1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "tags": {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_name": "ceph",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.crush_device_class": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.encrypted": "0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_id": "1",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.vdo": "0"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             },
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "vg_name": "ceph_vg1"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         }
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     ],
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     "2": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "devices": [
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "/dev/loop5"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             ],
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_name": "ceph_lv2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_size": "21470642176",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "name": "ceph_lv2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "tags": {
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.cluster_name": "ceph",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.crush_device_class": "",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.encrypted": "0",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osd_id": "2",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:                 "ceph.vdo": "0"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             },
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "type": "block",
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:             "vg_name": "ceph_vg2"
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:         }
Oct 02 20:08:23 compute-0 intelligent_wright[457167]:     ]
Oct 02 20:08:23 compute-0 intelligent_wright[457167]: }
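
The JSON block above is `ceph-volume lvm list --format json` output relayed through a one-shot cephadm container: a map of OSD id to its backing logical volume, with the binding duplicated between the flat lv_tags string and the parsed tags object. A sketch reducing it to an OSD-to-device table (the two-entry literal is trimmed from the output above to keep the sketch self-contained):

    import json

    blob = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48"}}],
     "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
            "tags": {"ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd"}}]}
    """

    for osd_id, lvs in sorted(json.loads(blob).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(on {','.join(lv['devices'])}, "
                  f"fsid {lv['tags']['ceph.osd_fsid']})")
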
Oct 02 20:08:23 compute-0 systemd[1]: libpod-f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542.scope: Deactivated successfully.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.901850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703901923, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 457, "num_deletes": 251, "total_data_size": 364668, "memory_usage": 373496, "flush_reason": "Manual Compaction"}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703912919, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 361234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38945, "largest_seqno": 39401, "table_properties": {"data_size": 358576, "index_size": 694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6499, "raw_average_key_size": 19, "raw_value_size": 353277, "raw_average_value_size": 1036, "num_data_blocks": 31, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435680, "oldest_key_time": 1759435680, "file_creation_time": 1759435703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 11142 microseconds, and 2902 cpu microseconds.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.912988) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 361234 bytes OK
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.913013) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.915757) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.915776) EVENT_LOG_v1 {"time_micros": 1759435703915770, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.915802) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 361890, prev total WAL file size 361890, number of live WAL files 2.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.917593) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(352KB)], [89(8238KB)]
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703917661, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8797036, "oldest_snapshot_seqno": -1}
Oct 02 20:08:23 compute-0 podman[457197]: 2025-10-02 20:08:23.931607899 +0000 UTC m=+0.044880136 container died f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5611 keys, 7039411 bytes, temperature: kUnknown
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703958857, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 7039411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7004988, "index_size": 19273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 143788, "raw_average_key_size": 25, "raw_value_size": 6906520, "raw_average_value_size": 1230, "num_data_blocks": 785, "num_entries": 5611, "num_filter_entries": 5611, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.959127) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 7039411 bytes
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.964684) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.0 rd, 170.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.0 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(43.8) write-amplify(19.5) OK, records in: 6122, records dropped: 511 output_compression: NoCompression
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.964714) EVENT_LOG_v1 {"time_micros": 1759435703964702, "job": 52, "event": "compaction_finished", "compaction_time_micros": 41309, "compaction_time_cpu_micros": 19585, "output_level": 6, "num_output_files": 1, "total_output_size": 7039411, "num_input_records": 6122, "num_output_records": 5611, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703968289, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435703969501, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.916664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.969637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.969644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.969646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.969648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:08:23 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:08:23.969650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
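
The JOB 52 summary above reports write-amplify(19.5) and read-write-amplify(43.8), and both follow directly from the byte counts in the same lines: a 361234-byte L0 flush (table #91), an 8797036-byte total compaction input (L0 plus the existing L6 table #89), and a 7039411-byte output table (#92). Checking the arithmetic:

    # Byte counts copied from the JOB 51/52 lines above.
    l0_in = 361_234               # flush table #91
    total_in = 8_797_036          # compaction input_data_size (L0 + L6)
    l6_in = total_in - l0_in      # existing L6 table #89
    out = 7_039_411               # compacted table #92

    print(round(out / l0_in, 1))                    # write-amplify   -> 19.5
    print(round((l0_in + l6_in + out) / l0_in, 1))  # rw-amplify      -> 43.8
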
Oct 02 20:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7151f9540ff18ddd727cf4ec1aa57eb0a6e617a9b41b8002ea14072f1b7b9db3-merged.mount: Deactivated successfully.
Oct 02 20:08:24 compute-0 podman[457196]: 2025-10-02 20:08:24.027726515 +0000 UTC m=+0.112396377 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:08:24 compute-0 podman[457203]: 2025-10-02 20:08:24.053157126 +0000 UTC m=+0.141942146 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
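
Both health_status=healthy events above come from podman's built-in healthcheck timers running the /openstack/healthcheck scripts mounted into each container. A hedged way to read the same status back out of band (podman's inspect schema mirrors Docker's here, but the health key only exists for containers that define a healthcheck and its exact name has shifted across podman versions, so the sketch tries both):

    import json
    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute"):
        raw = subprocess.check_output(["podman", "inspect", name])
        state = json.loads(raw)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(name, health.get("Status", "no healthcheck"))
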
Oct 02 20:08:24 compute-0 podman[457197]: 2025-10-02 20:08:24.069424775 +0000 UTC m=+0.182697012 container remove f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:08:24 compute-0 systemd[1]: libpod-conmon-f1b2dae4bebba7309be85bd4f9c25b93013665f611b04745d873a9de6c45d542.scope: Deactivated successfully.
Oct 02 20:08:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109952095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:24 compute-0 sudo[457048]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.128 2 DEBUG oslo_concurrency.processutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.136 2 DEBUG nova.compute.provider_tree [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.158 2 DEBUG nova.scheduler.client.report [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.215 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
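
The inventory dict above is what the resource tracker pushes to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. The same computation, with the literal trimmed to the fields that matter here:

    # Inventory values copied from the report line above (min_unit, max_unit
    # and step_size omitted; they constrain request shape, not capacity).
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
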
Oct 02 20:08:24 compute-0 sudo[457251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:24 compute-0 sudo[457251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:24 compute-0 sudo[457251]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.258 2 INFO nova.scheduler.client.report [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Deleted allocations for instance ba7aef8d-a028-428d-97bd-508631983393
Oct 02 20:08:24 compute-0 sudo[457277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.347 2 DEBUG oslo_concurrency.lockutils [None req-dd81c528-7010-492f-917a-8d36c24f0779 e8fb0356d35d4034be5df2acf0c1b9b8 1115b32054db477a9f511992d206db4d - - default default] Lock "ba7aef8d-a028-428d-97bd-508631983393" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:24 compute-0 sudo[457277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:24 compute-0 sudo[457277]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:24 compute-0 sudo[457302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:24 compute-0 sudo[457302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:24 compute-0 sudo[457302]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:24 compute-0 nova_compute[355794]: 2025-10-02 20:08:24.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:24 compute-0 sudo[457327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:08:24 compute-0 sudo[457327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:24 compute-0 ceph-mon[191910]: pgmap v1909: 321 pgs: 321 active+clean; 364 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct 02 20:08:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1109952095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.188920246 +0000 UTC m=+0.115982022 container create 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.130313989 +0000 UTC m=+0.057375825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:25 compute-0 systemd[1]: Started libpod-conmon-003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d.scope.
Oct 02 20:08:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.452128711 +0000 UTC m=+0.379190537 container init 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.474369118 +0000 UTC m=+0.401430894 container start 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 20:08:25 compute-0 quirky_khorana[457404]: 167 167
Oct 02 20:08:25 compute-0 systemd[1]: libpod-003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d.scope: Deactivated successfully.
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.490696819 +0000 UTC m=+0.417758655 container attach 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.491368577 +0000 UTC m=+0.418430353 container died 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:08:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a31a03e01905a0bf921325978e29aaee826df1225fef4bdd76a8a30a738a83-merged.mount: Deactivated successfully.
Oct 02 20:08:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 352 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 02 20:08:25 compute-0 podman[457389]: 2025-10-02 20:08:25.698832741 +0000 UTC m=+0.625894507 container remove 003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 20:08:25 compute-0 systemd[1]: libpod-conmon-003b52986bcee2f8403108b878bfe76c52fb9acc3a68f8526eb52a5b49d7998d.scope: Deactivated successfully.
Oct 02 20:08:25 compute-0 ceph-mon[191910]: pgmap v1910: 321 pgs: 321 active+clean; 352 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct 02 20:08:26 compute-0 podman[457426]: 2025-10-02 20:08:25.999608877 +0000 UTC m=+0.080789602 container create 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:08:26 compute-0 podman[457426]: 2025-10-02 20:08:25.964620074 +0000 UTC m=+0.045800839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:08:26 compute-0 systemd[1]: Started libpod-conmon-0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242.scope.
Oct 02 20:08:26 compute-0 nova_compute[355794]: 2025-10-02 20:08:26.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135a5434f64a0320261495f20341328c4c938068aa0375933fe72923ef65123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135a5434f64a0320261495f20341328c4c938068aa0375933fe72923ef65123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135a5434f64a0320261495f20341328c4c938068aa0375933fe72923ef65123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135a5434f64a0320261495f20341328c4c938068aa0375933fe72923ef65123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:08:26 compute-0 podman[457426]: 2025-10-02 20:08:26.207307898 +0000 UTC m=+0.288488693 container init 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:08:26 compute-0 podman[457426]: 2025-10-02 20:08:26.233027967 +0000 UTC m=+0.314208712 container start 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:08:26 compute-0 ovn_controller[88435]: 2025-10-02T20:08:26Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:51:26 10.100.0.5
Oct 02 20:08:26 compute-0 ovn_controller[88435]: 2025-10-02T20:08:26Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:51:26 10.100.0.5
Oct 02 20:08:26 compute-0 podman[457426]: 2025-10-02 20:08:26.279435231 +0000 UTC m=+0.360615966 container attach 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:08:27 compute-0 nifty_joliot[457440]: {
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_id": 1,
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "type": "bluestore"
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     },
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_id": 2,
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "type": "bluestore"
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     },
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_id": 0,
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:         "type": "bluestore"
Oct 02 20:08:27 compute-0 nifty_joliot[457440]:     }
Oct 02 20:08:27 compute-0 nifty_joliot[457440]: }
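
Unlike the lvm listing earlier, the `ceph-volume raw list` output above is keyed by OSD UUID rather than OSD id, and reports the activated device-mapper path instead of the LV path. A sketch inverting it back into an id-ordered view (literal trimmed from the output above):

    import json

    blob = """
    {"82844b2c-c78f-4ec2-a159-b058e47d1cbd":
        {"device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1},
     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48":
        {"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0}}
    """

    raw = json.loads(blob)
    for uuid, osd in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']} {osd['device']} ({uuid})")
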
Oct 02 20:08:27 compute-0 systemd[1]: libpod-0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242.scope: Deactivated successfully.
Oct 02 20:08:27 compute-0 podman[457426]: 2025-10-02 20:08:27.468601889 +0000 UTC m=+1.549782614 container died 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:08:27 compute-0 systemd[1]: libpod-0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242.scope: Consumed 1.185s CPU time.
Oct 02 20:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e135a5434f64a0320261495f20341328c4c938068aa0375933fe72923ef65123-merged.mount: Deactivated successfully.
Oct 02 20:08:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 340 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 222 op/s
Oct 02 20:08:27 compute-0 podman[457426]: 2025-10-02 20:08:27.751062493 +0000 UTC m=+1.832243218 container remove 0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:08:27 compute-0 systemd[1]: libpod-conmon-0f511373e15246c837f5ff78d97f48efa93d2f3457d3a6ab5053756efcd4a242.scope: Deactivated successfully.
Oct 02 20:08:27 compute-0 sudo[457327]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:08:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:08:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 19c71cbb-0dd7-4957-b3fe-147621cc6376 does not exist
Oct 02 20:08:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e41bc243-ad7a-4f35-979f-3b9f5c98abd7 does not exist
Oct 02 20:08:28 compute-0 sudo[457487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:08:28 compute-0 sudo[457487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:28 compute-0 sudo[457487]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:28 compute-0 sudo[457512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:08:28 compute-0 sudo[457512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:08:28 compute-0 sudo[457512]: pam_unix(sudo:session): session closed for user root
Oct 02 20:08:28 compute-0 ceph-mon[191910]: pgmap v1911: 321 pgs: 321 active+clean; 340 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 222 op/s
Oct 02 20:08:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:08:29 compute-0 nova_compute[355794]: 2025-10-02 20:08:29.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 353 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 231 op/s
Oct 02 20:08:29 compute-0 ovn_controller[88435]: 2025-10-02T20:08:29Z|00137|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:08:29 compute-0 ovn_controller[88435]: 2025-10-02T20:08:29Z|00138|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:08:29 compute-0 ovn_controller[88435]: 2025-10-02T20:08:29Z|00139|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:08:29 compute-0 podman[157186]: time="2025-10-02T20:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:08:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48732 "" "Go-http-client/1.1"
Oct 02 20:08:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10021 "" "Go-http-client/1.1"
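Those two GETs are answered by podman's libpod REST API over its local unix socket. A minimal Python sketch of the same containers/json query, using only the standard library; the socket path /run/podman/podman.sock is the usual root default and is an assumption here, the service in this log may listen elsewhere:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    # Assumed default root socket path for the podman API service.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")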
Oct 02 20:08:29 compute-0 nova_compute[355794]: 2025-10-02 20:08:29.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:30 compute-0 nova_compute[355794]: 2025-10-02 20:08:30.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:30 compute-0 ceph-mon[191910]: pgmap v1912: 321 pgs: 321 active+clean; 353 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 231 op/s
Oct 02 20:08:31 compute-0 nova_compute[355794]: 2025-10-02 20:08:31.058 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435696.0557125, cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:31 compute-0 nova_compute[355794]: 2025-10-02 20:08:31.059 2 INFO nova.compute.manager [-] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] VM Stopped (Lifecycle Event)
Oct 02 20:08:31 compute-0 nova_compute[355794]: 2025-10-02 20:08:31.084 2 DEBUG nova.compute.manager [None req-69a8929c-e9f3-4754-9518-94a76ef5b59e - - - - - -] [instance: cc92ea21-c529-4a4d-b4dd-39f2ec3a1db9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:31 compute-0 nova_compute[355794]: 2025-10-02 20:08:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:31 compute-0 openstack_network_exporter[372736]: ERROR   20:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:08:31 compute-0 openstack_network_exporter[372736]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:08:31 compute-0 openstack_network_exporter[372736]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:08:31 compute-0 openstack_network_exporter[372736]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:08:31 compute-0 openstack_network_exporter[372736]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:08:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 20:08:31 compute-0 podman[457538]: 2025-10-02 20:08:31.724319017 +0000 UTC m=+0.129211270 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, architecture=x86_64, distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 20:08:31 compute-0 podman[457537]: 2025-10-02 20:08:31.765192296 +0000 UTC m=+0.176375465 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct 02 20:08:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:32.322 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:32.324 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:32.326 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:32 compute-0 nova_compute[355794]: 2025-10-02 20:08:32.845 2 INFO nova.compute.manager [None req-bdebe8c5-c26f-4cb2-89b2-6dfb967d0059 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Get console output
Oct 02 20:08:32 compute-0 nova_compute[355794]: 2025-10-02 20:08:32.857 5500 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 20:08:32 compute-0 ceph-mon[191910]: pgmap v1913: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 20:08:32 compute-0 ovn_controller[88435]: 2025-10-02T20:08:32Z|00140|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:08:32 compute-0 ovn_controller[88435]: 2025-10-02T20:08:32Z|00141|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:08:32 compute-0 ovn_controller[88435]: 2025-10-02T20:08:32Z|00142|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:08:32 compute-0 nova_compute[355794]: 2025-10-02 20:08:32.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.352 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.354 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.354 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.356 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.356 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
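The Acquiring/acquired/released DEBUG triplets above are emitted by oslo.concurrency's lockutils wrapper (the "inner" named in each entry). A minimal sketch of the same pattern; the lock name mirrors the log, but the decorated function is illustrative, not Nova's actual code:

    from oslo_concurrency import lockutils

    # Calling this logs 'Acquiring lock "..." by "..."', then
    # 'Lock "..." acquired ... :: waited Ns', and on exit
    # 'Lock "..." "released" ... :: held Ns', as in the entries above.
    @lockutils.synchronized('c942a9bd-3760-43df-964d-8aa0e8710a3d-events')
    def _clear_events():
        # critical section guarded by the per-instance events lock
        pass

    _clear_events()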
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.358 2 INFO nova.compute.manager [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Terminating instance
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.360 2 DEBUG nova.compute.manager [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:08:33 compute-0 kernel: tap4adcdafc-fb (unregistering): left promiscuous mode
Oct 02 20:08:33 compute-0 NetworkManager[44968]: <info>  [1759435713.5175] device (tap4adcdafc-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:08:33 compute-0 ovn_controller[88435]: 2025-10-02T20:08:33Z|00143|binding|INFO|Releasing lport 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d from this chassis (sb_readonly=0)
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 ovn_controller[88435]: 2025-10-02T20:08:33Z|00144|binding|INFO|Setting lport 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d down in Southbound
Oct 02 20:08:33 compute-0 ovn_controller[88435]: 2025-10-02T20:08:33Z|00145|binding|INFO|Removing iface tap4adcdafc-fb ovn-installed in OVS
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.553 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:51:26 10.100.0.5'], port_security=['fa:16:3e:93:51:26 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c942a9bd-3760-43df-964d-8aa0e8710a3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7c52835a9494ea98fd26390771eb77f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad588f62-678c-4208-b626-55393ac900c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0fe3c-2477-4bd1-a279-06ccc23b46bf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.555 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 4adcdafc-fb12-4e7c-9f7a-f2e6d691970d in datapath aefd878a-4767-48ff-8dcb-ccb5b8fcb84b unbound from our chassis
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.560 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aefd878a-4767-48ff-8dcb-ccb5b8fcb84b
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.588 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[c19d3d48-43b3-4f76-97d3-d0541f20152c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 20:08:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 42.495s CPU time.
Oct 02 20:08:33 compute-0 systemd-machined[137646]: Machine qemu-12-instance-0000000b terminated.
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.625 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[697e971c-c414-4b3f-8a86-f84106067ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.628 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[1053340c-59fc-497a-8eaa-2e590f6e5981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.662 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c8eb1f-68b9-4250-8075-3012fde2ba08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:08:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.686 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e9604163-6e62-43a9-8bea-4a43b1338609]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefd878a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:f4:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676151, 'reachable_time': 26982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 457627, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 podman[457576]: 2025-10-02 20:08:33.704460317 +0000 UTC m=+0.144419221 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.707 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1008b888-aec6-416c-9c2a-0d71b7e42bbe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaefd878a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676164, 'tstamp': 676164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 457638, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaefd878a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676168, 'tstamp': 676168}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 457638, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.709 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefd878a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.719 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaefd878a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.720 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.720 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaefd878a-40, col_values=(('external_ids', {'iface-id': 'cdbc9f7e-e502-4e46-9d35-398a11c2a99d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:33 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:33.720 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:33 compute-0 podman[457577]: 2025-10-02 20:08:33.724051414 +0000 UTC m=+0.142446179 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 20:08:33 compute-0 podman[457578]: 2025-10-02 20:08:33.744493774 +0000 UTC m=+0.155844503 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.808 2 INFO nova.virt.libvirt.driver [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Instance destroyed successfully.
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.809 2 DEBUG nova.objects.instance [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'resources' on Instance uuid c942a9bd-3760-43df-964d-8aa0e8710a3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.832 2 DEBUG nova.virt.libvirt.vif [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1118171358',display_name='tempest-TestNetworkBasicOps-server-1118171358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1118171358',id=11,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAk35KZ7CQK6+sXlwHKd132rUriO0VfU5GtRYC/4ZLUBOEnyPkd6bJxXv81TUMHDJORzY0bjQglnRzFjcurkWs8ue5nit6tRiThY/8NrD3xM1QdaVcCnCUr0kLKeT79Z0g==',key_name='tempest-TestNetworkBasicOps-584519399',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:07:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-hfnhxdbw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:07:46Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=c942a9bd-3760-43df-964d-8aa0e8710a3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.832 2 DEBUG nova.network.os_vif_util [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "address": "fa:16:3e:93:51:26", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4adcdafc-fb", "ovs_interfaceid": "4adcdafc-fb12-4e7c-9f7a-f2e6d691970d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.833 2 DEBUG nova.network.os_vif_util [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.834 2 DEBUG os_vif [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4adcdafc-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:33 compute-0 nova_compute[355794]: 2025-10-02 20:08:33.845 2 INFO os_vif [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:51:26,bridge_name='br-int',has_traffic_filtering=True,id=4adcdafc-fb12-4e7c-9f7a-f2e6d691970d,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4adcdafc-fb')
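The DelPortCommand in the unplug path above is ovsdbapp issuing an OVS transaction. A rough sketch of the same del_port through ovsdbapp's Open_vSwitch API; the ovsdb-server socket path and timeout are assumptions, and Nova/os-vif manage their own long-lived connection rather than building one ad hoc like this:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of the logged DelPortCommand(port=..., bridge=..., if_exists=True).
    api.del_port('tap4adcdafc-fb', bridge='br-int', if_exists=True).execute(
        check_error=True)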
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.661 2 INFO nova.virt.libvirt.driver [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Deleting instance files /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d_del
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.662 2 INFO nova.virt.libvirt.driver [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Deletion of /var/lib/nova/instances/c942a9bd-3760-43df-964d-8aa0e8710a3d_del complete
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.715 2 DEBUG nova.compute.manager [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-unplugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.716 2 DEBUG oslo_concurrency.lockutils [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.716 2 DEBUG oslo_concurrency.lockutils [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.716 2 DEBUG oslo_concurrency.lockutils [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.716 2 DEBUG nova.compute.manager [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] No waiting events found dispatching network-vif-unplugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.717 2 DEBUG nova.compute.manager [req-00c9e4db-5919-48b7-bad7-e6b7662399c1 req-acd12630-dccd-41e1-b762-7d3c874deec7 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-unplugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.751 2 INFO nova.compute.manager [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Took 1.39 seconds to destroy the instance on the hypervisor.
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.751 2 DEBUG oslo.service.loopingcall [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.752 2 DEBUG nova.compute.manager [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:08:34 compute-0 nova_compute[355794]: 2025-10-02 20:08:34.752 2 DEBUG nova.network.neutron [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:08:34 compute-0 ceph-mon[191910]: pgmap v1914: 321 pgs: 321 active+clean; 356 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 20:08:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:35 compute-0 ovn_controller[88435]: 2025-10-02T20:08:35Z|00146|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:08:35 compute-0 ovn_controller[88435]: 2025-10-02T20:08:35Z|00147|binding|INFO|Releasing lport cdbc9f7e-e502-4e46-9d35-398a11c2a99d from this chassis (sb_readonly=0)
Oct 02 20:08:35 compute-0 ovn_controller[88435]: 2025-10-02T20:08:35Z|00148|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:08:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 323 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Oct 02 20:08:35 compute-0 nova_compute[355794]: 2025-10-02 20:08:35.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:35 compute-0 podman[457677]: 2025-10-02 20:08:35.744829087 +0000 UTC m=+0.157499937 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git)
Oct 02 20:08:35 compute-0 podman[457678]: 2025-10-02 20:08:35.754559233 +0000 UTC m=+0.167301645 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.085 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435701.0818837, ba7aef8d-a028-428d-97bd-508631983393 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.086 2 INFO nova.compute.manager [-] [instance: ba7aef8d-a028-428d-97bd-508631983393] VM Stopped (Lifecycle Event)
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.117 2 DEBUG nova.compute.manager [None req-7a3e0b41-ec8f-41fb-a2af-c5f10c5f70ea - - - - - -] [instance: ba7aef8d-a028-428d-97bd-508631983393] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:36 compute-0 ceph-mon[191910]: pgmap v1915: 321 pgs: 321 active+clean; 323 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.913 2 DEBUG nova.compute.manager [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.913 2 DEBUG oslo_concurrency.lockutils [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.914 2 DEBUG oslo_concurrency.lockutils [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.914 2 DEBUG oslo_concurrency.lockutils [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.914 2 DEBUG nova.compute.manager [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] No waiting events found dispatching network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:36 compute-0 nova_compute[355794]: 2025-10-02 20:08:36.915 2 WARNING nova.compute.manager [req-540f2800-d047-4e38-a266-ed08e6b05383 req-ac9b4218-7a48-4166-a103-8e7d30841f66 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received unexpected event network-vif-plugged-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d for instance with vm_state active and task_state deleting.
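The Acquiring/acquired/released triplets above are oslo.concurrency's logging around nova's per-instance event queue, keyed by "<instance-uuid>-events" locks. A minimal sketch of the same serialization pattern, assuming oslo.concurrency is installed; the function body is illustrative only:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Same per-instance lock name convention seen in the log lines
        # above; lockutils.lock() is a context manager that emits the
        # acquire/release DEBUG lines when lock logging is enabled.
        with lockutils.lock('%s-events' % instance_uuid):
            # ... look up and pop any waiter registered for event_name ...
            return None  # "No waiting events found" corresponds to None here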
Oct 02 20:08:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 299 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 1.2 MiB/s wr, 96 op/s
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.775 2 DEBUG nova.network.neutron [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.802 2 INFO nova.compute.manager [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Took 3.05 seconds to deallocate network for instance.
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.860 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.861 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.921 2 DEBUG nova.compute.manager [req-27d25209-4370-4e2d-b67f-182543d46a8f req-781dd904-e68e-4143-a061-36a0b98835c3 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Received event network-vif-deleted-4adcdafc-fb12-4e7c-9f7a-f2e6d691970d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:37.933 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:37 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:37.936 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:08:37 compute-0 nova_compute[355794]: 2025-10-02 20:08:37.988 2 DEBUG oslo_concurrency.processutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:38 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197202024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.485 2 DEBUG oslo_concurrency.processutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
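For the 'ceph df' call above, nova's RBD image backend shells out via oslo.concurrency's processutils to size the storage pool for the resource tracker. A minimal standalone reproduction, assuming the same client keyring and conf path are available:

    import json

    from oslo_concurrency import processutils

    # Mirrors the logged command; raises ProcessExecutionError on a
    # non-zero exit code, otherwise returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # "ceph df --format=json" reports cluster-wide totals under "stats".
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])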
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.498 2 DEBUG nova.compute.provider_tree [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.526 2 DEBUG nova.scheduler.client.report [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.555 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.594 2 INFO nova.scheduler.client.report [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Deleted allocations for instance c942a9bd-3760-43df-964d-8aa0e8710a3d
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.709 2 DEBUG oslo_concurrency.lockutils [None req-426bc4df-7933-48cc-80cd-5ca1756471f2 e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "c942a9bd-3760-43df-964d-8aa0e8710a3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:38 compute-0 nova_compute[355794]: 2025-10-02 20:08:38.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:38 compute-0 ceph-mon[191910]: pgmap v1916: 321 pgs: 321 active+clean; 299 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 1.2 MiB/s wr, 96 op/s
Oct 02 20:08:38 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/197202024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:39 compute-0 nova_compute[355794]: 2025-10-02 20:08:39.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 277 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 866 KiB/s wr, 56 op/s
Oct 02 20:08:39 compute-0 ceph-mon[191910]: pgmap v1917: 321 pgs: 321 active+clean; 277 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 866 KiB/s wr, 56 op/s
Oct 02 20:08:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:40 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:40.940 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 277 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 42 KiB/s wr, 33 op/s
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.703 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.704 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.705 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.706 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.707 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.709 2 INFO nova.compute.manager [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Terminating instance
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.711 2 DEBUG nova.compute.manager [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:08:41 compute-0 kernel: tap4af10480-1b (unregistering): left promiscuous mode
Oct 02 20:08:41 compute-0 NetworkManager[44968]: <info>  [1759435721.8487] device (tap4af10480-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:08:41 compute-0 ovn_controller[88435]: 2025-10-02T20:08:41Z|00149|binding|INFO|Releasing lport 4af10480-1bf8-4efe-bb0e-ef9ee356a470 from this chassis (sb_readonly=0)
Oct 02 20:08:41 compute-0 ovn_controller[88435]: 2025-10-02T20:08:41Z|00150|binding|INFO|Setting lport 4af10480-1bf8-4efe-bb0e-ef9ee356a470 down in Southbound
Oct 02 20:08:41 compute-0 ovn_controller[88435]: 2025-10-02T20:08:41Z|00151|binding|INFO|Removing iface tap4af10480-1b ovn-installed in OVS
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:41.875 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:72:8b 10.100.0.7'], port_security=['fa:16:3e:d8:72:8b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a6e095a0-cb58-430d-9347-4aab385c6e69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7c52835a9494ea98fd26390771eb77f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59875ce-2e1e-411c-9c9d-217f385a6c78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55b0fe3c-2477-4bd1-a279-06ccc23b46bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=4af10480-1bf8-4efe-bb0e-ef9ee356a470) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:41.877 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 4af10480-1bf8-4efe-bb0e-ef9ee356a470 in datapath aefd878a-4767-48ff-8dcb-ccb5b8fcb84b unbound from our chassis
Oct 02 20:08:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:41.880 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:08:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:41.882 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a606b5-d54a-4023-b484-378953a901e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:41.883 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b namespace which is not needed anymore
Oct 02 20:08:41 compute-0 nova_compute[355794]: 2025-10-02 20:08:41.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:41 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 02 20:08:41 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 52.200s CPU time.
Oct 02 20:08:41 compute-0 systemd-machined[137646]: Machine qemu-8-instance-00000008 terminated.
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [NOTICE]   (452347) : haproxy version is 2.8.14-c23fe91
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [NOTICE]   (452347) : path to executable is /usr/sbin/haproxy
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [WARNING]  (452347) : Exiting Master process...
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [WARNING]  (452347) : Exiting Master process...
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [ALERT]    (452347) : Current worker (452349) exited with code 143 (Terminated)
Oct 02 20:08:42 compute-0 neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b[452343]: [WARNING]  (452347) : All workers exited. Exiting... (0)
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.177 2 INFO nova.virt.libvirt.driver [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Instance destroyed successfully.
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.178 2 DEBUG nova.objects.instance [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lazy-loading 'resources' on Instance uuid a6e095a0-cb58-430d-9347-4aab385c6e69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:42 compute-0 systemd[1]: libpod-4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101.scope: Deactivated successfully.
Oct 02 20:08:42 compute-0 podman[457765]: 2025-10-02 20:08:42.185819405 +0000 UTC m=+0.102820174 container died 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.198 2 DEBUG nova.virt.libvirt.vif [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-332031272',display_name='tempest-TestNetworkBasicOps-server-332031272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-332031272',id=8,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+gizvNNhk87DzMIKZAzdFdrNakQS09f3n8/hsElwZOR6W+1OR1WlE16FZq4XAVBI1PnHc1iNjlKAiJ6aqdGaonrPyunVFvVvPgUMCTqaVFzbO55Hz8ocdQlO2t7Ap3sQ==',key_name='tempest-TestNetworkBasicOps-1755477534',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:06:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a7c52835a9494ea98fd26390771eb77f',ramdisk_id='',reservation_id='r-3x1sbpqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1027837101',owner_user_name='tempest-TestNetworkBasicOps-1027837101-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:06:31Z,user_data=None,user_id='e87db118c0374d50a374f0ceaf961159',uuid=a6e095a0-cb58-430d-9347-4aab385c6e69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.200 2 DEBUG nova.network.os_vif_util [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converting VIF {"id": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "address": "fa:16:3e:d8:72:8b", "network": {"id": "aefd878a-4767-48ff-8dcb-ccb5b8fcb84b", "bridge": "br-int", "label": "tempest-network-smoke--797126595", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7c52835a9494ea98fd26390771eb77f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4af10480-1b", "ovs_interfaceid": "4af10480-1bf8-4efe-bb0e-ef9ee356a470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.202 2 DEBUG nova.network.os_vif_util [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.203 2 DEBUG os_vif [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4af10480-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.221 2 INFO os_vif [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:72:8b,bridge_name='br-int',has_traffic_filtering=True,id=4af10480-1bf8-4efe-bb0e-ef9ee356a470,network=Network(aefd878a-4767-48ff-8dcb-ccb5b8fcb84b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4af10480-1b')
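The unplug above is carried out as an ovsdbapp transaction (the DelPortCommand logged a few lines earlier) against the local ovsdb-server. A minimal sketch of issuing the same delete directly, assuming the default OVS socket path; this mirrors the logged transaction rather than os-vif's exact wiring:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent to DelPortCommand(port=tap4af10480-1b, bridge=br-int,
    # if_exists=True): drop the tap port from br-int, tolerating a port
    # that is already gone.
    api.del_port('tap4af10480-1b', bridge='br-int',
                 if_exists=True).execute(check_error=True)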
Oct 02 20:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101-userdata-shm.mount: Deactivated successfully.
Oct 02 20:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7299c6461d8d2fb36f8d107a1844ec410972c362cff03175c7e97346ba63b886-merged.mount: Deactivated successfully.
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.271 2 DEBUG nova.compute.manager [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-unplugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.273 2 DEBUG oslo_concurrency.lockutils [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.274 2 DEBUG oslo_concurrency.lockutils [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.274 2 DEBUG oslo_concurrency.lockutils [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.275 2 DEBUG nova.compute.manager [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] No waiting events found dispatching network-vif-unplugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.276 2 DEBUG nova.compute.manager [req-d44f0d59-d15e-47a5-a34b-9217ac7368c2 req-2142f608-4514-4672-a8d3-303e20ea9973 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-unplugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:08:42 compute-0 podman[457765]: 2025-10-02 20:08:42.277888674 +0000 UTC m=+0.194889453 container cleanup 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 20:08:42 compute-0 systemd[1]: libpod-conmon-4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101.scope: Deactivated successfully.
Oct 02 20:08:42 compute-0 podman[457817]: 2025-10-02 20:08:42.419480671 +0000 UTC m=+0.097011541 container remove 4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.437 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b58741-5c3e-4ac7-87c5-80d278e3b5e7]: (4, ('Thu Oct  2 08:08:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b (4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101)\n4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101\nThu Oct  2 08:08:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b (4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101)\n4716511ea772389918ea083514b17219df193fedbad5de0cc559b8d4e7139101\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.440 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[66b60735-fa5a-4046-84e8-df91197750ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.441 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefd878a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 kernel: tapaefd878a-40: left promiscuous mode
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 nova_compute[355794]: 2025-10-02 20:08:42.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.466 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae394e7-f422-4a40-a8d5-08f2ee593216]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.500 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5893e2c9-5c4c-48e2-9dff-e8717f1bb1ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.501 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[7885998c-4a17-4bb0-a568-8d7169ebbf87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.525 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b9cc4e-e30c-44cb-8e34-c24270a7f0ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676142, 'reachable_time': 27092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 457832, 'error': None, 'target': 'ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.530 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aefd878a-4767-48ff-8dcb-ccb5b8fcb84b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:08:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:42.530 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[d75de973-4057-4464-99bb-5dfba552a58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:42 compute-0 systemd[1]: run-netns-ovnmeta\x2daefd878a\x2d4767\x2d48ff\x2d8dcb\x2dccb5b8fcb84b.mount: Deactivated successfully.
Oct 02 20:08:42 compute-0 ceph-mon[191910]: pgmap v1918: 321 pgs: 321 active+clean; 277 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 42 KiB/s wr, 33 op/s
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.170 2 INFO nova.virt.libvirt.driver [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Deleting instance files /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69_del
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.171 2 INFO nova.virt.libvirt.driver [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Deletion of /var/lib/nova/instances/a6e095a0-cb58-430d-9347-4aab385c6e69_del complete
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.250 2 INFO nova.compute.manager [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Took 1.54 seconds to destroy the instance on the hypervisor.
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.251 2 DEBUG oslo.service.loopingcall [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.252 2 DEBUG nova.compute.manager [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.253 2 DEBUG nova.network.neutron [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:08:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 252 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.907 2 DEBUG nova.network.neutron [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.926 2 INFO nova.compute.manager [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Took 0.67 seconds to deallocate network for instance.
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.976 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:43 compute-0 nova_compute[355794]: 2025-10-02 20:08:43.978 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.075 2 DEBUG oslo_concurrency.processutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.428 2 DEBUG nova.compute.manager [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.429 2 DEBUG oslo_concurrency.lockutils [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.430 2 DEBUG oslo_concurrency.lockutils [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.430 2 DEBUG oslo_concurrency.lockutils [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.431 2 DEBUG nova.compute.manager [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] No waiting events found dispatching network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.431 2 WARNING nova.compute.manager [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received unexpected event network-vif-plugged-4af10480-1bf8-4efe-bb0e-ef9ee356a470 for instance with vm_state deleted and task_state None.
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.431 2 DEBUG nova.compute.manager [req-240131d5-1e27-43a5-bd73-2c6eba99e7a6 req-8b0603db-33a5-4912-ab00-c3805dadf9c0 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Received event network-vif-deleted-4af10480-1bf8-4efe-bb0e-ef9ee356a470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810399498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.569 2 DEBUG oslo_concurrency.processutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.581 2 DEBUG nova.compute.provider_tree [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.602 2 DEBUG nova.scheduler.client.report [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.628 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.667 2 INFO nova.scheduler.client.report [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Deleted allocations for instance a6e095a0-cb58-430d-9347-4aab385c6e69
Oct 02 20:08:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 02 20:08:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 02 20:08:44 compute-0 nova_compute[355794]: 2025-10-02 20:08:44.741 2 DEBUG oslo_concurrency.lockutils [None req-cf2c73e8-8488-455c-8ca6-eb7075971f0f e87db118c0374d50a374f0ceaf961159 a7c52835a9494ea98fd26390771eb77f - - default default] Lock "a6e095a0-cb58-430d-9347-4aab385c6e69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
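Everything between the lock acquire at 20:08:41.704 and the release above (destroy on the hypervisor, VIF unplug, network deallocation, placement allocation cleanup) is nova's do_terminate_instance path, fanned out from a single API-side delete. A minimal sketch of the triggering call via openstacksdk; the cloud name is a hypothetical clouds.yaml entry:

    import openstack

    # "edpm" is a placeholder cloud name from clouds.yaml.
    conn = openstack.connect(cloud='edpm')

    # One call on the API side drives the whole sequence logged above on
    # the compute node for this instance.
    conn.compute.delete_server('a6e095a0-cb58-430d-9347-4aab385c6e69')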
Oct 02 20:08:44 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 02 20:08:44 compute-0 ceph-mon[191910]: pgmap v1919: 321 pgs: 321 active+clean; 252 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Oct 02 20:08:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1810399498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 226 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.5 KiB/s wr, 41 op/s
Oct 02 20:08:45 compute-0 podman[457856]: 2025-10-02 20:08:45.737264857 +0000 UTC m=+0.140210441 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct 02 20:08:45 compute-0 ceph-mon[191910]: osdmap e140: 3 total, 3 up, 3 in
Oct 02 20:08:46 compute-0 nova_compute[355794]: 2025-10-02 20:08:46.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:46 compute-0 ceph-mon[191910]: pgmap v1921: 321 pgs: 321 active+clean; 226 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.5 KiB/s wr, 41 op/s
Oct 02 20:08:47 compute-0 nova_compute[355794]: 2025-10-02 20:08:47.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 206 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 823 KiB/s wr, 41 op/s
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:48.153 285942 DEBUG eventlet.wsgi.server [-] (285942) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:48.163 285942 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: Accept: */*
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: Connection: close
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: Content-Type: text/plain
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: Host: 169.254.169.254
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: User-Agent: curl/7.84.0
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: X-Forwarded-For: 10.100.0.6
Oct 02 20:08:48 compute-0 ovn_metadata_agent[285768]: X-Ovn-Network-Id: 3f5a3a36-f114-4439-a81a-9e4ddc58a44b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 20:08:48 compute-0 ovn_controller[88435]: 2025-10-02T20:08:48Z|00152|binding|INFO|Releasing lport f39d21c1-fcb9-4571-ab80-c736abbfc93d from this chassis (sb_readonly=0)
Oct 02 20:08:48 compute-0 ovn_controller[88435]: 2025-10-02T20:08:48Z|00153|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:08:48 compute-0 nova_compute[355794]: 2025-10-02 20:08:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:48 compute-0 ceph-mon[191910]: pgmap v1922: 321 pgs: 321 active+clean; 206 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 823 KiB/s wr, 41 op/s
Oct 02 20:08:48 compute-0 nova_compute[355794]: 2025-10-02 20:08:48.806 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435713.8034317, c942a9bd-3760-43df-964d-8aa0e8710a3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:48 compute-0 nova_compute[355794]: 2025-10-02 20:08:48.807 2 INFO nova.compute.manager [-] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] VM Stopped (Lifecycle Event)
Oct 02 20:08:48 compute-0 nova_compute[355794]: 2025-10-02 20:08:48.843 2 DEBUG nova.compute.manager [None req-f3c632a5-eb63-4007-87ab-92a4fb02bf59 - - - - - -] [instance: c942a9bd-3760-43df-964d-8aa0e8710a3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:49 compute-0 nova_compute[355794]: 2025-10-02 20:08:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 218 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.364 285942 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.365 285942 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.2024970
Oct 02 20:08:50 compute-0 haproxy-metadata-proxy-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455480]: 10.100.0.6:36956 [02/Oct/2025:20:08:48.151] listener listener/metadata 0/0/0/2213/2213 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.521 285942 DEBUG eventlet.wsgi.server [-] (285942) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.522 285942 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: Accept: */*
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: Connection: close
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: Content-Length: 100
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: Content-Type: application/x-www-form-urlencoded
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: Host: 169.254.169.254
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: User-Agent: curl/7.84.0
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: X-Forwarded-For: 10.100.0.6
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: X-Ovn-Network-Id: 3f5a3a36-f114-4439-a81a-9e4ddc58a44b
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 20:08:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.744 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.745 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.767 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:08:50 compute-0 ceph-mon[191910]: pgmap v1923: 321 pgs: 321 active+clean; 218 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.827 285942 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 20:08:50 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:50.828 285942 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3061490
Oct 02 20:08:50 compute-0 haproxy-metadata-proxy-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455480]: 10.100.0.6:36966 [02/Oct/2025:20:08:50.519] listener listener/metadata 0/0/0/309/309 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.861 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.872 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:08:50 compute-0 nova_compute[355794]: 2025-10-02 20:08:50.872 2 INFO nova.compute.claims [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.027 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741119143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.573 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.587 2 DEBUG nova.compute.provider_tree [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.611 2 DEBUG nova.scheduler.client.report [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.641 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.642 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:08:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 218 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.711 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.712 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.737 2 INFO nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.758 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:08:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2741119143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.880 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.882 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.883 2 INFO nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Creating image(s)
Oct 02 20:08:51 compute-0 nova_compute[355794]: 2025-10-02 20:08:51.955 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.011 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.074 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.096 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.133 2 DEBUG nova.policy [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '52586757ab2e427f98b2a1d571ef51d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1bb915f165644ddbb5971268b645746a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.198 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.199 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "0c456b520d71abe557f4853537116bdcc2ff0a79" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.200 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.201 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "0c456b520d71abe557f4853537116bdcc2ff0a79" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.237 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.244 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 163b34fe-f5ad-414e-bcfa-8a956779638a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.653 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79 163b34fe-f5ad-414e-bcfa-8a956779638a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:52 compute-0 nova_compute[355794]: 2025-10-02 20:08:52.804 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] resizing rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:08:52 compute-0 ceph-mon[191910]: pgmap v1924: 321 pgs: 321 active+clean; 218 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.059 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.061 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.077 2 DEBUG nova.objects.instance [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lazy-loading 'migration_context' on Instance uuid 163b34fe-f5ad-414e-bcfa-8a956779638a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.091 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.095 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.095 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Ensure instance console log exists: /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.096 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.096 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.097 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.186 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Successfully created port: e776586d-986f-4e28-9744-f39a9506e590 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.201 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.202 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.214 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.215 2 INFO nova.compute.claims [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.446 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.576 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.577 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.578 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.579 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.579 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.582 2 INFO nova.compute.manager [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Terminating instance
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.584 2 DEBUG nova.compute.manager [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:08:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 233 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.9 MiB/s wr, 64 op/s
Oct 02 20:08:53 compute-0 kernel: tape50ea0ec-56 (unregistering): left promiscuous mode
Oct 02 20:08:53 compute-0 NetworkManager[44968]: <info>  [1759435733.6970] device (tape50ea0ec-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:08:53 compute-0 ovn_controller[88435]: 2025-10-02T20:08:53Z|00154|binding|INFO|Releasing lport e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 from this chassis (sb_readonly=0)
Oct 02 20:08:53 compute-0 ovn_controller[88435]: 2025-10-02T20:08:53Z|00155|binding|INFO|Setting lport e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 down in Southbound
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:53 compute-0 ovn_controller[88435]: 2025-10-02T20:08:53Z|00156|binding|INFO|Removing iface tape50ea0ec-56 ovn-installed in OVS
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:53 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:53.728 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:ca:09 10.100.0.6'], port_security=['fa:16:3e:b0:ca:09 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af875636-eb00-48b8-b1f4-589898eafecb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '945fe5265b6446a2a61f775a8f3466f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08af3841-27c3-4295-9ad6-4be383e6b700 5b734220-98e1-4240-8eeb-85c0c90ff8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a072168a-e212-49f4-ae2d-55929dd9a988, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:53 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:53.729 285790 INFO neutron.agent.ovn.metadata.agent [-] Port e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 in datapath 3f5a3a36-f114-4439-a81a-9e4ddc58a44b unbound from our chassis
Oct 02 20:08:53 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:53.731 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f5a3a36-f114-4439-a81a-9e4ddc58a44b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:08:53 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:53.732 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[26a92709-ca20-4138-af56-d77175372e9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:53 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:53.733 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b namespace which is not needed anymore
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:53 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 02 20:08:53 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 46.300s CPU time.
Oct 02 20:08:53 compute-0 systemd-machined[137646]: Machine qemu-11-instance-0000000a terminated.
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.842 2 INFO nova.virt.libvirt.driver [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Instance destroyed successfully.
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.843 2 DEBUG nova.objects.instance [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lazy-loading 'resources' on Instance uuid af875636-eb00-48b8-b1f4-589898eafecb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.858 2 DEBUG nova.virt.libvirt.vif [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-2122533072',display_name='tempest-TestServerBasicOps-server-2122533072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-2122533072',id=10,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDxPI0CmzQm3wR8m2Vq0zGE2hiiNGt34W7En7pAqLuzoJ7ysyl6XPe7tkRbaBW5GW92Ce/Yooxvj5tcD36c/D4W8bSyhnpmezx4ELw/4LYg6y2osPt0fZXFT30f+OZ2jeQ==',key_name='tempest-TestServerBasicOps-1679027111',keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:07:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='945fe5265b6446a2a61f775a8f3466f2',ramdisk_id='',reservation_id='r-e6sqh4pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-644988398',owner_user_name='tempest-TestServerBasicOps-644988398-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:08:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56d9dae393d64f4b925b7c0827ad71e0',uuid=af875636-eb00-48b8-b1f4-589898eafecb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.861 2 DEBUG nova.network.os_vif_util [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converting VIF {"id": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "address": "fa:16:3e:b0:ca:09", "network": {"id": "3f5a3a36-f114-4439-a81a-9e4ddc58a44b", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1040059364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "945fe5265b6446a2a61f775a8f3466f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50ea0ec-56", "ovs_interfaceid": "e50ea0ec-56a1-4e06-bd8b-531ca4d11a04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.862 2 DEBUG nova.network.os_vif_util [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.862 2 DEBUG os_vif [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50ea0ec-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:53 compute-0 nova_compute[355794]: 2025-10-02 20:08:53.872 2 INFO os_vif [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:ca:09,bridge_name='br-int',has_traffic_filtering=True,id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04,network=Network(3f5a3a36-f114-4439-a81a-9e4ddc58a44b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50ea0ec-56')
Oct 02 20:08:53 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [NOTICE]   (455478) : haproxy version is 2.8.14-c23fe91
Oct 02 20:08:53 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [NOTICE]   (455478) : path to executable is /usr/sbin/haproxy
Oct 02 20:08:53 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [WARNING]  (455478) : Exiting Master process...
Oct 02 20:08:53 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [ALERT]    (455478) : Current worker (455480) exited with code 143 (Terminated)
Oct 02 20:08:53 compute-0 neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b[455474]: [WARNING]  (455478) : All workers exited. Exiting... (0)
Oct 02 20:08:53 compute-0 systemd[1]: libpod-6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59.scope: Deactivated successfully.
Oct 02 20:08:53 compute-0 podman[458117]: 2025-10-02 20:08:53.953740224 +0000 UTC m=+0.073475190 container died 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59-userdata-shm.mount: Deactivated successfully.
Oct 02 20:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-74340ec75506114577e967eb1c82ba276a08fd21e6065a0f80ec42fbcebabed0-merged.mount: Deactivated successfully.
Oct 02 20:08:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474076248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:54 compute-0 podman[458117]: 2025-10-02 20:08:54.07863892 +0000 UTC m=+0.198373846 container cleanup 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:08:54 compute-0 systemd[1]: libpod-conmon-6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59.scope: Deactivated successfully.
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.114 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.132 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Successfully updated port: e776586d-986f-4e28-9744-f39a9506e590 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.135 2 DEBUG nova.compute.provider_tree [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.158 2 DEBUG nova.scheduler.client.report [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.161 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.162 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquired lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.162 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:08:54 compute-0 podman[458160]: 2025-10-02 20:08:54.183576799 +0000 UTC m=+0.105590327 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.188 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.189 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:08:54 compute-0 podman[458175]: 2025-10-02 20:08:54.220560335 +0000 UTC m=+0.104080218 container remove 6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.240 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a04f1ad8-2e2b-4cc7-ba05-0978095bbf25]: (4, ('Thu Oct  2 08:08:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b (6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59)\n6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59\nThu Oct  2 08:08:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b (6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59)\n6b71ddc304aedb0a67579fed05ff5f36f08c170161f6fc798f52372c7720ec59\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.242 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0252f6-8e0b-40d4-b68c-ebb2603e0f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.245 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.245 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f5a3a36-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.246 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:54 compute-0 kernel: tap3f5a3a36-f0: left promiscuous mode
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.271 2 INFO nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.278 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[792bb8fe-f6ba-46ee-b1d5-ff8360e56b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.290 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.294 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f38ed023-1f8d-46d3-ad7a-7ed2fb77e767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.295 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d92739-834b-43e3-948f-a6bf9e413da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 podman[458177]: 2025-10-02 20:08:54.299363814 +0000 UTC m=+0.152995068 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.317 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b49da2af-537f-466d-8594-bc5acaa064ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682837, 'reachable_time': 20907, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 458221, 'error': None, 'target': 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
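
Note: the privsep reply above is a raw pyroute2 RTM_NEWLINK dump of the loopback device, taken inside the ovnmeta-3f5a3a36-... namespace just before the agent tears it down. A minimal sketch of how such a dump can be produced with pyroute2 follows; this is an illustration, not neutron's actual privsep code path, and the namespace name is copied from the log entry above.

    # Sketch: list links inside a named network namespace with pyroute2,
    # similar in spirit to the privsep reply logged above.
    from pyroute2 import NetNS

    NS_NAME = 'ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b'  # from the log

    # NetNS creates the namespace if it does not already exist.
    with NetNS(NS_NAME) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),  # e.g. 'lo'
                  link.get_attr('IFLA_MTU'),     # e.g. 65536
                  link['state'])                 # e.g. 'up'
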
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.321 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f5a3a36-f114-4439-a81a-9e4ddc58a44b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:08:54 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:54.321 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdebb96-9467-49b7-8b75-98b28a6fdfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f5a3a36\x2df114\x2d4439\x2da81a\x2d9e4ddc58a44b.mount: Deactivated successfully.
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.380 2 DEBUG nova.compute.manager [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-unplugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.380 2 DEBUG oslo_concurrency.lockutils [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.381 2 DEBUG oslo_concurrency.lockutils [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.381 2 DEBUG oslo_concurrency.lockutils [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.381 2 DEBUG nova.compute.manager [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] No waiting events found dispatching network-vif-unplugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.381 2 DEBUG nova.compute.manager [req-dfbed0af-986f-432b-b54a-685fc489eb48 req-536d8a1c-1462-416c-8f81-d73a158eb2ca 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-unplugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
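Note: the "Acquiring lock ... / Lock ... acquired / Lock ... released" triplet above is oslo.concurrency's lockutils guarding nova's per-instance event queue ("<uuid>-events"). A minimal sketch of the same pattern follows; the lock name is taken from the log, the function body is elided, and this is illustrative rather than nova's actual code.

    # Sketch: the oslo.concurrency locking pattern that emits the
    # Acquiring/acquired/released debug lines seen above.
    from oslo_concurrency import lockutils

    instance_uuid = 'af875636-eb00-48b8-b1f4-589898eafecb'  # from the log

    @lockutils.synchronized('%s-events' % instance_uuid)
    def _pop_event():
        # Body elided; nova pops the matching waiting event here.
        pass

    _pop_event()
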
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.404 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.406 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.406 2 INFO nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Creating image(s)
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.442 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.492 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.538 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.547 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "5791872ee933d4d58fd9e831120a99fbea624bcf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.548 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "5791872ee933d4d58fd9e831120a99fbea624bcf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.555 2 DEBUG nova.policy [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.556 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.773 2 INFO nova.virt.libvirt.driver [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Deleting instance files /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb_del
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.776 2 INFO nova.virt.libvirt.driver [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Deletion of /var/lib/nova/instances/af875636-eb00-48b8-b1f4-589898eafecb_del complete
Oct 02 20:08:54 compute-0 ceph-mon[191910]: pgmap v1925: 321 pgs: 321 active+clean; 233 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.9 MiB/s wr, 64 op/s
Oct 02 20:08:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2474076248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.860 2 INFO nova.compute.manager [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Took 1.27 seconds to destroy the instance on the hypervisor.
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.860 2 DEBUG oslo.service.loopingcall [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.861 2 DEBUG nova.compute.manager [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:08:54 compute-0 nova_compute[355794]: 2025-10-02 20:08:54.861 2 DEBUG nova.network.neutron [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:08:55 compute-0 nova_compute[355794]: 2025-10-02 20:08:55.240 2 DEBUG nova.virt.libvirt.imagebackend [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Image locations are: [{'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/fe71959f-8f59-4b45-ae05-4216d5f12fab/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://6019f664-a1c2-5955-8391-692cb79a59f9/images/fe71959f-8f59-4b45-ae05-4216d5f12fab/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 20:08:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:08:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 205 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.9 MiB/s wr, 57 op/s
Oct 02 20:08:55 compute-0 nova_compute[355794]: 2025-10-02 20:08:55.757 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Successfully created port: f069cce3-8536-48d3-a068-b30f9a0107d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.138 2 DEBUG nova.network.neutron [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Updating instance_info_cache with network_info: [{"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.162 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Releasing lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.162 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Instance network_info: |[{"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.166 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Start _get_guest_xml network_info=[{"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': '2881b8cb-4cad-4124-8a6e-ae21054c9692'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.179 2 WARNING nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.190 2 DEBUG nova.virt.libvirt.host [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.191 2 DEBUG nova.virt.libvirt.host [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.209 2 DEBUG nova.virt.libvirt.host [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.209 2 DEBUG nova.virt.libvirt.host [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.210 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.210 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:04:39Z,direct_url=<?>,disk_format='qcow2',id=2881b8cb-4cad-4124-8a6e-ae21054c9692,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1c35486f37b94d43a7bf2f2fa09c70b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:04:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.211 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.212 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.212 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.212 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.212 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.213 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.213 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.214 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.214 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.214 2 DEBUG nova.virt.hardware [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
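Note: the nova.virt.hardware lines above show the guest CPU topology search: with no flavor or image constraints, limits default to 65536 sockets/cores/threads, every factorization of the vCPU count is enumerated, and for the 1-vCPU m1.nano flavor the only candidate is 1:1:1. A simplified sketch of that enumeration follows; it is an illustration of the idea, not nova's actual implementation.

    # Sketch: enumerate sockets*cores*threads factorizations of a vCPU
    # count under per-dimension limits, as the debug lines above describe.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
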
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.217 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.578 2 DEBUG nova.compute.manager [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.579 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "af875636-eb00-48b8-b1f4-589898eafecb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.580 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.581 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.581 2 DEBUG nova.compute.manager [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] No waiting events found dispatching network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.582 2 WARNING nova.compute.manager [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received unexpected event network-vif-plugged-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 for instance with vm_state active and task_state deleting.
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.582 2 DEBUG nova.compute.manager [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Received event network-changed-e776586d-986f-4e28-9744-f39a9506e590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.582 2 DEBUG nova.compute.manager [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Refreshing instance network info cache due to event network-changed-e776586d-986f-4e28-9744-f39a9506e590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.583 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.583 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.583 2 DEBUG nova.network.neutron [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Refreshing network info cache for port e776586d-986f-4e28-9744-f39a9506e590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:08:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:08:56 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748553489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.746 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
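Note: the paired "Running cmd (subprocess): ..." and "CMD ... returned: 0 in 0.529s" entries are oslo.concurrency's processutils logging around the external ceph CLI call; the matching mon-side dispatch shows up in the ceph-mon audit lines. A minimal sketch of issuing the same command through processutils follows; it assumes a reachable cluster and a valid client.openstack keyring.

    # Sketch: run the mon dump through oslo.concurrency, which produces
    # the "Running cmd (subprocess)" / "CMD ... returned" pairs above.
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout)
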
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.797 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.806 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.857 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:56 compute-0 ceph-mon[191910]: pgmap v1926: 321 pgs: 321 active+clean; 205 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.9 MiB/s wr, 57 op/s
Oct 02 20:08:56 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3748553489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.900 2 DEBUG nova.network.neutron [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.910 2 DEBUG nova.compute.manager [req-459a1d8c-729e-4cf7-a8d5-89cf53d796f4 req-694a5635-59ec-4191-b952-4449ef6b27cc 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Received event network-vif-deleted-e50ea0ec-56a1-4e06-bd8b-531ca4d11a04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.911 2 INFO nova.compute.manager [req-459a1d8c-729e-4cf7-a8d5-89cf53d796f4 req-694a5635-59ec-4191-b952-4449ef6b27cc 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Neutron deleted interface e50ea0ec-56a1-4e06-bd8b-531ca4d11a04; detaching it from the instance and deleting it from the info cache
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.911 2 DEBUG nova.network.neutron [req-459a1d8c-729e-4cf7-a8d5-89cf53d796f4 req-694a5635-59ec-4191-b952-4449ef6b27cc 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.933 2 INFO nova.compute.manager [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Took 2.07 seconds to deallocate network for instance.
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.944 2 DEBUG nova.compute.manager [req-459a1d8c-729e-4cf7-a8d5-89cf53d796f4 req-694a5635-59ec-4191-b952-4449ef6b27cc 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Detach interface failed, port_id=e50ea0ec-56a1-4e06-bd8b-531ca4d11a04, reason: Instance af875636-eb00-48b8-b1f4-589898eafecb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.949 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.part --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.949 2 DEBUG nova.virt.images [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] fe71959f-8f59-4b45-ae05-4216d5f12fab was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.950 2 DEBUG nova.privsep.utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.950 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.part /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.983 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:56 compute-0 nova_compute[355794]: 2025-10-02 20:08:56.984 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.089 2 DEBUG oslo_concurrency.processutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.167 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435722.1648295, a6e095a0-cb58-430d-9347-4aab385c6e69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.168 2 INFO nova.compute.manager [-] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] VM Stopped (Lifecycle Event)
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.194 2 DEBUG nova.compute.manager [None req-d2c36224-d553-4b3f-95de-6df05ccf029a - - - - - -] [instance: a6e095a0-cb58-430d-9347-4aab385c6e69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:08:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:08:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361379511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.350 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.part /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.converted" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.356 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
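Note: the entries above trace nova's image-cache fetch path: qemu-img info on the downloaded .part file, the "was qcow2, converting to raw" decision, qemu-img convert with cache mode none, then qemu-img info again on the .converted result. A standalone sketch of those three steps follows; the SRC and DST paths are hypothetical, and this mirrors the logged commands rather than nova's actual fetch_to_raw code.

    # Sketch: inspect a qcow2 download, convert it to raw, and re-inspect,
    # mirroring the qemu-img invocations in the log lines above.
    import json
    import subprocess

    SRC = '/tmp/base.part'       # hypothetical downloaded image
    DST = '/tmp/base.converted'  # hypothetical raw output

    def img_info(path):
        out = subprocess.run(
            ['qemu-img', 'info', '--force-share', '--output=json', path],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    if img_info(SRC)['format'] == 'qcow2':
        # -t none: bypass the host page cache, as in the logged command.
        subprocess.run(['qemu-img', 'convert', '-t', 'none',
                        '-O', 'raw', '-f', 'qcow2', SRC, DST], check=True)
        assert img_info(DST)['format'] == 'raw'
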
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.379 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.381 2 DEBUG nova.virt.libvirt.vif [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2050959147',display_name='tempest-ServerAddressesTestJSON-server-2050959147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2050959147',id=13,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1bb915f165644ddbb5971268b645746a',ramdisk_id='',reservation_id='r-9bq7u2ay',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-980594293',owner_user_name='tempest-ServerAddressesTestJSON-980594293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:51Z,user_data=None,user_id='52586757ab2e427f98b2a1d571ef51d2',uuid=163b34fe-f5ad-414e-bcfa-8a956779638a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.382 2 DEBUG nova.network.os_vif_util [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converting VIF {"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.383 2 DEBUG nova.network.os_vif_util [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.384 2 DEBUG nova.objects.instance [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lazy-loading 'pci_devices' on Instance uuid 163b34fe-f5ad-414e-bcfa-8a956779638a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.414 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <uuid>163b34fe-f5ad-414e-bcfa-8a956779638a</uuid>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <name>instance-0000000d</name>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:name>tempest-ServerAddressesTestJSON-server-2050959147</nova:name>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:08:56</nova:creationTime>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:user uuid="52586757ab2e427f98b2a1d571ef51d2">tempest-ServerAddressesTestJSON-980594293-project-member</nova:user>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:project uuid="1bb915f165644ddbb5971268b645746a">tempest-ServerAddressesTestJSON-980594293</nova:project>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="2881b8cb-4cad-4124-8a6e-ae21054c9692"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <nova:port uuid="e776586d-986f-4e28-9744-f39a9506e590">
Oct 02 20:08:57 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <system>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="serial">163b34fe-f5ad-414e-bcfa-8a956779638a</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="uuid">163b34fe-f5ad-414e-bcfa-8a956779638a</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </system>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <os>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </os>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <features>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </features>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/163b34fe-f5ad-414e-bcfa-8a956779638a_disk">
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </source>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config">
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </source>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:08:57 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:32:d6:29"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <target dev="tape776586d-98"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/console.log" append="off"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <video>
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </video>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:08:57 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:08:57 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:08:57 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:08:57 compute-0 nova_compute[355794]: </domain>
Oct 02 20:08:57 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.414 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Preparing to wait for external event network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.414 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.415 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.415 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.416 2 DEBUG nova.virt.libvirt.vif [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2050959147',display_name='tempest-ServerAddressesTestJSON-server-2050959147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2050959147',id=13,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1bb915f165644ddbb5971268b645746a',ramdisk_id='',reservation_id='r-9bq7u2ay',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-980594293',owner_user_name='tempest-ServerAddressesTestJSON-980594293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:51Z,user_data=None,user_id='52586757ab2e427f98b2a1d571ef51d2',uuid=163b34fe-f5ad-414e-bcfa-8a956779638a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.416 2 DEBUG nova.network.os_vif_util [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converting VIF {"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.417 2 DEBUG nova.network.os_vif_util [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.417 2 DEBUG os_vif [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.422 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf.converted --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape776586d-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape776586d-98, col_values=(('external_ids', {'iface-id': 'e776586d-986f-4e28-9744-f39a9506e590', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:d6:29', 'vm-uuid': '163b34fe-f5ad-414e-bcfa-8a956779638a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.424 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "5791872ee933d4d58fd9e831120a99fbea624bcf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:57 compute-0 NetworkManager[44968]: <info>  [1759435737.4266] manager: (tape776586d-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.468 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.477 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf f50e6a55-f3b5-402b-91b2-12d34386f656_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.505 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Successfully updated port: f069cce3-8536-48d3-a068-b30f9a0107d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.512 2 INFO os_vif [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98')
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.526 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.526 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.526 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:08:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:08:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383072993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.623 2 DEBUG oslo_concurrency.processutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.640 2 DEBUG nova.compute.provider_tree [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.649 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.649 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.650 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] No VIF found with MAC fa:16:3e:32:d6:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.650 2 INFO nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Using config drive
Oct 02 20:08:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 215 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.684 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.698 2 DEBUG nova.scheduler.client.report [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.733 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.789 2 INFO nova.scheduler.client.report [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Deleted allocations for instance af875636-eb00-48b8-b1f4-589898eafecb
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.811 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:08:57 compute-0 nova_compute[355794]: 2025-10-02 20:08:57.888 2 DEBUG oslo_concurrency.lockutils [None req-bd750a08-c519-4b28-99ba-fe5b9a326248 56d9dae393d64f4b925b7c0827ad71e0 945fe5265b6446a2a61f775a8f3466f2 - - default default] Lock "af875636-eb00-48b8-b1f4-589898eafecb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2361379511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2383072993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.069 2 INFO nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Creating config drive at /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.079 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzzbr5t93 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.111 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf f50e6a55-f3b5-402b-91b2-12d34386f656_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.272 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzzbr5t93" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.328 2 DEBUG nova.storage.rbd_utils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] rbd image 163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.341 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config 163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.391 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] resizing rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.514 2 DEBUG nova.network.neutron [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Updated VIF entry in instance network info cache for port e776586d-986f-4e28-9744-f39a9506e590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.515 2 DEBUG nova.network.neutron [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Updating instance_info_cache with network_info: [{"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.532 2 DEBUG oslo_concurrency.lockutils [req-c2ca49b2-c46f-4697-934a-f87efab73eba req-fe088f05-942c-433b-8d6b-2b75f501f195 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-163b34fe-f5ad-414e-bcfa-8a956779638a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.741 2 DEBUG oslo_concurrency.processutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config 163b34fe-f5ad-414e-bcfa-8a956779638a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.742 2 INFO nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Deleting local config drive /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a/disk.config because it was imported into RBD.
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.761 2 DEBUG nova.objects.instance [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'migration_context' on Instance uuid f50e6a55-f3b5-402b-91b2-12d34386f656 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.789 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.789 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Ensure instance console log exists: /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.790 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.791 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.791 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:08:58 compute-0 kernel: tape776586d-98: entered promiscuous mode
Oct 02 20:08:58 compute-0 NetworkManager[44968]: <info>  [1759435738.8523] manager: (tape776586d-98): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 02 20:08:58 compute-0 ovn_controller[88435]: 2025-10-02T20:08:58Z|00157|binding|INFO|Claiming lport e776586d-986f-4e28-9744-f39a9506e590 for this chassis.
Oct 02 20:08:58 compute-0 ovn_controller[88435]: 2025-10-02T20:08:58Z|00158|binding|INFO|e776586d-986f-4e28-9744-f39a9506e590: Claiming fa:16:3e:32:d6:29 10.100.0.4
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.862 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d6:29 10.100.0.4'], port_security=['fa:16:3e:32:d6:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '163b34fe-f5ad-414e-bcfa-8a956779638a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-008f46fc-bb87-421e-842a-684df1b332d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bb915f165644ddbb5971268b645746a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce40818d-e998-4577-9064-57e09821ac97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=367746d3-b079-4da8-9599-ecd20c816c12, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=e776586d-986f-4e28-9744-f39a9506e590) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.865 285790 INFO neutron.agent.ovn.metadata.agent [-] Port e776586d-986f-4e28-9744-f39a9506e590 in datapath 008f46fc-bb87-421e-842a-684df1b332d5 bound to our chassis
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.869 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 008f46fc-bb87-421e-842a-684df1b332d5
Oct 02 20:08:58 compute-0 ovn_controller[88435]: 2025-10-02T20:08:58Z|00159|binding|INFO|Setting lport e776586d-986f-4e28-9744-f39a9506e590 ovn-installed in OVS
Oct 02 20:08:58 compute-0 ovn_controller[88435]: 2025-10-02T20:08:58Z|00160|binding|INFO|Setting lport e776586d-986f-4e28-9744-f39a9506e590 up in Southbound
Oct 02 20:08:58 compute-0 nova_compute[355794]: 2025-10-02 20:08:58.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.889 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a105e960-1792-4b87-88fa-860ee9ce32b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.893 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap008f46fc-b1 in ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.897 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap008f46fc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.898 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4791bfe0-8153-4e3b-9bfd-4c04d09bb7a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.900 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6b89d6-ec14-4595-a965-0fb53269e37f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:58 compute-0 systemd-udevd[458558]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:08:58 compute-0 ceph-mon[191910]: pgmap v1927: 321 pgs: 321 active+clean; 215 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Oct 02 20:08:58 compute-0 systemd-machined[137646]: New machine qemu-14-instance-0000000d.
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.920 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc9a21a-7c3f-4da7-8dc0-4489ef08572c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:58 compute-0 NetworkManager[44968]: <info>  [1759435738.9299] device (tape776586d-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:08:58 compute-0 NetworkManager[44968]: <info>  [1759435738.9318] device (tape776586d-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:08:58 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Oct 02 20:08:58 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:58.958 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[e29ef749-29fd-4cef-a245-1b3c5ff10e5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.004 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[685b3fe3-7a93-4578-91e5-a9ebeeda38bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.011 2 DEBUG nova.network.neutron [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.014 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[428c198b-24a5-45ba-a8c4-1a17730795a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 NetworkManager[44968]: <info>  [1759435739.0168] manager: (tap008f46fc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.030 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.031 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Instance network_info: |[{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.033 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Start _get_guest_xml network_info=[{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:08:43Z,direct_url=<?>,disk_format='qcow2',id=fe71959f-8f59-4b45-ae05-4216d5f12fab,min_disk=0,min_ram=0,name='tempest-scenario-img--1806953314',owner='16e65e6cbbf848e5bb5755e6da3b1d33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:08:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.051 2 WARNING nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.061 2 DEBUG nova.compute.manager [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-changed-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.062 2 DEBUG nova.compute.manager [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Refreshing instance network info cache due to event network-changed-f069cce3-8536-48d3-a068-b30f9a0107d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.063 2 DEBUG oslo_concurrency.lockutils [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.063 2 DEBUG oslo_concurrency.lockutils [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.064 2 DEBUG nova.network.neutron [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Refreshing network info cache for port f069cce3-8536-48d3-a068-b30f9a0107d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.066 2 DEBUG nova.virt.libvirt.host [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.067 2 DEBUG nova.virt.libvirt.host [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.069 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[87870150-85db-4325-8326-1bbff2ed8296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.073 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[efc401dd-3e37-41e6-a301-ebfe86273b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.076 2 DEBUG nova.virt.libvirt.host [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.077 2 DEBUG nova.virt.libvirt.host [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.078 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.079 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:08:43Z,direct_url=<?>,disk_format='qcow2',id=fe71959f-8f59-4b45-ae05-4216d5f12fab,min_disk=0,min_ram=0,name='tempest-scenario-img--1806953314',owner='16e65e6cbbf848e5bb5755e6da3b1d33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:08:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.080 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.080 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.080 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.081 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.081 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.081 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.082 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.082 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.083 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.083 2 DEBUG nova.virt.hardware [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
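
[annotation] The nova.virt.hardware records above walk through CPU topology selection for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0), any sockets*cores*threads factorization of the vCPU count up to 65536 per field is admissible, and 1:1:1 is the only factorization of 1. A minimal Python sketch of that search, with illustrative names rather than Nova's actual code:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every sockets*cores*threads split of the vCPU count
        # that fits inside the per-field limits.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
                yield (s, c, t)

    def preference_key(topo, preferred=(0, 0, 0)):
        # 0 in a preferred field means "no preference"; exact matches sort first.
        return [0 if p in (0, v) else 1 for v, p in zip(topo, preferred)]

    topologies = sorted(possible_topologies(1), key=preference_key)
    print(topologies)  # [(1, 1, 1)] -- the single topology the records report
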
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.088 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:59 compute-0 NetworkManager[44968]: <info>  [1759435739.1044] device (tap008f46fc-b0): carrier: link connected
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.115 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3768c7-8ddd-4ba8-a969-e030972db187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.148 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5084b592-7c7d-4652-b6cb-63cf316dd28d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap008f46fc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:5f:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691067, 'reachable_time': 43384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 458592, 'error': None, 'target': 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.176 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b19e2a58-edb8-45ed-a478-803467d60111]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:5f28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691067, 'tstamp': 691067}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 458593, 'error': None, 'target': 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.207 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f730fcab-7c7b-4b63-8bc5-2482509be037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap008f46fc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:5f:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691067, 'reachable_time': 43384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 458594, 'error': None, 'target': 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
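
[annotation] The large privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK messages, plus an RTM_NEWADDR for the link-local address) taken inside the ovnmeta-008f46fc-... namespace while the agent verifies the metadata tap. A sketch of reproducing the same dump with pyroute2 directly (must run as root, and the namespace taken from the log must still exist):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5')
    try:
        for link in ns.get_links():
            # The IFLA_* attributes are exactly the fields seen in the replies
            print(link.get_attr('IFLA_IFNAME'),      # tap008f46fc-b1
                  link.get_attr('IFLA_ADDRESS'),     # fa:16:3e:03:5f:28
                  link.get_attr('IFLA_OPERSTATE'))   # UP
    finally:
        ns.close()
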
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.257 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[fdce521f-d6d8-4c9a-a189-e1845d08150a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.341 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[da051bf9-36e1-4ec0-b4f6-8d4a392c3483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.343 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap008f46fc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.343 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.343 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap008f46fc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:08:59 compute-0 kernel: tap008f46fc-b0: entered promiscuous mode
Oct 02 20:08:59 compute-0 NetworkManager[44968]: <info>  [1759435739.3466] manager: (tap008f46fc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.350 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap008f46fc-b0, col_values=(('external_ids', {'iface-id': '1944179b-d8f5-4748-9bd7-8f7b4c1da3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
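
[annotation] The three ovsdbapp transactions above move the metadata tap off br-ex (a no-op here), add it to br-int, and set external_ids:iface-id so ovn-controller can bind the logical port, which triggers the binding message just below. The equivalent wiring expressed as ovs-vsctl calls, a sketch with values copied from the records:

    import subprocess

    port = 'tap008f46fc-b0'
    # DelPortCommand(port=..., bridge=br-ex, if_exists=True)
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port], check=True)
    # AddPortCommand(bridge=br-int, may_exist=True) + DbSetCommand on Interface
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
                    '--', 'set', 'Interface', port,
                    'external_ids:iface-id=1944179b-d8f5-4748-9bd7-8f7b4c1da3b3'],
                   check=True)
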
Oct 02 20:08:59 compute-0 ovn_controller[88435]: 2025-10-02T20:08:59Z|00161|binding|INFO|Releasing lport 1944179b-d8f5-4748-9bd7-8f7b4c1da3b3 from this chassis (sb_readonly=0)
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.404 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/008f46fc-bb87-421e-842a-684df1b332d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/008f46fc-bb87-421e-842a-684df1b332d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.405 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[02b17eaa-206d-42d9-9a7e-14ebca07c3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.407 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-008f46fc-bb87-421e-842a-684df1b332d5
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/008f46fc-bb87-421e-842a-684df1b332d5.pid.haproxy
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID 008f46fc-bb87-421e-842a-684df1b332d5
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:08:59 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:08:59.408 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5', 'env', 'PROCESS_TAG=haproxy-008f46fc-bb87-421e-842a-684df1b332d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/008f46fc-bb87-421e-842a-684df1b332d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
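
[annotation] The rendered configuration above binds the metadata VIP 169.254.169.254:80 inside the network namespace and forwards requests, tagged with X-OVN-Network-ID, to the agent's UNIX socket at /var/lib/neutron/metadata_proxy. Stripped of the neutron-rootwrap indirection, the launch in the last record reduces to the following sketch (run as root; paths are taken verbatim from the log):

    import subprocess

    netns = 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/008f46fc-bb87-421e-842a-684df1b332d5.conf'
    # haproxy daemonizes itself ("daemon" in the global section) and writes
    # the pidfile named in the config.
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg], check=True)
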
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:08:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:08:59 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753835757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.621 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
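
[annotation] Nova shells out to `ceph mon dump --format=json` (dispatched by the mon leader in the ceph-mon records above) to learn the monitor map before building the RBD disk definitions. A sketch of the same probe and of pulling the monitor names out of the JSON; client id and conf path are copied from the log:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    monmap = json.loads(out)
    print([m['name'] for m in monmap.get('mons', [])])
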
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.662 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:08:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 209 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 90 op/s
Oct 02 20:08:59 compute-0 nova_compute[355794]: 2025-10-02 20:08:59.670 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:08:59 compute-0 podman[157186]: time="2025-10-02T20:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:08:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:08:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9081 "" "Go-http-client/1.1"
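
[annotation] The two podman access-log lines above are REST calls against the libpod API socket (a collector listing containers, then sampling stats). A sketch of the same list query with curl; the socket path is an assumption, the usual rootful default being /run/podman/podman.sock:

    import json
    import subprocess

    out = subprocess.run(
        ['curl', '-s', '--unix-socket', '/run/podman/podman.sock',  # assumed path
         'http://d/v4.9.3/libpod/containers/json?all=true'],
        check=True, capture_output=True, text=True).stdout
    print([c['Names'] for c in json.loads(out)])
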
Oct 02 20:08:59 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3753835757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:08:59 compute-0 podman[458723]: 2025-10-02 20:08:59.960833473 +0000 UTC m=+0.105621538 container create 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:08:59 compute-0 podman[458723]: 2025-10-02 20:08:59.895742776 +0000 UTC m=+0.040530931 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:09:00 compute-0 systemd[1]: Started libpod-conmon-7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99.scope.
Oct 02 20:09:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b0f0c4837fb7a951bbcf753554a3bee6524d7302e80e2f55f658b9aef5fa06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:00 compute-0 podman[458723]: 2025-10-02 20:09:00.111814447 +0000 UTC m=+0.256602562 container init 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:09:00 compute-0 podman[458723]: 2025-10-02 20:09:00.121977155 +0000 UTC m=+0.266765230 container start 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:09:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:09:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2170883324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:09:00 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [NOTICE]   (458742) : New worker (458746) forked
Oct 02 20:09:00 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [NOTICE]   (458742) : Loading success.
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.197 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.200 2 DEBUG nova.virt.libvirt.vif [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',id=14,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-0paxwoim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:54Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=f50e6a55-f3b5-402b-91b2-12d34386f656,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.201 2 DEBUG nova.network.os_vif_util [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.203 2 DEBUG nova.network.os_vif_util [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.206 2 DEBUG nova.objects.instance [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid f50e6a55-f3b5-402b-91b2-12d34386f656 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.229 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <uuid>f50e6a55-f3b5-402b-91b2-12d34386f656</uuid>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <name>instance-0000000e</name>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:name>te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6</nova:name>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:08:59</nova:creationTime>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:user uuid="e5d4abc29b2e475e9c7c54249ca341c4">tempest-PrometheusGabbiTest-1246773106-project-member</nova:user>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:project uuid="16e65e6cbbf848e5bb5755e6da3b1d33">tempest-PrometheusGabbiTest-1246773106</nova:project>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="fe71959f-8f59-4b45-ae05-4216d5f12fab"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <nova:port uuid="f069cce3-8536-48d3-a068-b30f9a0107d5">
Oct 02 20:09:00 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.1.149" ipVersion="4"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <system>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="serial">f50e6a55-f3b5-402b-91b2-12d34386f656</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="uuid">f50e6a55-f3b5-402b-91b2-12d34386f656</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </system>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <os>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </os>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <features>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </features>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/f50e6a55-f3b5-402b-91b2-12d34386f656_disk">
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </source>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config">
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </source>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:09:00 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:45:37:9a"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <target dev="tapf069cce3-85"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/console.log" append="off"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <video>
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </video>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:09:00 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:09:00 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:09:00 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:09:00 compute-0 nova_compute[355794]: </domain>
Oct 02 20:09:00 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
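
[annotation] _get_guest_xml above emits the complete <domain> document: host-model CPU with the 1:1:1 topology chosen earlier, two RBD-backed disks (the root disk plus the config-drive CDROM that does not exist yet, hence the rbd_utils message), the tapf069cce3-85 interface, and the q35 PCIe controller set. The driver then hands the XML to libvirt; a minimal sketch of that handoff with the libvirt Python bindings (the file path is illustrative, the XML being the document logged above):

    import libvirt

    xml = open('/tmp/instance-0000000e.xml').read()  # the <domain> XML above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # boot the guest
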
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.230 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Preparing to wait for external event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.231 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.231 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.232 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
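
[annotation] Before plugging the VIF, the manager registers a waiter for the network-vif-plugged event; the acquire/release pair above is oslo_concurrency's named lock guarding the per-instance event table. The same pattern in miniature (lock name copied from the records; the body is illustrative):

    from oslo_concurrency import lockutils

    with lockutils.lock('f50e6a55-f3b5-402b-91b2-12d34386f656-events'):
        # critical section: create or fetch the waiter for
        # network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5
        pass
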
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.233 2 DEBUG nova.virt.libvirt.vif [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:08:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',id=14,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-0paxwoim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:08:54Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=f50e6a55-f3b5-402b-91b2-12d34386f656,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.234 2 DEBUG nova.network.os_vif_util [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.235 2 DEBUG nova.network.os_vif_util [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.236 2 DEBUG os_vif [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf069cce3-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf069cce3-85, col_values=(('external_ids', {'iface-id': 'f069cce3-8536-48d3-a068-b30f9a0107d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:37:9a', 'vm-uuid': 'f50e6a55-f3b5-402b-91b2-12d34386f656'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:00 compute-0 NetworkManager[44968]: <info>  [1759435740.2501] manager: (tapf069cce3-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.262 2 INFO os_vif [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85')
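
[annotation] "Successfully plugged vif" closes the os_vif plug path logged above: the Nova VIF dict is converted to a VIFOpenVSwitch object, and the ovs plugin ensures br-int exists and adds the tap with the iface-id/attached-mac external_ids shown in the transactions. A sketch of driving os_vif directly; field values are copied from the logged object, but the exact object construction here is an assumption:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='f0deafed-687f-4945-b8e7-38e6d324244b', bridge='br-int')
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='f069cce3-8536-48d3-a068-b30f9a0107d5')
    v = vif.VIFOpenVSwitch(id='f069cce3-8536-48d3-a068-b30f9a0107d5',
                           address='fa:16:3e:45:37:9a',
                           vif_name='tapf069cce3-85', bridge_name='br-int',
                           port_profile=profile, network=net)
    inst = instance_info.InstanceInfo(
        uuid='f50e6a55-f3b5-402b-91b2-12d34386f656',
        name='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6')
    os_vif.plug(v, inst)
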
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.341 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.341 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.342 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No VIF found with MAC fa:16:3e:45:37:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.342 2 INFO nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Using config drive
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.385 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.546 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435740.5458357, 163b34fe-f5ad-414e-bcfa-8a956779638a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.548 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] VM Started (Lifecycle Event)
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.575 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.585 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435740.547227, 163b34fe-f5ad-414e-bcfa-8a956779638a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.585 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] VM Paused (Lifecycle Event)
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.607 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.616 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.634 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.910 2 INFO nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Creating config drive at /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config
Oct 02 20:09:00 compute-0 nova_compute[355794]: 2025-10-02 20:09:00.924 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpihj41kx5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:09:00 compute-0 ceph-mon[191910]: pgmap v1928: 321 pgs: 321 active+clean; 209 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 90 op/s
Oct 02 20:09:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2170883324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.078 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpihj41kx5" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
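The config drive is an ordinary ISO 9660 image (Joliet plus Rock Ridge) built from a staging directory. The logged invocation can be reproduced with the same oslo.concurrency helper the log cites; arguments are copied verbatim, and /tmp/tmpihj41kx5 stands for the metadata staging directory Nova populated:

    # Re-creation of the mkisofs call logged at 20:09:00.924, using the same
    # helper the log cites (oslo_concurrency.processutils). The publisher
    # string, containing spaces, is passed as a single argument.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpihj41kx5')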
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.143 2 DEBUG nova.storage.rbd_utils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.155 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.211 2 DEBUG nova.network.neutron [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated VIF entry in instance network info cache for port f069cce3-8536-48d3-a068-b30f9a0107d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.214 2 DEBUG nova.network.neutron [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
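The instance_info_cache payload above is a JSON list with one dict per VIF and the fixed IPs nested under network.subnets[].ips[]. A short walk over the exact structure logged; cached_blob is assumed to hold that JSON string:

    # Walking the network_info structure from the 20:09:01.214 cache update:
    # one dict per VIF, fixed IPs nested under network.subnets[].ips[].
    import json

    def summarize(cached_blob: str) -> None:
        for vif in json.loads(cached_blob):
            ips = [ip['address']
                   for subnet in vif['network']['subnets']
                   for ip in subnet['ips']]
            print(vif['devname'], vif['address'],
                  vif['details']['bridge_name'], ips)

    # For the cache update above this prints:
    # tapf069cce3-85 fa:16:3e:45:37:9a br-int ['10.100.1.149']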
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.227 2 DEBUG nova.compute.manager [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Received event network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.228 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.228 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.228 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.229 2 DEBUG nova.compute.manager [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Processing event network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.229 2 DEBUG nova.compute.manager [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Received event network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.229 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.229 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.230 2 DEBUG oslo_concurrency.lockutils [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.230 2 DEBUG nova.compute.manager [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] No waiting events found dispatching network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.230 2 WARNING nova.compute.manager [req-f0ca9338-a4c0-4495-9f16-6823d408a347 req-1528b43a-ae26-421b-8163-62c972196f79 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Received unexpected event network-vif-plugged-e776586d-986f-4e28-9744-f39a9506e590 for instance with vm_state building and task_state spawning.
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.231 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
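The acquire/release triplets around pop_instance_event are oslo.concurrency's named-lock pattern: every reader and writer of the per-instance event table serializes on "<uuid>-events", which is why the waited/held times are 0.000s for these tiny critical sections. Reduced to the primitive:

    # The "<uuid>-events" lock pattern from the entries above, reduced to the
    # oslo.concurrency primitive they wrap. The body is a stand-in for
    # pop_instance_event's dict manipulation.
    from oslo_concurrency import lockutils

    instance_uuid = '163b34fe-f5ad-414e-bcfa-8a956779638a'
    with lockutils.lock(f'{instance_uuid}-events'):
        # mutate the shared per-instance event table here; the sections in
        # the log are tiny, hence waited 0.000s / held 0.000s
        pass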
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.237 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435741.2363667, 163b34fe-f5ad-414e-bcfa-8a956779638a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.237 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] VM Resumed (Lifecycle Event)
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.239 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.245 2 INFO nova.virt.libvirt.driver [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Instance spawned successfully.
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.245 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.261 2 DEBUG oslo_concurrency.lockutils [req-acda7c8c-ec7c-46c8-9f11-5f7363b844cd req-ed30ff0a-3ebd-49df-afcd-f435f1e42794 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.305 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.313 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.344 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.344 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.345 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.346 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.347 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.348 2 DEBUG nova.virt.libvirt.driver [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
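The six "Found default for ..." lines persist bus and model choices for image properties the image never set, so later rebuilds and attaches keep the same virtual hardware. The decisions above reduce to a small mapping; register_defaults below is a hypothetical stand-in for _register_undefined_instance_details, with exactly the logged values:

    # The defaults registered at 20:09:01.344-01.348, as a plain mapping.
    LOGGED_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_defaults(image_props: dict) -> dict:
        for key, value in LOGGED_DEFAULTS.items():
            image_props.setdefault(key, value)  # only fill what is undefined
        return image_props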
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.407 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:09:01 compute-0 openstack_network_exporter[372736]: ERROR   20:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:09:01 compute-0 openstack_network_exporter[372736]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:09:01 compute-0 openstack_network_exporter[372736]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:09:01 compute-0 openstack_network_exporter[372736]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:09:01 compute-0 openstack_network_exporter[372736]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
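All four exporter errors trace back to ovs-appctl-style control sockets it cannot find: this node runs no ovn-northd, and the dpif-netdev/* commands need a userspace (DPDK) datapath that a kernel-datapath chassis does not have. A hand-rolled sketch of the same probe, assuming the conventional /var/run/openvswitch/ovs-vswitchd.<pid>.ctl socket layout:

    # Hand-rolled version of the probe the exporter attempts: point
    # ovs-appctl at an explicit control socket and tolerate the two failure
    # modes logged above (no socket / no netdev datapath).
    import glob
    import subprocess

    sockets = glob.glob('/var/run/openvswitch/ovs-vswitchd.*.ctl')
    if not sockets:
        print('no control socket files found')  # the exporter's first error
    else:
        result = subprocess.run(
            ['ovs-appctl', '-t', sockets[0], 'dpif-netdev/pmd-perf-show'],
            capture_output=True, text=True)
        if result.returncode != 0:
            # kernel datapath only -> "please specify an existing datapath"
            print(result.stderr.strip())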
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.469 2 INFO nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Took 9.59 seconds to spawn the instance on the hypervisor.
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.475 2 DEBUG nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.505 2 DEBUG oslo_concurrency.processutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.506 2 INFO nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Deleting local config drive /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config because it was imported into RBD.
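Import-then-delete is the whole config-drive upload: the locally built ISO is pushed into the vms pool as <uuid>_disk.config and the local copy is removed only after the import returns 0. The same flow, with the command line copied from the log:

    # The import-then-delete flow logged at 20:09:01.505-01.506: push the
    # local ISO into RBD, then unlink it once the import has succeeded.
    import os
    import subprocess

    local = '/var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656/disk.config'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local,
         'f50e6a55-f3b5-402b-91b2-12d34386f656_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)  # raises on non-zero, so the unlink below is safe
    os.unlink(local)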
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.562 2 INFO nova.compute.manager [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Took 10.73 seconds to build instance.
Oct 02 20:09:01 compute-0 kernel: tapf069cce3-85: entered promiscuous mode
Oct 02 20:09:01 compute-0 NetworkManager[44968]: <info>  [1759435741.5868] manager: (tapf069cce3-85): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct 02 20:09:01 compute-0 systemd-udevd[458584]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:09:01 compute-0 ovn_controller[88435]: 2025-10-02T20:09:01Z|00162|binding|INFO|Claiming lport f069cce3-8536-48d3-a068-b30f9a0107d5 for this chassis.
Oct 02 20:09:01 compute-0 ovn_controller[88435]: 2025-10-02T20:09:01Z|00163|binding|INFO|f069cce3-8536-48d3-a068-b30f9a0107d5: Claiming fa:16:3e:45:37:9a 10.100.1.149
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.594 2 DEBUG oslo_concurrency.lockutils [None req-4b4b517a-29ae-4c8c-b903-b55723e41a24 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.603 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:37:9a 10.100.1.149'], port_security=['fa:16:3e:45:37:9a 10.100.1.149'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.149/16', 'neutron:device_id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0deafed-687f-4945-b8e7-38e6d324244b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d0acfe3-81ce-4e08-8e78-709b63816024', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cba1ebe5-3c4d-41f0-9003-ea3a824c4dce, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=f069cce3-8536-48d3-a068-b30f9a0107d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
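That PortBindingUpdatedEvent repr is an ovsdbapp row event: events=('update',), table='Port_Binding' and no conditions are exactly RowEvent's constructor arguments, and the agent's reaction lives in run(). A skeletal equivalent, assuming an IDL connection to the OVN Southbound database; the chassis check stands in for the agent's "bound to our chassis" logic:

    # Skeleton of the event matched above, built on ovsdbapp's RowEvent with
    # the same constructor arguments the log prints.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # fires when the row gains a chassis, matching the logged
            # old=Port_Binding(chassis=[]) -> chassis=[...] transition
            if row.chassis and not getattr(old, 'chassis', None):
                print(f'port {row.logical_port} bound to {row.chassis[0].name}')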
Oct 02 20:09:01 compute-0 NetworkManager[44968]: <info>  [1759435741.6078] device (tapf069cce3-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.606 285790 INFO neutron.agent.ovn.metadata.agent [-] Port f069cce3-8536-48d3-a068-b30f9a0107d5 in datapath f0deafed-687f-4945-b8e7-38e6d324244b bound to our chassis
Oct 02 20:09:01 compute-0 NetworkManager[44968]: <info>  [1759435741.6087] device (tapf069cce3-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.616 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0deafed-687f-4945-b8e7-38e6d324244b
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.631 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d3dbbd90-3962-45bb-b21b-a5f39b140194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.633 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0deafed-61 in ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.638 420728 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0deafed-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.638 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9e98df1f-fb18-430e-a08b-244563654b30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.640 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5874a60f-bd57-4048-a06f-a7f1a92a01f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
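Creating tapf0deafed-61 means building a veth pair and moving one end into the ovnmeta namespace, executed through the privsep daemon whose replies fill the surrounding lines. A sketch of the same plumbing with pyroute2 (the library underneath neutron's privileged ip_lib), assuming the namespace already exists and CAP_NET_ADMIN:

    # The veth plumbing behind "Creating VETH tapf0deafed-61 in ovnmeta-...",
    # sketched with pyroute2. The -60 end stays in the root namespace for
    # br-int; the -61 end moves into the metadata namespace.
    from pyroute2 import IPRoute

    ns = 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b'
    ipr = IPRoute()
    ipr.link('add', ifname='tapf0deafed-60', kind='veth',
             peer={'ifname': 'tapf0deafed-61'})
    idx = ipr.link_lookup(ifname='tapf0deafed-61')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()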
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.654 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e7c03d-9c21-42c5-9dbf-8eb8484b0b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_controller[88435]: 2025-10-02T20:09:01Z|00164|binding|INFO|Setting lport f069cce3-8536-48d3-a068-b30f9a0107d5 ovn-installed in OVS
Oct 02 20:09:01 compute-0 ovn_controller[88435]: 2025-10-02T20:09:01Z|00165|binding|INFO|Setting lport f069cce3-8536-48d3-a068-b30f9a0107d5 up in Southbound
Oct 02 20:09:01 compute-0 nova_compute[355794]: 2025-10-02 20:09:01.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 223 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.3 MiB/s wr, 93 op/s
Oct 02 20:09:01 compute-0 systemd-machined[137646]: New machine qemu-15-instance-0000000e.
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.684 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ed3ca4-2847-45f9-a7fb-7a93c9e092b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.737 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[acc250c2-b0df-4c2b-a050-a7b7a45629a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 NetworkManager[44968]: <info>  [1759435741.7518] manager: (tapf0deafed-60): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.750 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[2fff4f54-824a-4c30-a96c-4a8b3b806d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.814 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[09c9e4d5-9e5d-4557-932e-384e016b939c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.819 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8bcc3c-282a-444e-aa7d-9e1bb6c6a284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 NetworkManager[44968]: <info>  [1759435741.8525] device (tapf0deafed-60): carrier: link connected
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.864 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[840adaa2-929d-4058-90a9-33fc42dc13f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.889 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef0ff25-9db9-47c2-abc2-7ba22e60a6ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0deafed-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:dd:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691341, 'reachable_time': 41906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 458879, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
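The RTM_NEWLINK dump above is a raw pyroute2 netlink message relayed over privsep; every useful field in it is reachable with get_attr(). Reading the same attributes directly from inside the namespace named in the message header:

    # Reading the logged RTM_NEWLINK attributes directly: NetNS targets the
    # ovnmeta namespace from the message header, get_attr() pulls the IFLA_*
    # values that dominate the dump above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),   # tapf0deafed-61
                  link.get_attr('IFLA_ADDRESS'),  # fa:16:3e:0a:dd:27
                  link.get_attr('IFLA_OPERSTATE'))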
Oct 02 20:09:01 compute-0 podman[458843]: 2025-10-02 20:09:01.90522214 +0000 UTC m=+0.101221742 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.911 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2ff5dc-8551-4a00-af1a-1a7e02130fda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:dd27'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691341, 'tstamp': 691341}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 458886, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 podman[458841]: 2025-10-02 20:09:01.938519189 +0000 UTC m=+0.135652321 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4)
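The podman health_status=healthy records come from the containers' configured healthchecks ('/openstack/healthcheck ipmi' and '/openstack/healthcheck kepler'). The same check can be driven by hand with podman's healthcheck subcommand; exit status 0 corresponds to healthy:

    # Manual re-run of the healthcheck podman is reporting on above; exit
    # status 0 corresponds to health_status=healthy. Container name is the
    # one from the log record.
    import subprocess

    result = subprocess.run(['podman', 'healthcheck', 'run', 'kepler'])
    print('healthy' if result.returncode == 0 else 'unhealthy')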
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.939 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[b82a5f97-cb1a-4d12-99e9-7be72c71a9a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0deafed-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:dd:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691341, 'reachable_time': 41906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 458887, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:01 compute-0 ceph-mon[191910]: pgmap v1929: 321 pgs: 321 active+clean; 223 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.3 MiB/s wr, 93 op/s
Oct 02 20:09:01 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:01.980 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7563e1-043c-43eb-a74c-ec663e6106f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.092 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d635f28d-a54e-425b-b1af-8c2781debdd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.096 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0deafed-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.097 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.098 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0deafed-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:02 compute-0 nova_compute[355794]: 2025-10-02 20:09:02.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:02 compute-0 NetworkManager[44968]: <info>  [1759435742.1037] manager: (tapf0deafed-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 02 20:09:02 compute-0 kernel: tapf0deafed-60: entered promiscuous mode
Oct 02 20:09:02 compute-0 nova_compute[355794]: 2025-10-02 20:09:02.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.112 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0deafed-60, col_values=(('external_ids', {'iface-id': 'ad4572b7-e012-418a-9c6b-97a8e10ee248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:02 compute-0 ovn_controller[88435]: 2025-10-02T20:09:02Z|00166|binding|INFO|Releasing lport ad4572b7-e012-418a-9c6b-97a8e10ee248 from this chassis (sb_readonly=0)
Oct 02 20:09:02 compute-0 nova_compute[355794]: 2025-10-02 20:09:02.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:02 compute-0 nova_compute[355794]: 2025-10-02 20:09:02.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.144 285790 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0deafed-687f-4945-b8e7-38e6d324244b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0deafed-687f-4945-b8e7-38e6d324244b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.145 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[8d08b02a-4b1d-4896-90f6-deb450c96c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.146 285790 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: global
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     log         /dev/log local0 debug
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     log-tag     haproxy-metadata-proxy-f0deafed-687f-4945-b8e7-38e6d324244b
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     user        root
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     group       root
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     maxconn     1024
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     pidfile     /var/lib/neutron/external/pids/f0deafed-687f-4945-b8e7-38e6d324244b.pid.haproxy
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     daemon
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: defaults
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     log global
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     mode http
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     option httplog
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     option dontlognull
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     option http-server-close
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     option forwardfor
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     retries                 3
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     timeout http-request    30s
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     timeout connect         30s
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     timeout client          32s
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     timeout server          32s
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     timeout http-keep-alive 30s
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: listen listener
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     bind 169.254.169.254:80
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:     http-request add-header X-OVN-Network-ID f0deafed-687f-4945-b8e7-38e6d324244b
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 20:09:02 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:02.149 285790 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'env', 'PROCESS_TAG=haproxy-f0deafed-687f-4945-b8e7-38e6d324244b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0deafed-687f-4945-b8e7-38e6d324244b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
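Stripped of the rootwrap indirection, that spawn is haproxy executed inside the ovnmeta namespace with the config just rendered above. A hedged sketch of the bare launch (requires root; the config path is the logged one, and PROCESS_TAG only labels the process for later cleanup):

    # The process spawn behind the rootwrap command logged at 20:09:02.149,
    # minus the rootwrap indirection.
    import subprocess

    net = 'f0deafed-687f-4945-b8e7-38e6d324244b'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net}',
         'env', f'PROCESS_TAG=haproxy-{net}',  # labels the daemon for cleanup
         'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
        check=True)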
Oct 02 20:09:02 compute-0 podman[458960]: 2025-10-02 20:09:02.74233748 +0000 UTC m=+0.123198232 container create 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 20:09:02 compute-0 podman[458960]: 2025-10-02 20:09:02.680088077 +0000 UTC m=+0.060948929 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 20:09:02 compute-0 systemd[1]: Started libpod-conmon-89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3.scope.
Oct 02 20:09:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d047fb53c528f113faad6f7f0d3b14603e1269fa48d7f993431f73ae26521763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:02 compute-0 podman[458960]: 2025-10-02 20:09:02.880076814 +0000 UTC m=+0.260937596 container init 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 20:09:02 compute-0 podman[458960]: 2025-10-02 20:09:02.892351258 +0000 UTC m=+0.273212020 container start 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:09:02 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [NOTICE]   (458979) : New worker (458981) forked
Oct 02 20:09:02 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [NOTICE]   (458979) : Loading success.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.196 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435743.1947799, f50e6a55-f3b5-402b-91b2-12d34386f656 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.197 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] VM Started (Lifecycle Event)
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.220 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.227 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435743.1948762, f50e6a55-f3b5-402b-91b2-12d34386f656 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.227 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] VM Paused (Lifecycle Event)
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.261 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.268 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.289 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.343 2 DEBUG nova.compute.manager [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.343 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.343 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.344 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.344 2 DEBUG nova.compute.manager [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Processing event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.344 2 DEBUG nova.compute.manager [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.344 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.344 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.345 2 DEBUG oslo_concurrency.lockutils [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.345 2 DEBUG nova.compute.manager [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] No waiting events found dispatching network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.345 2 WARNING nova.compute.manager [req-9633b8d7-b819-4719-909e-dd527d566fd9 req-470cb616-1d9e-4189-a1c9-06c6f093d8bd 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received unexpected event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 for instance with vm_state building and task_state spawning.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.348 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.348 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.349 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.349 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.350 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.350 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.351 2 INFO nova.compute.manager [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Terminating instance
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.352 2 DEBUG nova.compute.manager [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.355 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435743.3548286, f50e6a55-f3b5-402b-91b2-12d34386f656 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.355 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] VM Resumed (Lifecycle Event)
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.357 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.376 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.378 2 INFO nova.virt.libvirt.driver [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Instance spawned successfully.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.379 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.400 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.420 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.421 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.422 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.422 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.423 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.423 2 DEBUG nova.virt.libvirt.driver [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.428 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:09:03 compute-0 kernel: tape776586d-98 (unregistering): left promiscuous mode
Oct 02 20:09:03 compute-0 NetworkManager[44968]: <info>  [1759435743.4612] device (tape776586d-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 ovn_controller[88435]: 2025-10-02T20:09:03Z|00167|binding|INFO|Releasing lport e776586d-986f-4e28-9744-f39a9506e590 from this chassis (sb_readonly=0)
Oct 02 20:09:03 compute-0 ovn_controller[88435]: 2025-10-02T20:09:03Z|00168|binding|INFO|Setting lport e776586d-986f-4e28-9744-f39a9506e590 down in Southbound
Oct 02 20:09:03 compute-0 ovn_controller[88435]: 2025-10-02T20:09:03Z|00169|binding|INFO|Removing iface tape776586d-98 ovn-installed in OVS
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:03.489 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d6:29 10.100.0.4'], port_security=['fa:16:3e:32:d6:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '163b34fe-f5ad-414e-bcfa-8a956779638a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-008f46fc-bb87-421e-842a-684df1b332d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1bb915f165644ddbb5971268b645746a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ce40818d-e998-4577-9064-57e09821ac97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=367746d3-b079-4da8-9599-ecd20c816c12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=e776586d-986f-4e28-9744-f39a9506e590) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:09:03 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:03.491 285790 INFO neutron.agent.ovn.metadata.agent [-] Port e776586d-986f-4e28-9744-f39a9506e590 in datapath 008f46fc-bb87-421e-842a-684df1b332d5 unbound from our chassis
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.494 2 INFO nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Took 9.09 seconds to spawn the instance on the hypervisor.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.494 2 DEBUG nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:03.499 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 008f46fc-bb87-421e-842a-684df1b332d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:09:03 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:03.503 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5c89f267-ccfd-4ef9-af81-0e28b01b9fb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:03 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:03.505 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5 namespace which is not needed anymore
Oct 02 20:09:03 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 02 20:09:03 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 3.378s CPU time.
Oct 02 20:09:03 compute-0 systemd-machined[137646]: Machine qemu-14-instance-0000000d terminated.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.566 2 INFO nova.compute.manager [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Took 10.40 seconds to build instance.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.574 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.585 2 DEBUG oslo_concurrency.lockutils [None req-dec255a2-5db1-4310-aee9-045bfe0edc36 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.606 2 INFO nova.virt.libvirt.driver [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Instance destroyed successfully.
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.606 2 DEBUG nova.objects.instance [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lazy-loading 'resources' on Instance uuid 163b34fe-f5ad-414e-bcfa-8a956779638a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.623 2 DEBUG nova.virt.libvirt.vif [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:08:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2050959147',display_name='tempest-ServerAddressesTestJSON-server-2050959147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2050959147',id=13,image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:09:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1bb915f165644ddbb5971268b645746a',ramdisk_id='',reservation_id='r-9bq7u2ay',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='2881b8cb-4cad-4124-8a6e-ae21054c9692',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-980594293',owner_user_name='tempest-ServerAddressesTestJSON-980594293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:09:01Z,user_data=None,user_id='52586757ab2e427f98b2a1d571ef51d2',uuid=163b34fe-f5ad-414e-bcfa-8a956779638a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.623 2 DEBUG nova.network.os_vif_util [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converting VIF {"id": "e776586d-986f-4e28-9744-f39a9506e590", "address": "fa:16:3e:32:d6:29", "network": {"id": "008f46fc-bb87-421e-842a-684df1b332d5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1755644466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1bb915f165644ddbb5971268b645746a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape776586d-98", "ovs_interfaceid": "e776586d-986f-4e28-9744-f39a9506e590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.624 2 DEBUG nova.network.os_vif_util [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.624 2 DEBUG os_vif [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape776586d-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:03 compute-0 nova_compute[355794]: 2025-10-02 20:09:03.635 2 INFO os_vif [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d6:29,bridge_name='br-int',has_traffic_filtering=True,id=e776586d-986f-4e28-9744-f39a9506e590,network=Network(008f46fc-bb87-421e-842a-684df1b332d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape776586d-98')
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:09:03
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 232 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:03 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [NOTICE]   (458742) : haproxy version is 2.8.14-c23fe91
Oct 02 20:09:03 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [NOTICE]   (458742) : path to executable is /usr/sbin/haproxy
Oct 02 20:09:03 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [WARNING]  (458742) : Exiting Master process...
Oct 02 20:09:03 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [ALERT]    (458742) : Current worker (458746) exited with code 143 (Terminated)
Oct 02 20:09:03 compute-0 neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5[458738]: [WARNING]  (458742) : All workers exited. Exiting... (0)
Oct 02 20:09:03 compute-0 systemd[1]: libpod-7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99.scope: Deactivated successfully.
Oct 02 20:09:03 compute-0 podman[459034]: 2025-10-02 20:09:03.808897732 +0000 UTC m=+0.120002977 container died 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99-userdata-shm.mount: Deactivated successfully.
Oct 02 20:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b0f0c4837fb7a951bbcf753554a3bee6524d7302e80e2f55f658b9aef5fa06-merged.mount: Deactivated successfully.
Oct 02 20:09:03 compute-0 podman[459034]: 2025-10-02 20:09:03.939419346 +0000 UTC m=+0.250524581 container cleanup 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:09:03 compute-0 systemd[1]: libpod-conmon-7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99.scope: Deactivated successfully.
Oct 02 20:09:03 compute-0 podman[459052]: 2025-10-02 20:09:03.964510198 +0000 UTC m=+0.119807552 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 20:09:03 compute-0 podman[459058]: 2025-10-02 20:09:03.973595278 +0000 UTC m=+0.123768067 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 20:09:04 compute-0 podman[459111]: 2025-10-02 20:09:04.036980521 +0000 UTC m=+0.052148167 container remove 7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:09:04 compute-0 podman[459059]: 2025-10-02 20:09:04.037796172 +0000 UTC m=+0.177652729 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.056 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[d99e4d4f-0e5c-432b-9a1a-11a62927b957]: (4, ('Thu Oct  2 08:09:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5 (7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99)\n7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99\nThu Oct  2 08:09:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5 (7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99)\n7317f391125458421aab1c238a72e391a253949d4cefddc7f79af01bad73ed99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.058 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc92d5e-32ea-4c82-b9dc-f3228e29017b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.060 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap008f46fc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:04 compute-0 kernel: tap008f46fc-b0: left promiscuous mode
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.078 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[700790f3-903d-4b08-955e-8f4158a5d5ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.103 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[17f2295e-2dd1-4168-86c2-10b7b631e76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.105 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7244bd-aefb-4dbe-890a-e53a3ed28bfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.125 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f600713c-f474-4cf3-9b1e-59d496ecc2de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691056, 'reachable_time': 39015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 459138, 'error': None, 'target': 'ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.128 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-008f46fc-bb87-421e-842a-684df1b332d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:09:04 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:04.128 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[fe484556-e499-4165-bbac-6a0b73f5e23b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:09:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d008f46fc\x2dbb87\x2d421e\x2d842a\x2d684df1b332d5.mount: Deactivated successfully.
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.304 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.307 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343a7c7fe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.321 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 163b34fe-f5ad-414e-bcfa-8a956779638a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:09:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:04.322 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/163b34fe-f5ad-414e-bcfa-8a956779638a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:09:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.507 2 INFO nova.virt.libvirt.driver [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Deleting instance files /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a_del
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.508 2 INFO nova.virt.libvirt.driver [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Deletion of /var/lib/nova/instances/163b34fe-f5ad-414e-bcfa-8a956779638a_del complete
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.571 2 INFO nova.compute.manager [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Took 1.22 seconds to destroy the instance on the hypervisor.
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.572 2 DEBUG oslo.service.loopingcall [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.573 2 DEBUG nova.compute.manager [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:09:04 compute-0 nova_compute[355794]: 2025-10-02 20:09:04.574 2 DEBUG nova.network.neutron [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:09:04 compute-0 ceph-mon[191910]: pgmap v1930: 321 pgs: 321 active+clean; 232 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 102 op/s
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.020 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1819 Content-Type: application/json Date: Thu, 02 Oct 2025 20:09:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4f8ae0af-3dbd-4851-9b7c-355faa4f8a7a x-openstack-request-id: req-4f8ae0af-3dbd-4851-9b7c-355faa4f8a7a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.021 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "163b34fe-f5ad-414e-bcfa-8a956779638a", "name": "tempest-ServerAddressesTestJSON-server-2050959147", "status": "ACTIVE", "tenant_id": "1bb915f165644ddbb5971268b645746a", "user_id": "52586757ab2e427f98b2a1d571ef51d2", "metadata": {}, "hostId": "56f42075bb6ff6f7fbc887848e3fc224c73f541ea5ad6f2d022efd4b", "image": {"id": "2881b8cb-4cad-4124-8a6e-ae21054c9692", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2881b8cb-4cad-4124-8a6e-ae21054c9692"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:08:49Z", "updated": "2025-10-02T20:09:03Z", "addresses": {"tempest-ServerAddressesTestJSON-1755644466-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:32:d6:29"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/163b34fe-f5ad-414e-bcfa-8a956779638a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/163b34fe-f5ad-414e-bcfa-8a956779638a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T20:09:01.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "deleting", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.022 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/163b34fe-f5ad-414e-bcfa-8a956779638a used request id req-4f8ae0af-3dbd-4851-9b7c-355faa4f8a7a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
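
The exchange above shows ceilometer's compute discovery fetching instance metadata through python-novaclient (note the token appears in the log only as a SHA256 hash). A minimal sketch of the same GET against the internal endpoint shown in the log; the Keystone URL and credentials below are placeholders, since neither appears here:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder auth; the log records only a hashed X-Auth-Token.
    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='ceilometer', password='***',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Equivalent of the logged REQ: GET /v2.1/servers/<uuid>
    server = nova.servers.get('163b34fe-f5ad-414e-bcfa-8a956779638a')
    print(server.status, getattr(server, 'OS-EXT-STS:task_state'))
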
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '163b34fe-f5ad-414e-bcfa-8a956779638a' (instance-0000000d)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: Domain not found: no domain with matching uuid '163b34fe-f5ad-414e-bcfa-8a956779638a' (instance-0000000d): libvirt.libvirtError: Domain not found: no domain with matching uuid '163b34fe-f5ad-414e-bcfa-8a956779638a' (instance-0000000d)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     return fut.result()
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     return self.__get_result()
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     raise self._exception
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 274, in discover_libvirt_polling
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     dom_state = domain.state()[0]
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager                 ^^^^^^^^^^^^^^
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/site-packages/libvirt.py", line 3266, in state
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager     raise libvirtError('virDomainGetState() failed')
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager libvirt.libvirtError: Domain not found: no domain with matching uuid '163b34fe-f5ad-414e-bcfa-8a956779638a' (instance-0000000d)
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.026 14 ERROR ceilometer.polling.manager 
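
The traceback above is a deletion race rather than a fault in the agent: nova_compute is destroying instance-0000000d while the discovery loop still references its libvirt domain, so virDomainGetState() fails with VIR_ERR_NO_DOMAIN and tenacity retries the discovery on the next attempt. A sketch of tolerating that race when inspecting domains; illustrative only, and safe_domain_state is a hypothetical helper, not ceilometer code:

    import libvirt

    def safe_domain_state(conn, uuid):
        # A domain can vanish between enumeration and inspection when
        # Nova deletes the instance mid-cycle; treat that as "gone".
        try:
            return conn.lookupByUUIDString(uuid).state()[0]
        except libvirt.libvirtError as e:
            if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                return None  # skip this instance for this poll
            raise

    conn = libvirt.openReadOnly('qemu:///system')
    print(safe_domain_state(conn, '163b34fe-f5ad-414e-bcfa-8a956779638a'))
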
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.041 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.055 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.059 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f50e6a55-f3b5-402b-91b2-12d34386f656 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.060 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f50e6a55-f3b5-402b-91b2-12d34386f656 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:09:05 compute-0 ovn_controller[88435]: 2025-10-02T20:09:05Z|00170|binding|INFO|Releasing lport ad4572b7-e012-418a-9c6b-97a8e10ee248 from this chassis (sb_readonly=0)
Oct 02 20:09:05 compute-0 ovn_controller[88435]: 2025-10-02T20:09:05Z|00171|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 225 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 108 op/s
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.761 2 DEBUG nova.network.neutron [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.761 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Thu, 02 Oct 2025 20:09:05 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cddd4ae7-3114-4599-b6ce-d2152dac3bf3 x-openstack-request-id: req-cddd4ae7-3114-4599-b6ce-d2152dac3bf3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.762 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f50e6a55-f3b5-402b-91b2-12d34386f656", "name": "te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6", "status": "ACTIVE", "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "user_id": "e5d4abc29b2e475e9c7c54249ca341c4", "metadata": {"metering.server_group": "f724f930-b01d-4568-9d24-c7060da9fe9c"}, "hostId": "01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e", "image": {"id": "fe71959f-8f59-4b45-ae05-4216d5f12fab", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fe71959f-8f59-4b45-ae05-4216d5f12fab"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:08:51Z", "updated": "2025-10-02T20:09:03Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.149", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:45:37:9a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f50e6a55-f3b5-402b-91b2-12d34386f656"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f50e6a55-f3b5-402b-91b2-12d34386f656"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T20:09:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.763 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f50e6a55-f3b5-402b-91b2-12d34386f656 used request id req-cddd4ae7-3114-4599-b6ce-d2152dac3bf3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.766 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.767 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.767 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.769 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:09:05.769097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.793 2 INFO nova.compute.manager [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Took 1.22 seconds to deallocate network for instance.
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.812 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.813 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.814 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.842 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.843 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.845 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:09:05.847542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.855 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.856 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.873 2 DEBUG nova.compute.manager [req-4b800c4d-0ceb-4930-bcb8-2f49dcbc9e6e req-b92cfd32-40b8-4dde-b6e3-409269efe1fc 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Received event network-vif-deleted-e776586d-986f-4e28-9744-f39a9506e590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.935 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.937 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.937 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.994 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.995 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:05 compute-0 nova_compute[355794]: 2025-10-02 20:09:05.995 2 DEBUG oslo_concurrency.processutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:05.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.000 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:09:05.999559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.000 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.001 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.002 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.003 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.005 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:09:06.007163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.060 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.095 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:09:06.099762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.100 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.101 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.102 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.103 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.104 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:09:06.108062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.115 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.121 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f50e6a55-f3b5-402b-91b2-12d34386f656 / tapf069cce3-85 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.122 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
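
The "No delta meter predecessor" message above is expected on a VM's first cycle: a *.delta meter is the difference between two consecutive cumulative readings, so the first reading can only seed the cache. A hypothetical illustration of that bookkeeping (not the inspector's actual cache structure):

    # First reading per (instance, vnic) is the baseline; only later
    # readings can produce a delta sample.
    _prev = {}

    def rx_bytes_delta(instance_id, vnic, rx_bytes):
        key = (instance_id, vnic)
        last = _prev.get(key)
        _prev[key] = rx_bytes
        if last is None:
            return None  # "No delta meter predecessor"
        return rx_bytes - last
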
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:09:06.126458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.131 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T20:09:06.130640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.131 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6>]
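
The ERROR above is deliberate blacklisting, not a crash: the libvirt inspector has no data for rate meters (see the "does not provide data" DEBUG line just before it), so the pollster raises PollsterPermanentError with the affected resources and the manager stops scheduling them from this source. Schematically, with a hypothetical pollster name and the required plumbing (e.g. the default_discovery property) omitted:

    from ceilometer.polling import plugin_base

    class RateLikePollster(plugin_base.PollsterBase):
        def get_samples(self, manager, cache, resources):
            # Raising PollsterPermanentError tells the agent to drop
            # these resources from future cycles for this source.
            raise plugin_base.PollsterPermanentError(resources)
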
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.135 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:09:06.135536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
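
Every pollster run above starts with the same coordination check: this source defines no coordination group, so the hashrings are [None] and the agent polls every locally discovered instance itself. When a group is configured, agents use tooz to split resources over a hash ring; a standalone sketch of that split (the modulo ring below is a simplification of tooz's hashring, and the member names are hypothetical):

    import hashlib

    def my_share(resources, members, me):
        """Keep only the resources that hash into this agent's slice."""
        idx = sorted(members).index(me)
        return [r for r in resources
                if int(hashlib.md5(r.encode()).hexdigest(), 16) % len(members) == idx]

    agents = ["compute-0", "compute-1"]  # hypothetical coordination group
    instances = ["d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77",
                 "f50e6a55-f3b5-402b-91b2-12d34386f656"]
    print(my_share(instances, agents, "compute-0"))
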
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.138 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.140 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.141 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:09:06.140498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.141 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.144 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:09:06.145708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.146 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.147 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.151 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:09:06.150738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.151 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.154 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.154 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:09:06.154035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.155 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.156 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.157 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.158 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.159 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.159 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 6924800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:09:06.157473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.160 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.163 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.163 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:09:06.163136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.164 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.166 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.167 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:09:06.167136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.168 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.169 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.169 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.170 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.171 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.173 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T20:09:06.172948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.173 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6>]
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.174 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.176 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:09:06.176002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.177 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.177 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance f50e6a55-f3b5-402b-91b2-12d34386f656: ceilometer.compute.pollsters.NoVolumeException
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
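
memory.usage illustrates the per-instance skip path: one instance reports 48.8828125 MB while the other reports Unavailable, which the pollster surfaces as a WARNING instead of a sample. A sketch of that branch, assuming the inspector hands back None for unavailable stats (NoVolumeException is named in the log; the helper below is illustrative):

    class NoVolumeException(Exception):
        pass

    def stats_to_sample(instance_id, meter, value):
        if value is None:  # inspector reported the stat as "Unavailable"
            raise NoVolumeException()
        return {"resource_id": instance_id, "meter": meter, "volume": value}

    samples = []
    polled = [("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 48.8828125),
              ("f50e6a55-f3b5-402b-91b2-12d34386f656", None)]
    for instance_id, value in polled:
        try:
            samples.append(stats_to_sample(instance_id, "memory.usage", value))
        except NoVolumeException:
            print(f"memory.usage statistic is not available for instance {instance_id}")
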
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.179 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:09:06.179625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.180 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.183 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:09:06.183102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.184 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.185 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.186 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.186 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.187 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.187 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.187 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.188 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:09:06.186500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.190 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.190 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.190 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:09:06.190236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.192 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.192 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.193 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:09:06.192666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.194 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.195 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.195 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 56740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.195 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 2450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
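
The cpu meter is cumulative CPU time in nanoseconds (56740000000 ns is about 56.74 s for the first instance), so utilization only emerges by differencing two polls. A worked sketch; the five-minute interval, the earlier counter value, and the single vCPU are assumptions:

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        """Average utilization between two cumulative cpu samples."""
        return 100.0 * ((curr_ns - prev_ns) / NS_PER_S) / (interval_s * vcpus)

    # Hypothetical previous poll 300 s earlier at 56_440_000_000 ns:
    print(f"{cpu_util_percent(56_440_000_000, 56_740_000_000, 300, 1):.2f}% busy")
    # -> 0.10% busy
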
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:09:06.194991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.197 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.197 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.198 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.199 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:09:06.197482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.199 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 801083505 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.200 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 2943348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.208 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.209 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:09:06 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:09:06.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
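
Each meter in a cycle is bracketed by matching INFO lines, "Polling pollster X ..." and "Finished polling pollster X ...", followed by the "Finished processing pollster [X]" block above. That symmetry makes a stalled pollster easy to spot when auditing a captured journal slice like this one (reads log text from stdin; the regexes match the oslo.log format shown here):

    import re
    import sys

    started, finished = set(), set()
    for line in sys.stdin:
        if m := re.search(r"Polling pollster (\S+) in the context", line):
            started.add(m.group(1))
        if m := re.search(r"Finished polling pollster (\S+) in the context", line):
            finished.add(m.group(1))

    for meter in sorted(started - finished):
        print(f"{meter}: started but never finished")
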
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:09:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906147760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.618 2 DEBUG oslo_concurrency.processutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
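
The "df" mon command audited by ceph-mon above is nova's periodic free-disk probe: with RBD-backed storage it shells out to ceph df once per resource-update pass (0.623 s here). A sketch of consuming that output; "stats"/"total_avail_bytes" are standard ceph df JSON fields, but treat the exact schema as an assumption:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(f"cluster free: {stats['total_avail_bytes'] / 1024**3:.1f} GiB")
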
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.628 2 DEBUG nova.compute.provider_tree [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.646 2 DEBUG nova.scheduler.client.report [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.680 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
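The "acquired"/"released" pairs around "compute_resources" come from oslo.concurrency's lockutils, which nova uses to serialize the resource tracker. A minimal sketch of the same pattern; oslo.concurrency is the real library, the guarded function body here is hypothetical:

```python
from oslo_concurrency import lockutils

# All callers sharing the "compute_resources" lock name are serialized,
# producing the acquire/release DEBUG lines seen in the journal.
@lockutils.synchronized("compute_resources")
def update_usage():
    ...  # mutate the in-memory resource tracker state here
```

The held/waited durations in the log (e.g. "held 0.824s") are measured around exactly this kind of critical section.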
Oct 02 20:09:06 compute-0 podman[459161]: 2025-10-02 20:09:06.689961645 +0000 UTC m=+0.104964891 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:09:06 compute-0 podman[459160]: 2025-10-02 20:09:06.707333274 +0000 UTC m=+0.121381585 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, vcs-type=git, distribution-scope=public)
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.722 2 INFO nova.scheduler.client.report [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Deleted allocations for instance 163b34fe-f5ad-414e-bcfa-8a956779638a
Oct 02 20:09:06 compute-0 ceph-mon[191910]: pgmap v1931: 321 pgs: 321 active+clean; 225 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 108 op/s
Oct 02 20:09:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3906147760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:06 compute-0 nova_compute[355794]: 2025-10-02 20:09:06.795 2 DEBUG oslo_concurrency.lockutils [None req-05e749bd-1b5f-4eaa-955d-0835e161560a 52586757ab2e427f98b2a1d571ef51d2 1bb915f165644ddbb5971268b645746a - - default default] Lock "163b34fe-f5ad-414e-bcfa-8a956779638a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:07 compute-0 nova_compute[355794]: 2025-10-02 20:09:07.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 200 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:08 compute-0 ceph-mon[191910]: pgmap v1932: 321 pgs: 321 active+clean; 200 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.836 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435733.8319538, af875636-eb00-48b8-b1f4-589898eafecb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.837 2 INFO nova.compute.manager [-] [instance: af875636-eb00-48b8-b1f4-589898eafecb] VM Stopped (Lifecycle Event)
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.841 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.842 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.843 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.844 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:09:08 compute-0 nova_compute[355794]: 2025-10-02 20:09:08.873 2 DEBUG nova.compute.manager [None req-7e157111-95d1-4afd-9187-66adc76d468e - - - - - -] [instance: af875636-eb00-48b8-b1f4-589898eafecb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:09 compute-0 nova_compute[355794]: 2025-10-02 20:09:09.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 20:09:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:10 compute-0 ceph-mon[191910]: pgmap v1933: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 20:09:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 963 KiB/s wr, 133 op/s
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.725 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.740 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.741 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
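The heal task above writes the full network_info JSON blob into the instance info cache. A sketch of pulling the fixed and floating addresses back out of that structure (the layout is copied from the blob logged a few lines up; the helper name is ours):

```python
import json

def addresses(network_info_json: str):
    """Yield (fixed_ip, [floating_ips]) pairs from a nova network_info blob."""
    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed = ip["address"]
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                yield fixed, floats

# For the blob logged above this yields:
# ("192.168.0.37", ["192.168.122.205"])
```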
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.742 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.742 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.766 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.767 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.768 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.768 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:09:11 compute-0 nova_compute[355794]: 2025-10-02 20:09:11.769 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:09:12 compute-0 ovn_controller[88435]: 2025-10-02T20:09:12Z|00172|binding|INFO|Releasing lport ad4572b7-e012-418a-9c6b-97a8e10ee248 from this chassis (sb_readonly=0)
Oct 02 20:09:12 compute-0 ovn_controller[88435]: 2025-10-02T20:09:12Z|00173|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:09:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576045427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.391 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.494 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.496 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.497 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.504 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:09:12 compute-0 nova_compute[355794]: 2025-10-02 20:09:12.504 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:09:12 compute-0 ceph-mon[191910]: pgmap v1934: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 963 KiB/s wr, 133 op/s
Oct 02 20:09:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2576045427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:12 compute-0 ovn_controller[88435]: 2025-10-02T20:09:12Z|00174|binding|INFO|Releasing lport ad4572b7-e012-418a-9c6b-97a8e10ee248 from this chassis (sb_readonly=0)
Oct 02 20:09:12 compute-0 ovn_controller[88435]: 2025-10-02T20:09:12Z|00175|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.064 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.066 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3515MB free_disk=59.93428421020508GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.066 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.066 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009001264691997135 of space, bias 1.0, pg target 0.27003794075991405 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
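Each pg_autoscaler line above can be reproduced from the numbers it prints: pg target ≈ capacity ratio × bias × (OSD count × target PGs per OSD). With this cluster's three OSDs and the default mon_target_pg_per_osd of 100, the multiplier is 300, which matches the logged targets to the last digits. A quick check (the 300 multiplier is inferred from these lines, not read from the cluster's config):

```python
# pg_target = capacity_ratio * bias * (n_osds * target_pg_per_osd)
MULT = 3 * 100  # three OSDs x mon_target_pg_per_osd=100 (assumed defaults)

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0009001264691997135, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * MULT)
# .mgr               -> ~0.0021557249951162337 (matches the log)
# vms                -> ~0.27003794075991405   (matches the log)
# cephfs.cephfs.meta -> ~0.0006104707950771635 (matches the log)
```

The "quantized to" value is then the PG count the autoscaler actually keeps; tiny targets are left at their current pg_num unless they drift far enough to justify a change.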
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.172 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.173 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.173 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.173 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.260 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 265 KiB/s wr, 122 op/s
Oct 02 20:09:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:09:13 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719086043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:13 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2719086043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.887 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.898 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.923 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.959 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:09:13 compute-0 nova_compute[355794]: 2025-10-02 20:09:13.960 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
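The inventory dict logged above fixes what placement will schedule, and the earlier "Final resource view" line follows from the two remaining instances. A worked check, using only numbers from the lines above (capacity is computed the way placement does, (total - reserved) × allocation_ratio):

```python
inv = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    schedulable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(rc, schedulable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

# Final resource view: reserved RAM plus the two instances' allocations
used_ram   = 512 + 512 + 128  # -> 1152 MB, matching used_ram=1152MB
used_disk  = 2 + 1            # -> 3 GB,    matching used_disk=3GB
used_vcpus = 1 + 1            # -> 2,       matching used_vcpus=2
```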
Oct 02 20:09:14 compute-0 nova_compute[355794]: 2025-10-02 20:09:14.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:15 compute-0 ceph-mon[191910]: pgmap v1935: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 265 KiB/s wr, 122 op/s
Oct 02 20:09:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 112 op/s
Oct 02 20:09:15 compute-0 nova_compute[355794]: 2025-10-02 20:09:15.794 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:15 compute-0 nova_compute[355794]: 2025-10-02 20:09:15.795 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:15 compute-0 nova_compute[355794]: 2025-10-02 20:09:15.795 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:09:16 compute-0 ceph-mon[191910]: pgmap v1936: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 112 op/s
Oct 02 20:09:16 compute-0 podman[459247]: 2025-10-02 20:09:16.685956318 +0000 UTC m=+0.117419929 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:09:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 95 op/s
Oct 02 20:09:18 compute-0 nova_compute[355794]: 2025-10-02 20:09:18.604 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759435743.6021688, 163b34fe-f5ad-414e-bcfa-8a956779638a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:09:18 compute-0 nova_compute[355794]: 2025-10-02 20:09:18.606 2 INFO nova.compute.manager [-] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] VM Stopped (Lifecycle Event)
Oct 02 20:09:18 compute-0 nova_compute[355794]: 2025-10-02 20:09:18.633 2 DEBUG nova.compute.manager [None req-bf4d7d2a-50a1-435c-ba96-02a56b3dc09a - - - - - -] [instance: 163b34fe-f5ad-414e-bcfa-8a956779638a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:09:18 compute-0 nova_compute[355794]: 2025-10-02 20:09:18.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:18 compute-0 ceph-mon[191910]: pgmap v1937: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 95 op/s
Oct 02 20:09:19 compute-0 nova_compute[355794]: 2025-10-02 20:09:19.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 70 op/s
Oct 02 20:09:20 compute-0 nova_compute[355794]: 2025-10-02 20:09:20.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:09:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930382400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:09:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:09:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930382400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:09:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:20 compute-0 ceph-mon[191910]: pgmap v1938: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 70 op/s
Oct 02 20:09:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/930382400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:09:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/930382400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:09:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 20 op/s
Oct 02 20:09:22 compute-0 ceph-mon[191910]: pgmap v1939: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 20 op/s
Oct 02 20:09:22 compute-0 nova_compute[355794]: 2025-10-02 20:09:22.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:23 compute-0 nova_compute[355794]: 2025-10-02 20:09:23.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4 op/s
Oct 02 20:09:24 compute-0 nova_compute[355794]: 2025-10-02 20:09:24.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:24 compute-0 podman[459269]: 2025-10-02 20:09:24.711848436 +0000 UTC m=+0.117482120 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 20:09:24 compute-0 podman[459268]: 2025-10-02 20:09:24.735584762 +0000 UTC m=+0.150536862 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:09:24 compute-0 ceph-mon[191910]: pgmap v1940: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4 op/s
Oct 02 20:09:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:26 compute-0 ceph-mon[191910]: pgmap v1941: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:28 compute-0 sudo[459308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:28 compute-0 sudo[459308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:28 compute-0 sudo[459308]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:28 compute-0 sudo[459333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:09:28 compute-0 sudo[459333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:28 compute-0 sudo[459333]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:28 compute-0 sudo[459358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:28 compute-0 sudo[459358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:28 compute-0 nova_compute[355794]: 2025-10-02 20:09:28.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:28 compute-0 sudo[459358]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:28 compute-0 sudo[459383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:09:28 compute-0 sudo[459383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:28 compute-0 ceph-mon[191910]: pgmap v1942: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:29 compute-0 nova_compute[355794]: 2025-10-02 20:09:29.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:29 compute-0 sudo[459383]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:29 compute-0 podman[157186]: time="2025-10-02T20:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:09:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:09:29 compute-0 sudo[459438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:29 compute-0 sudo[459438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:29 compute-0 sudo[459438]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9560 "" "Go-http-client/1.1"
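The podman[157186] lines are the libpod REST service answering GETs over its unix socket; the "Go-http-client/1.1" user agent is the prometheus-podman-exporter container configured earlier with CONTAINER_HOST=unix:///run/podman/podman.sock. A stdlib-only sketch of issuing the same containers/json query from Python (API path and socket path copied from the log; the connection subclass is ours):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Names"], c["State"])
```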
Oct 02 20:09:29 compute-0 sudo[459463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:09:29 compute-0 sudo[459463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:29 compute-0 sudo[459463]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:29 compute-0 sudo[459488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:29 compute-0 sudo[459488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:30 compute-0 sudo[459488]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:30 compute-0 sudo[459513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 20:09:30 compute-0 sudo[459513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:30 compute-0 sudo[459513]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1bbb9fd2-2daf-498e-aeb3-51755954d46e does not exist
Oct 02 20:09:30 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c3e7556d-92e9-437e-9838-79b249001291 does not exist
Oct 02 20:09:30 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c4117e6e-00d2-4561-95e1-e74b05812a3e does not exist
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:09:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:09:30 compute-0 sudo[459555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:30 compute-0 sudo[459555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:30 compute-0 sudo[459555]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:30 compute-0 sudo[459580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:09:30 compute-0 sudo[459580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:30 compute-0 sudo[459580]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:30 compute-0 ceph-mon[191910]: pgmap v1943: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:09:30 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:09:30 compute-0 sudo[459605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:30 compute-0 sudo[459605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:30 compute-0 sudo[459605]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:31 compute-0 sudo[459630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:09:31 compute-0 sudo[459630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:31 compute-0 openstack_network_exporter[372736]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:09:31 compute-0 openstack_network_exporter[372736]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:09:31 compute-0 openstack_network_exporter[372736]: ERROR   20:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:09:31 compute-0 openstack_network_exporter[372736]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:09:31 compute-0 openstack_network_exporter[372736]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
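The exporter errors above mean it found no ovn-northd or ovsdb-server control sockets on this node, which is expected on a compute host where only ovs-vswitchd and ovn-controller run (northd lives on the control plane). A sketch of the discovery it is effectively doing; the glob patterns assume the standard runtime directories, not the exporter's actual code:

```python
import glob

# appctl-style tools locate daemons via their *.ctl control sockets;
# ovn-northd's socket only exists where that daemon actually runs.
PATTERNS = [
    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "/var/run/openvswitch/ovsdb-server.*.ctl",
    "/var/run/ovn/ovn-northd.*.ctl",
]
for pat in PATTERNS:
    hits = glob.glob(pat)
    print(pat, "->", hits if hits else "no control socket files found")
```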
Oct 02 20:09:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:31 compute-0 podman[459693]: 2025-10-02 20:09:31.756741089 +0000 UTC m=+0.114484811 container create a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 02 20:09:31 compute-0 podman[459693]: 2025-10-02 20:09:31.697796664 +0000 UTC m=+0.055540396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:31 compute-0 systemd[1]: Started libpod-conmon-a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3.scope.
Oct 02 20:09:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:31 compute-0 podman[459693]: 2025-10-02 20:09:31.987144568 +0000 UTC m=+0.344888330 container init a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:09:32 compute-0 podman[459693]: 2025-10-02 20:09:32.003187082 +0000 UTC m=+0.360930794 container start a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 20:09:32 compute-0 quizzical_wu[459708]: 167 167
Oct 02 20:09:32 compute-0 systemd[1]: libpod-a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3.scope: Deactivated successfully.
Oct 02 20:09:32 compute-0 conmon[459708]: conmon a09f5921ff7a62bbd07a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3.scope/container/memory.events
Oct 02 20:09:32 compute-0 podman[459693]: 2025-10-02 20:09:32.051039914 +0000 UTC m=+0.408783626 container attach a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 20:09:32 compute-0 podman[459693]: 2025-10-02 20:09:32.051758343 +0000 UTC m=+0.409502065 container died a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 20:09:32 compute-0 ceph-mon[191910]: pgmap v1944: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d5ebb7710a7e30b174b791c6f39402b413dff303441f9e02bbb77556ccc520-merged.mount: Deactivated successfully.
Oct 02 20:09:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:32.324 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:09:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:32.326 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:09:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:32.328 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:09:32 compute-0 podman[459693]: 2025-10-02 20:09:32.531581104 +0000 UTC m=+0.889324826 container remove a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 20:09:32 compute-0 podman[459713]: 2025-10-02 20:09:32.540751136 +0000 UTC m=+0.473451264 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:09:32 compute-0 podman[459715]: 2025-10-02 20:09:32.547616878 +0000 UTC m=+0.480092910 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, version=9.4, vcs-type=git, container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Oct 02 20:09:32 compute-0 systemd[1]: libpod-conmon-a09f5921ff7a62bbd07a353977c5b1f3397332bf1a0c93912722e3ccf22001f3.scope: Deactivated successfully.
Oct 02 20:09:32 compute-0 ovn_controller[88435]: 2025-10-02T20:09:32Z|00176|binding|INFO|Releasing lport ad4572b7-e012-418a-9c6b-97a8e10ee248 from this chassis (sb_readonly=0)
Oct 02 20:09:32 compute-0 ovn_controller[88435]: 2025-10-02T20:09:32Z|00177|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:09:32 compute-0 nova_compute[355794]: 2025-10-02 20:09:32.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:32 compute-0 podman[459767]: 2025-10-02 20:09:32.831823327 +0000 UTC m=+0.103637556 container create 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:09:32 compute-0 podman[459767]: 2025-10-02 20:09:32.78492909 +0000 UTC m=+0.056743369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:32 compute-0 systemd[1]: Started libpod-conmon-3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2.scope.
Oct 02 20:09:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:33 compute-0 podman[459767]: 2025-10-02 20:09:33.105240442 +0000 UTC m=+0.377054741 container init 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 20:09:33 compute-0 podman[459767]: 2025-10-02 20:09:33.124656914 +0000 UTC m=+0.396471143 container start 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:09:33 compute-0 podman[459767]: 2025-10-02 20:09:33.238606651 +0000 UTC m=+0.510420870 container attach 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:09:33 compute-0 nova_compute[355794]: 2025-10-02 20:09:33.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:09:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:34 compute-0 suspicious_wright[459782]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:09:34 compute-0 suspicious_wright[459782]: --> relative data size: 1.0
Oct 02 20:09:34 compute-0 suspicious_wright[459782]: --> All data devices are unavailable
Oct 02 20:09:34 compute-0 nova_compute[355794]: 2025-10-02 20:09:34.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:34 compute-0 systemd[1]: libpod-3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2.scope: Deactivated successfully.
Oct 02 20:09:34 compute-0 systemd[1]: libpod-3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2.scope: Consumed 1.417s CPU time.
Oct 02 20:09:34 compute-0 podman[459837]: 2025-10-02 20:09:34.690478531 +0000 UTC m=+0.041855755 container died 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 20:09:34 compute-0 podman[459812]: 2025-10-02 20:09:34.802746714 +0000 UTC m=+0.212257152 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:09:34 compute-0 podman[459811]: 2025-10-02 20:09:34.814910485 +0000 UTC m=+0.224069034 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:09:34 compute-0 ceph-mon[191910]: pgmap v1945: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e26956587529e5e3e6c630ee28b522cc82536617974983addac82c3e0bcd0c1-merged.mount: Deactivated successfully.
Oct 02 20:09:35 compute-0 podman[459813]: 2025-10-02 20:09:35.064805539 +0000 UTC m=+0.452153542 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:09:35 compute-0 podman[459837]: 2025-10-02 20:09:35.162799515 +0000 UTC m=+0.514176769 container remove 3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:09:35 compute-0 systemd[1]: libpod-conmon-3db100426aef350a2092272365a0e0a9f30990a3dc67bc17e92fb74b77ef67a2.scope: Deactivated successfully.
Oct 02 20:09:35 compute-0 sudo[459630]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:35 compute-0 sudo[459880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:35 compute-0 sudo[459880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:35 compute-0 sudo[459880]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:35 compute-0 sudo[459905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:09:35 compute-0 sudo[459905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:35 compute-0 sudo[459905]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:35 compute-0 sudo[459930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:35 compute-0 sudo[459930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:35 compute-0 sudo[459930]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.641462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775641506, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 252, "total_data_size": 1123809, "memory_usage": 1143392, "flush_reason": "Manual Compaction"}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775687017, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1103226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39402, "largest_seqno": 40258, "table_properties": {"data_size": 1098879, "index_size": 1999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 8688, "raw_average_key_size": 17, "raw_value_size": 1090169, "raw_average_value_size": 2193, "num_data_blocks": 89, "num_entries": 497, "num_filter_entries": 497, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435704, "oldest_key_time": 1759435704, "file_creation_time": 1759435775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 45612 microseconds, and 6796 cpu microseconds.
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:09:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.687073) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1103226 bytes OK
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.687097) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.703550) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.703576) EVENT_LOG_v1 {"time_micros": 1759435775703568, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.703601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1119562, prev total WAL file size 1119562, number of live WAL files 2.
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.705067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1077KB)], [92(6874KB)]
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775705243, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8142637, "oldest_snapshot_seqno": -1}
Oct 02 20:09:35 compute-0 sudo[459955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:09:35 compute-0 sudo[459955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5588 keys, 7404160 bytes, temperature: kUnknown
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775846100, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7404160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7369105, "index_size": 19969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 145140, "raw_average_key_size": 25, "raw_value_size": 7270164, "raw_average_value_size": 1301, "num_data_blocks": 795, "num_entries": 5588, "num_filter_entries": 5588, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.847048) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7404160 bytes
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.862672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.8 rd, 52.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.7 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(14.1) write-amplify(6.7) OK, records in: 6108, records dropped: 520 output_compression: NoCompression
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.862707) EVENT_LOG_v1 {"time_micros": 1759435775862691, "job": 54, "event": "compaction_finished", "compaction_time_micros": 140946, "compaction_time_cpu_micros": 35814, "output_level": 6, "num_output_files": 1, "total_output_size": 7404160, "num_input_records": 6108, "num_output_records": 5588, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775863823, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435775866921, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.704518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.867045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.867052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.867053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.867054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:35 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:09:35.867056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.423756077 +0000 UTC m=+0.112416608 container create c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.351855659 +0000 UTC m=+0.040516230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:36 compute-0 systemd[1]: Started libpod-conmon-c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39.scope.
Oct 02 20:09:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.664818898 +0000 UTC m=+0.353479509 container init c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.678740485 +0000 UTC m=+0.367401016 container start c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:09:36 compute-0 vigorous_noether[460036]: 167 167
Oct 02 20:09:36 compute-0 systemd[1]: libpod-c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39.scope: Deactivated successfully.
Oct 02 20:09:36 compute-0 ceph-mon[191910]: pgmap v1946: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.758432998 +0000 UTC m=+0.447093609 container attach c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 20:09:36 compute-0 podman[460020]: 2025-10-02 20:09:36.758956082 +0000 UTC m=+0.447616643 container died c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4959a54e17a58c2c0be4a1db1ba2c253eeb7fc9e33cdc840287598ede1cb4172-merged.mount: Deactivated successfully.
Oct 02 20:09:37 compute-0 podman[460020]: 2025-10-02 20:09:37.150085082 +0000 UTC m=+0.838745633 container remove c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 20:09:37 compute-0 systemd[1]: libpod-conmon-c60814c963ec6f20c969356b77d09b17399bdf1759810039f50cb8d1ed1dba39.scope: Deactivated successfully.
Oct 02 20:09:37 compute-0 podman[460052]: 2025-10-02 20:09:37.283173844 +0000 UTC m=+0.369561222 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:09:37 compute-0 podman[460053]: 2025-10-02 20:09:37.305087422 +0000 UTC m=+0.387890395 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:09:37 compute-0 podman[460101]: 2025-10-02 20:09:37.49636445 +0000 UTC m=+0.134178502 container create 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:09:37 compute-0 podman[460101]: 2025-10-02 20:09:37.433077859 +0000 UTC m=+0.070891922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:37 compute-0 systemd[1]: Started libpod-conmon-8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc.scope.
Oct 02 20:09:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca08bc56a2c28bbf1925458c8492f3de8f27df7d9a23886720ae183cc3680124/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca08bc56a2c28bbf1925458c8492f3de8f27df7d9a23886720ae183cc3680124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca08bc56a2c28bbf1925458c8492f3de8f27df7d9a23886720ae183cc3680124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca08bc56a2c28bbf1925458c8492f3de8f27df7d9a23886720ae183cc3680124/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:37 compute-0 podman[460101]: 2025-10-02 20:09:37.74875081 +0000 UTC m=+0.386564872 container init 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 20:09:37 compute-0 podman[460101]: 2025-10-02 20:09:37.768303506 +0000 UTC m=+0.406117558 container start 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:09:37 compute-0 podman[460101]: 2025-10-02 20:09:37.840182273 +0000 UTC m=+0.477996335 container attach 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]: {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     "0": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "devices": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "/dev/loop3"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             ],
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_name": "ceph_lv0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_size": "21470642176",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "name": "ceph_lv0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "tags": {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_name": "ceph",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.crush_device_class": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.encrypted": "0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_id": "0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.vdo": "0"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             },
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "vg_name": "ceph_vg0"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         }
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     ],
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     "1": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "devices": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "/dev/loop4"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             ],
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_name": "ceph_lv1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_size": "21470642176",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "name": "ceph_lv1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "tags": {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_name": "ceph",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.crush_device_class": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.encrypted": "0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_id": "1",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.vdo": "0"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             },
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "vg_name": "ceph_vg1"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         }
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     ],
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     "2": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "devices": [
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "/dev/loop5"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             ],
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_name": "ceph_lv2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_size": "21470642176",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "name": "ceph_lv2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "tags": {
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.cluster_name": "ceph",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.crush_device_class": "",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.encrypted": "0",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osd_id": "2",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:                 "ceph.vdo": "0"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             },
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "type": "block",
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:             "vg_name": "ceph_vg2"
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:         }
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]:     ]
Oct 02 20:09:38 compute-0 laughing_bardeen[460116]: }
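The JSON block above is the stdout of the one-shot laughing_bardeen container, i.e. `ceph-volume lvm list --format json`: a map from OSD id to the logical volume(s) backing it, carrying the ceph.* LVM tags cephadm uses to rediscover OSDs on the host. A minimal parsing sketch (the file name lvm_list.json is hypothetical, standing in for a capture of that stdout):

    import json

    # Parse the `ceph-volume lvm list --format json` payload shown above.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags['ceph.osd_fsid']}")

Note that the same metadata appears twice per entry: flattened in "lv_tags" and structured in "tags"; the structured form is the easier one to consume.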
Oct 02 20:09:38 compute-0 nova_compute[355794]: 2025-10-02 20:09:38.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:38 compute-0 systemd[1]: libpod-8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc.scope: Deactivated successfully.
Oct 02 20:09:38 compute-0 podman[460101]: 2025-10-02 20:09:38.657040348 +0000 UTC m=+1.294854400 container died 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca08bc56a2c28bbf1925458c8492f3de8f27df7d9a23886720ae183cc3680124-merged.mount: Deactivated successfully.
Oct 02 20:09:38 compute-0 ceph-mon[191910]: pgmap v1947: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:09:38 compute-0 podman[460101]: 2025-10-02 20:09:38.932659142 +0000 UTC m=+1.570473164 container remove 8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 20:09:38 compute-0 systemd[1]: libpod-conmon-8f93cdead991006706a5e9f90c2473264948973f7dda7cba9141821c7333b4cc.scope: Deactivated successfully.
Oct 02 20:09:38 compute-0 sudo[459955]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:39 compute-0 sudo[460138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:39 compute-0 sudo[460138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:39 compute-0 sudo[460138]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:39 compute-0 sudo[460163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:09:39 compute-0 sudo[460163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:39 compute-0 sudo[460163]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:39 compute-0 sudo[460188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:39 compute-0 sudo[460188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:39 compute-0 sudo[460188]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:39 compute-0 sudo[460213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
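The sudo line above shows how the mgr's cephadm module drives ceph-volume on the host: it runs the digest-suffixed cephadm copy under /var/lib/ceph/<fsid>/ with python3, which in turn launches a one-shot container from the pinned ceph image. A sketch that replays the same call and parses its output (paths, fsid, image and timeout taken verbatim from the log line; it assumes the wrapper prints only the JSON on stdout):

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Digest-suffixed cephadm copy, exactly as logged in the sudo command above.
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Everything after `--` is handed to ceph-volume inside the container.
    result = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    print(json.loads(result.stdout))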
Oct 02 20:09:39 compute-0 nova_compute[355794]: 2025-10-02 20:09:39.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:39 compute-0 sudo[460213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.186071356 +0000 UTC m=+0.104975361 container create 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.136673532 +0000 UTC m=+0.055577557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:40 compute-0 systemd[1]: Started libpod-conmon-9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8.scope.
Oct 02 20:09:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.504489849 +0000 UTC m=+0.423393934 container init 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.520021189 +0000 UTC m=+0.438925234 container start 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:09:40 compute-0 cranky_dewdney[460293]: 167 167
Oct 02 20:09:40 compute-0 systemd[1]: libpod-9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8.scope: Deactivated successfully.
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.579291703 +0000 UTC m=+0.498195808 container attach 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.582084466 +0000 UTC m=+0.500988511 container died 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:09:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca76cb3f0de61a6ca19178783d2436b74225f4918a67e8c197e26a9f4b063f3c-merged.mount: Deactivated successfully.
Oct 02 20:09:40 compute-0 podman[460277]: 2025-10-02 20:09:40.932612276 +0000 UTC m=+0.851516311 container remove 9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:09:40 compute-0 ceph-mon[191910]: pgmap v1948: 321 pgs: 321 active+clean; 185 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 20:09:40 compute-0 systemd[1]: libpod-conmon-9c798244833c2dea2c95ca93d68c10dc886184f5db3726a252e371ab80bda5a8.scope: Deactivated successfully.
Oct 02 20:09:41 compute-0 podman[460317]: 2025-10-02 20:09:41.317645156 +0000 UTC m=+0.130969736 container create e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 02 20:09:41 compute-0 podman[460317]: 2025-10-02 20:09:41.233194528 +0000 UTC m=+0.046519178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:09:41 compute-0 systemd[1]: Started libpod-conmon-e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9.scope.
Oct 02 20:09:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0247ea5f5f55e92d5e649fc10c3403aea766545506ec2e5a44c93278ff87267c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0247ea5f5f55e92d5e649fc10c3403aea766545506ec2e5a44c93278ff87267c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0247ea5f5f55e92d5e649fc10c3403aea766545506ec2e5a44c93278ff87267c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0247ea5f5f55e92d5e649fc10c3403aea766545506ec2e5a44c93278ff87267c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:09:41 compute-0 podman[460317]: 2025-10-02 20:09:41.5587965 +0000 UTC m=+0.372121070 container init e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:09:41 compute-0 podman[460317]: 2025-10-02 20:09:41.577521444 +0000 UTC m=+0.390846024 container start e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:09:41 compute-0 podman[460317]: 2025-10-02 20:09:41.650967152 +0000 UTC m=+0.464291772 container attach e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:09:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 191 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 599 KiB/s wr, 6 op/s
Oct 02 20:09:41 compute-0 ceph-mon[191910]: pgmap v1949: 321 pgs: 321 active+clean; 191 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 599 KiB/s wr, 6 op/s
Oct 02 20:09:42 compute-0 nova_compute[355794]: 2025-10-02 20:09:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:42.827 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:09:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:42.831 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:09:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:09:42.835 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]: {
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_id": 1,
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "type": "bluestore"
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     },
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_id": 2,
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "type": "bluestore"
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     },
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_id": 0,
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:         "type": "bluestore"
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]:     }
Oct 02 20:09:42 compute-0 festive_kowalevski[460332]: }
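This second JSON block is `ceph-volume raw list --format json` from the festive_kowalevski container. Unlike the lvm listing, it is keyed by OSD fsid (osd_uuid) rather than OSD id, and each entry names the bluestore device-mapper path. A minimal sketch that inverts it into an osd_id-ordered listing (raw_list.json is a hypothetical capture of the stdout above):

    import json

    # Keys are OSD fsids; values describe the bluestore device and osd_id.
    with open("raw_list.json") as f:
        raw_list = json.load(f)

    for dev in sorted(raw_list.values(), key=lambda d: d["osd_id"]):
        print(f"osd.{dev['osd_id']} ({dev['type']}): {dev['device']}"
              f" cluster={dev['ceph_fsid']}")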
Oct 02 20:09:42 compute-0 systemd[1]: libpod-e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9.scope: Deactivated successfully.
Oct 02 20:09:42 compute-0 systemd[1]: libpod-e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9.scope: Consumed 1.262s CPU time.
Oct 02 20:09:42 compute-0 podman[460317]: 2025-10-02 20:09:42.908897345 +0000 UTC m=+1.722221925 container died e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0247ea5f5f55e92d5e649fc10c3403aea766545506ec2e5a44c93278ff87267c-merged.mount: Deactivated successfully.
Oct 02 20:09:43 compute-0 podman[460317]: 2025-10-02 20:09:43.194665866 +0000 UTC m=+2.007990466 container remove e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:09:43 compute-0 systemd[1]: libpod-conmon-e6ff014a504fdd3eb87d465a23588393d46967f4607d2c757524dedbb35211c9.scope: Deactivated successfully.
Oct 02 20:09:43 compute-0 sudo[460213]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:09:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:09:43 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:43 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8c408af5-2ffc-425f-bcfe-05ee77c6249b does not exist
Oct 02 20:09:43 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 56cfae14-479a-43f7-a0fb-194d3bb36495 does not exist
Oct 02 20:09:43 compute-0 sudo[460379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:09:43 compute-0 sudo[460379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:43 compute-0 sudo[460379]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:43 compute-0 sudo[460404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:09:43 compute-0 sudo[460404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:09:43 compute-0 sudo[460404]: pam_unix(sudo:session): session closed for user root
Oct 02 20:09:43 compute-0 nova_compute[355794]: 2025-10-02 20:09:43.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 199 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.0 MiB/s wr, 25 op/s
Oct 02 20:09:43 compute-0 ovn_controller[88435]: 2025-10-02T20:09:43Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:37:9a 10.100.1.149
Oct 02 20:09:43 compute-0 ovn_controller[88435]: 2025-10-02T20:09:43Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:37:9a 10.100.1.149
Oct 02 20:09:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:44 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:09:44 compute-0 ceph-mon[191910]: pgmap v1950: 321 pgs: 321 active+clean; 199 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.0 MiB/s wr, 25 op/s
Oct 02 20:09:44 compute-0 nova_compute[355794]: 2025-10-02 20:09:44.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 209 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 02 20:09:46 compute-0 ceph-mon[191910]: pgmap v1951: 321 pgs: 321 active+clean; 209 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 02 20:09:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 216 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:09:47 compute-0 podman[460429]: 2025-10-02 20:09:47.751136477 +0000 UTC m=+0.157407195 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 20:09:48 compute-0 nova_compute[355794]: 2025-10-02 20:09:48.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:48 compute-0 ceph-mon[191910]: pgmap v1952: 321 pgs: 321 active+clean; 216 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:09:49 compute-0 nova_compute[355794]: 2025-10-02 20:09:49.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 20:09:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:50 compute-0 ceph-mon[191910]: pgmap v1953: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 02 20:09:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 20:09:52 compute-0 ceph-mon[191910]: pgmap v1954: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 02 20:09:53 compute-0 nova_compute[355794]: 2025-10-02 20:09:53.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 20:09:54 compute-0 nova_compute[355794]: 2025-10-02 20:09:54.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:54 compute-0 ceph-mon[191910]: pgmap v1955: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 20:09:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:09:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Oct 02 20:09:55 compute-0 podman[460450]: 2025-10-02 20:09:55.716150529 +0000 UTC m=+0.128168903 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:09:55 compute-0 podman[460451]: 2025-10-02 20:09:55.741869928 +0000 UTC m=+0.151512669 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true)
Oct 02 20:09:56 compute-0 ceph-mon[191910]: pgmap v1956: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Oct 02 20:09:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 80 KiB/s wr, 11 op/s
Oct 02 20:09:58 compute-0 nova_compute[355794]: 2025-10-02 20:09:58.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:58 compute-0 ceph-mon[191910]: pgmap v1957: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 80 KiB/s wr, 11 op/s
Oct 02 20:09:59 compute-0 nova_compute[355794]: 2025-10-02 20:09:59.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:09:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 2 op/s
Oct 02 20:09:59 compute-0 podman[157186]: time="2025-10-02T20:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:09:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:09:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9559 "" "Go-http-client/1.1"
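The two GET lines above are prometheus-podman-exporter polling podman's libpod REST API over the service socket (the exporter's config_data earlier in the log mounts unix:///run/podman/podman.sock). A standard-library sketch of the same "list containers" call (socket path and API version prefix taken from the log; the Names/State field names assume the libpod JSON schema):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; podman exposes no TCP listener here."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c["Names"], c["State"])  # Names is a list; State e.g. "running"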
Oct 02 20:10:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:00 compute-0 ceph-mon[191910]: pgmap v1958: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 2 op/s
Oct 02 20:10:01 compute-0 openstack_network_exporter[372736]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:10:01 compute-0 openstack_network_exporter[372736]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:10:01 compute-0 openstack_network_exporter[372736]: ERROR   20:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:10:01 compute-0 openstack_network_exporter[372736]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:10:01 compute-0 openstack_network_exporter[372736]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:10:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Oct 02 20:10:02 compute-0 ceph-mon[191910]: pgmap v1959: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Oct 02 20:10:03 compute-0 nova_compute[355794]: 2025-10-02 20:10:03.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:03 compute-0 nova_compute[355794]: 2025-10-02 20:10:03.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:10:03
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.data', 'backups']
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:03 compute-0 nova_compute[355794]: 2025-10-02 20:10:03.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Oct 02 20:10:03 compute-0 podman[460492]: 2025-10-02 20:10:03.72888378 +0000 UTC m=+0.136949204 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:10:03 compute-0 podman[460493]: 2025-10-02 20:10:03.73681744 +0000 UTC m=+0.149216789 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, name=ubi9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:10:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:10:04 compute-0 nova_compute[355794]: 2025-10-02 20:10:04.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:04 compute-0 ceph-mon[191910]: pgmap v1960: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Oct 02 20:10:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:05 compute-0 podman[460528]: 2025-10-02 20:10:05.706204905 +0000 UTC m=+0.115062277 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:10:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 20:10:05 compute-0 podman[460529]: 2025-10-02 20:10:05.72647472 +0000 UTC m=+0.128704107 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:10:05 compute-0 podman[460530]: 2025-10-02 20:10:05.776147471 +0000 UTC m=+0.172034921 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
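The three health_status=healthy events above are podman's periodic healthchecks firing for ovn_metadata_agent, iscsid and ovn_controller. The same status can be read back with `podman inspect`; a minimal sketch, with a defensive lookup because the field moved from State.Healthcheck to State.Health across podman releases:

    import json
    import subprocess

    def health_status(name: str) -> str:
        # `podman inspect` returns a JSON array with one entry per container.
        data = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]
        state = data.get("State", {})
        # Newer podman mirrors Docker's State.Health; older builds used
        # State.Healthcheck, so check both.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    for name in ("ovn_metadata_agent", "iscsid", "ovn_controller"):
        print(name, health_status(name))
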
Oct 02 20:10:05 compute-0 ceph-mon[191910]: pgmap v1961: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 20:10:07 compute-0 nova_compute[355794]: 2025-10-02 20:10:07.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:07 compute-0 podman[460593]: 2025-10-02 20:10:07.709455304 +0000 UTC m=+0.113584967 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:10:07 compute-0 podman[460592]: 2025-10-02 20:10:07.713533412 +0000 UTC m=+0.125870621 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, version=9.6)
Oct 02 20:10:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.759 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.760 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:10:08 compute-0 nova_compute[355794]: 2025-10-02 20:10:08.761 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:10:08 compute-0 ceph-mon[191910]: pgmap v1962: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:09 compute-0 nova_compute[355794]: 2025-10-02 20:10:09.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.408 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
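The network_info blob above is plain JSON, so the instance's addressing can be pulled out with a few lines. A sketch over a trimmed copy of the logged structure:

    import json

    # Trimmed copy of the network_info logged above.
    network_info = json.loads("""
    [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5",
      "address": "fa:16:3e:45:37:9a",
      "network": {"subnets": [{"cidr": "10.100.0.0/16",
                               "gateway": {"address": "10.100.0.1"},
                               "ips": [{"address": "10.100.1.149"}]}]},
      "devname": "tapf069cce3-85"}]
    """)

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        # tapf069cce3-85 fa:16:3e:45:37:9a ['10.100.1.149']
        print(vif["devname"], vif["address"], ips)
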
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.434 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.435 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
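The Acquiring/Acquired/Releasing trio around `refresh_cache-<uuid>` above is the standard logging of oslo.concurrency's `lockutils.lock()` context manager (lockutils.py:312/315/333 in the log). A minimal sketch of the same pattern; `refresh_network_info_cache` is a stand-in for the Neutron round-trip nova performs here:

    from oslo_concurrency import lockutils

    INSTANCE = "f50e6a55-f3b5-402b-91b2-12d34386f656"

    def refresh_network_info_cache(uuid: str) -> None:
        # Placeholder for nova.network.neutron's cache refresh.
        print(f"refreshing network info cache for {uuid}")

    # lockutils.lock() emits the same Acquiring/Acquired/Releasing DEBUG
    # lines seen above while the body runs under the named lock.
    with lockutils.lock(f"refresh_cache-{INSTANCE}"):
        refresh_network_info_cache(INSTANCE)
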
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.437 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.665 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.666 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.667 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.667 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:10:10 compute-0 nova_compute[355794]: 2025-10-02 20:10:10.668 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:10:10 compute-0 ceph-mon[191910]: pgmap v1963: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:10:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992082323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.271 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
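The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` call above is nova's resource tracker sizing the RBD backend (it took 0.602 s this cycle, as logged). The same call can be reproduced directly; a sketch using the logged id and conf path, noting that exact pool-stat field names vary slightly across Ceph releases:

    import json
    import subprocess

    # Same invocation nova_compute logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)

    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.0f} GiB free of {total / 2**30:.0f} GiB")
    for pool in stats["pools"]:
        used = pool["stats"].get("stored", pool["stats"].get("bytes_used"))
        print(pool["name"], used)
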
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.401 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.403 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.403 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.412 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.413 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:10:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1992082323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.933 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.935 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.90974807739258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.935 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:10:11 compute-0 nova_compute[355794]: 2025-10-02 20:10:11.936 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.023 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.024 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.024 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.024 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.080 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:10:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:10:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284442019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.634 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.648 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.672 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.675 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:10:12 compute-0 nova_compute[355794]: 2025-10-02 20:10:12.676 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
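The inventory line above pins down the placement math for this node: capacity is `(total - reserved) * allocation_ratio` per resource class, which is why 8 physical vCPUs advertise as 32 schedulable ones. A worked check with the logged values:

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    # Placement's capacity rule: (total - reserved) * allocation_ratio.
    for rc, v in inventory.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
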
Oct 02 20:10:12 compute-0 ceph-mon[191910]: pgmap v1964: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/284442019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013091419019014131 of space, bias 1.0, pg target 0.39274257057042394 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
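Each pg_autoscaler pair above multiplies a pool's share of cluster space by the PG budget (target divided by usage works out to 300 here) and quantizes the result to a power of two, floored at the pool's pg_num_min; that floor is a plausible reading of why cephfs.cephfs.meta lands on 16 while the OpenStack pools hold at 32 and '.mgr' at 1. A simplified sketch of the quantization step only, not the real mgr module:

    import math

    def quantize_pg_target(target: float, pg_num_min: int) -> int:
        # Round a fractional PG target up to a power of two, then floor it
        # at the pool's pg_num_min (simplified reading of the lines above).
        pow2 = 1 if target <= 1 else 1 << (math.ceil(target) - 1).bit_length()
        return max(pow2, pg_num_min)

    # Reproduces the logged decisions, assuming pg_num_min of 1/32/16:
    print(quantize_pg_target(0.0021557249951162337, 1))    # .mgr        -> 1
    print(quantize_pg_target(0.39274257057042394, 32))     # vms         -> 32
    print(quantize_pg_target(0.0006104707950771635, 16))   # cephfs.meta -> 16
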
Oct 02 20:10:13 compute-0 nova_compute[355794]: 2025-10-02 20:10:13.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:14 compute-0 nova_compute[355794]: 2025-10-02 20:10:14.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:14 compute-0 ceph-mon[191910]: pgmap v1965: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:10:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:15 compute-0 nova_compute[355794]: 2025-10-02 20:10:15.673 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:15 compute-0 nova_compute[355794]: 2025-10-02 20:10:15.674 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:15 compute-0 nova_compute[355794]: 2025-10-02 20:10:15.674 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 20:10:16 compute-0 nova_compute[355794]: 2025-10-02 20:10:16.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:10:16 compute-0 ceph-mon[191910]: pgmap v1966: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 20:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:10:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2346 writes, 8456 keys, 2346 commit groups, 1.0 writes per commit group, ingest: 8.13 MB, 0.01 MB/s
                                            Interval WAL: 2346 writes, 966 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
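The DUMPING STATS stanzas each ceph-osd prints every 600 s are fixed-format text, so the per-interval write rate can be pulled straight out of the journal. A minimal sketch against the first OSD's stanza above:

    import re

    # One OSD's stanza, as journald renders it above.
    sample = (
        "Uptime(secs): 3600.1 total, 600.0 interval\n"
        "Interval writes: 2346 writes, 8456 keys, 2346 commit groups, "
        "1.0 writes per commit group, ingest: 8.13 MB, 0.01 MB/s\n"
    )

    m = re.search(
        r"Interval writes: (\d+) writes, (\d+) keys.*ingest: ([\d.]+) MB",
        sample)
    writes, keys, ingest_mb = int(m[1]), int(m[2]), float(m[3])
    print(f"{writes} writes / 600 s = {writes / 600:.1f} w/s, "
          f"{ingest_mb} MB ingested")
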
Oct 02 20:10:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 20:10:18 compute-0 nova_compute[355794]: 2025-10-02 20:10:18.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:18 compute-0 podman[460683]: 2025-10-02 20:10:18.74201704 +0000 UTC m=+0.161393439 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:10:18 compute-0 ceph-mon[191910]: pgmap v1967: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct 02 20:10:19 compute-0 nova_compute[355794]: 2025-10-02 20:10:19.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:10:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760863857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:10:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:10:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760863857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:10:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:20 compute-0 ceph-mon[191910]: pgmap v1968: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2760863857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:10:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2760863857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:10:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:22 compute-0 ovn_controller[88435]: 2025-10-02T20:10:22Z|00178|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Oct 02 20:10:22 compute-0 ceph-mon[191910]: pgmap v1969: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:23 compute-0 nova_compute[355794]: 2025-10-02 20:10:23.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:10:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2779 syncs, 3.76 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1922 writes, 6979 keys, 1922 commit groups, 1.0 writes per commit group, ingest: 7.75 MB, 0.01 MB/s
                                            Interval WAL: 1922 writes, 792 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:10:24 compute-0 nova_compute[355794]: 2025-10-02 20:10:24.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:24 compute-0 ceph-mon[191910]: pgmap v1970: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:26 compute-0 podman[460704]: 2025-10-02 20:10:26.682239868 +0000 UTC m=+0.100547804 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:10:26 compute-0 podman[460705]: 2025-10-02 20:10:26.694452401 +0000 UTC m=+0.109991824 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct 02 20:10:26 compute-0 ceph-mon[191910]: pgmap v1971: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:10:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:10:28 compute-0 nova_compute[355794]: 2025-10-02 20:10:28.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:28 compute-0 ceph-mon[191910]: pgmap v1972: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:10:29 compute-0 nova_compute[355794]: 2025-10-02 20:10:29.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:10:29 compute-0 podman[157186]: time="2025-10-02T20:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:10:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:10:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9562 "" "Go-http-client/1.1"
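The two GETs above are prometheus-podman-exporter polling podman's libpod REST API over the service socket (its config earlier sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same endpoint can be queried with only the standard library; a sketch assuming access to that socket, with the /v4.9.3 prefix taken from the logged request line:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over an AF_UNIX socket, enough for the libpod API.
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.socket_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
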
Oct 02 20:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:10:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8697 writes, 34K keys, 8697 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8697 writes, 2105 syncs, 4.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1674 writes, 6961 keys, 1674 commit groups, 1.0 writes per commit group, ingest: 8.89 MB, 0.01 MB/s
                                            Interval WAL: 1674 writes, 643 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:10:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:30 compute-0 ceph-mon[191910]: pgmap v1973: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: ERROR   20:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:10:31 compute-0 openstack_network_exporter[372736]: 
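The exporter errors above all reduce to "no *.ctl control socket found" for the daemons it tries to reach over ovs-appctl-style channels; on a compute node ovn-northd genuinely is not running, so those two are expected noise. A quick way to see which control sockets actually exist, using the conventional run directories (the exporter's search path may differ):

    import glob

    # Conventional socket locations; /var/lib/openvswitch/ovn is what this
    # host bind-mounts to /run/ovn per the ovn_controller config above.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")
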
Oct 02 20:10:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 20:10:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:32 compute-0 ceph-mon[191910]: pgmap v1974: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:10:32.325 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:10:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:10:32.325 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:10:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:10:32.327 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:10:33 compute-0 nova_compute[355794]: 2025-10-02 20:10:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:34 compute-0 nova_compute[355794]: 2025-10-02 20:10:34.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:34 compute-0 podman[460744]: 2025-10-02 20:10:34.695589896 +0000 UTC m=+0.112756946 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:10:34 compute-0 podman[460745]: 2025-10-02 20:10:34.722862526 +0000 UTC m=+0.119589977 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 02 20:10:34 compute-0 ceph-mon[191910]: pgmap v1975: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:36 compute-0 podman[460781]: 2025-10-02 20:10:36.703759655 +0000 UTC m=+0.107017345 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:10:36 compute-0 podman[460780]: 2025-10-02 20:10:36.716211943 +0000 UTC m=+0.127903286 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:10:36 compute-0 podman[460782]: 2025-10-02 20:10:36.779284018 +0000 UTC m=+0.170782148 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 20:10:36 compute-0 ceph-mon[191910]: pgmap v1976: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
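The recurring pgmap lines are the mgr's periodic placement-group digest (PG count and states, data size, used/available capacity, client IO rate). The same numbers can be read back from the cluster; a sketch assuming a working ceph CLI and admin keyring on this host, with key names assumed from the Reef status JSON schema:

    import json
    import subprocess

    # 'ceph status' carries the same pgmap digest the log lines above show.
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    pgmap = json.loads(out.stdout)["pgmap"]
    print(f"{pgmap['num_pgs']} pgs, {pgmap['data_bytes']} bytes data, "
          f"{pgmap['bytes_avail']} bytes avail")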
Oct 02 20:10:38 compute-0 podman[460842]: 2025-10-02 20:10:38.69127631 +0000 UTC m=+0.123041888 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 20:10:38 compute-0 nova_compute[355794]: 2025-10-02 20:10:38.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:38 compute-0 podman[460843]: 2025-10-02 20:10:38.740338894 +0000 UTC m=+0.150178073 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
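The node_exporter container above publishes metrics on host port 9100 ('ports': ['9100:9100']), with the systemd collector enabled and filtered to edpm/ovs/virt/rsyslog units. A sketch of scraping it, assuming plain HTTP; the --web.config.file above may instead enforce TLS, in which case an https URL and the telemetry CA bundle are needed:

    import urllib.request

    # Scrape the /metrics endpoint on host port 9100 (plain-HTTP assumption).
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            # --collector.systemd above exports per-unit state series.
            if line.startswith("node_systemd_unit_state"):
                print(line)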
Oct 02 20:10:38 compute-0 ceph-mon[191910]: pgmap v1977: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:39 compute-0 nova_compute[355794]: 2025-10-02 20:10:39.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:40 compute-0 ceph-mon[191910]: pgmap v1978: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:42 compute-0 ceph-mon[191910]: pgmap v1979: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:43 compute-0 nova_compute[355794]: 2025-10-02 20:10:43.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:43 compute-0 sudo[460886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:43 compute-0 sudo[460886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:43 compute-0 sudo[460886]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:43 compute-0 sudo[460911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:10:43 compute-0 sudo[460911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:43 compute-0 sudo[460911]: pam_unix(sudo:session): session closed for user root
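The /bin/true followed by /bin/which python3 pair is consistent with a remote-host preflight: first confirm passwordless sudo works at all, then locate a python3 interpreter for the cephadm call that follows. The same two probes, re-run as in the sessions above:

    import subprocess

    # Probe 1: passwordless sudo succeeds.
    subprocess.run(["sudo", "/bin/true"], check=True)
    # Probe 2: locate python3 for the subsequent cephadm invocation.
    which = subprocess.run(["sudo", "/bin/which", "python3"],
                           capture_output=True, text=True, check=True)
    print(which.stdout.strip())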
Oct 02 20:10:44 compute-0 sudo[460936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:44 compute-0 sudo[460936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:44 compute-0 sudo[460936]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:44 compute-0 sudo[460961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:10:44 compute-0 sudo[460961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:44 compute-0 nova_compute[355794]: 2025-10-02 20:10:44.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:44 compute-0 sudo[460961]: pam_unix(sudo:session): session closed for user root
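The gather-facts invocation above is cephadm's periodic host inventory; it prints a single JSON document of host facts to stdout. A sketch re-running it by hand, reusing the exact path from the log (the cephadm.<digest> file name is specific to this cluster, and the field names are assumed from cephadm's HostFacts output):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(["sudo", "python3", CEPHADM, "--timeout", "895",
                          "gather-facts"],
                         capture_output=True, text=True, check=True)
    facts = json.loads(out.stdout)
    print(facts.get("hostname"), facts.get("kernel"), facts.get("memory_total_kb"))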
Oct 02 20:10:44 compute-0 ceph-mon[191910]: pgmap v1980: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 02 20:10:44 compute-0 sudo[461015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:44 compute-0 sudo[461015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:44 compute-0 sudo[461015]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:45 compute-0 sudo[461040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:10:45 compute-0 sudo[461040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:45 compute-0 sudo[461040]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:45 compute-0 sudo[461065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:45 compute-0 sudo[461065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:45 compute-0 sudo[461065]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:45 compute-0 sudo[461090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- inventory --format=json-pretty --filter-for-batch
Oct 02 20:10:45 compute-0 sudo[461090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:45 compute-0 podman[461153]: 2025-10-02 20:10:45.855142822 +0000 UTC m=+0.077191008 container create 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:10:45 compute-0 podman[461153]: 2025-10-02 20:10:45.826174777 +0000 UTC m=+0.048222973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:45 compute-0 systemd[1]: Started libpod-conmon-16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8.scope.
Oct 02 20:10:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:46 compute-0 podman[461153]: 2025-10-02 20:10:46.007217615 +0000 UTC m=+0.229265801 container init 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 02 20:10:46 compute-0 podman[461153]: 2025-10-02 20:10:46.03317817 +0000 UTC m=+0.255226356 container start 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 20:10:46 compute-0 podman[461153]: 2025-10-02 20:10:46.042002152 +0000 UTC m=+0.264050358 container attach 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:46 compute-0 gifted_mirzakhani[461169]: 167 167
Oct 02 20:10:46 compute-0 systemd[1]: libpod-16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8.scope: Deactivated successfully.
Oct 02 20:10:46 compute-0 podman[461153]: 2025-10-02 20:10:46.050306912 +0000 UTC m=+0.272355138 container died 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 20:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-18d7283045860cdaeb3d38492e5f2fd65aa372714fccff25de69d5bdcdaf7ee7-merged.mount: Deactivated successfully.
Oct 02 20:10:46 compute-0 podman[461153]: 2025-10-02 20:10:46.126303197 +0000 UTC m=+0.348351383 container remove 16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 02 20:10:46 compute-0 systemd[1]: libpod-conmon-16b588e6037429c388ef8ddd1a85f9ef077d440eae3b2cd79f39edaa155499a8.scope: Deactivated successfully.
Oct 02 20:10:46 compute-0 podman[461192]: 2025-10-02 20:10:46.476727734 +0000 UTC m=+0.109164482 container create dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:46 compute-0 podman[461192]: 2025-10-02 20:10:46.434838938 +0000 UTC m=+0.067275776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:46 compute-0 systemd[1]: Started libpod-conmon-dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac.scope.
Oct 02 20:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b870b6a03f00faac676523816ab8d5932b8e73a38e074d9ff023604716e1572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b870b6a03f00faac676523816ab8d5932b8e73a38e074d9ff023604716e1572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b870b6a03f00faac676523816ab8d5932b8e73a38e074d9ff023604716e1572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b870b6a03f00faac676523816ab8d5932b8e73a38e074d9ff023604716e1572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:46 compute-0 podman[461192]: 2025-10-02 20:10:46.602797891 +0000 UTC m=+0.235234659 container init dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:10:46 compute-0 podman[461192]: 2025-10-02 20:10:46.634806145 +0000 UTC m=+0.267242913 container start dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:10:46 compute-0 podman[461192]: 2025-10-02 20:10:46.641147732 +0000 UTC m=+0.273584510 container attach dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:10:46 compute-0 ceph-mon[191910]: pgmap v1981: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:48 compute-0 nova_compute[355794]: 2025-10-02 20:10:48.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:48 compute-0 ceph-mon[191910]: pgmap v1982: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:49 compute-0 elated_babbage[461208]: [
Oct 02 20:10:49 compute-0 elated_babbage[461208]:     {
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "available": false,
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "ceph_device": false,
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "lsm_data": {},
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "lvs": [],
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "path": "/dev/sr0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "rejected_reasons": [
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "Has a FileSystem",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "Insufficient space (<5GB)"
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         ],
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         "sys_api": {
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "actuators": null,
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "device_nodes": "sr0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "devname": "sr0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "human_readable_size": "482.00 KB",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "id_bus": "ata",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "model": "QEMU DVD-ROM",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "nr_requests": "2",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "parent": "/dev/sr0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "partitions": {},
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "path": "/dev/sr0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "removable": "1",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "rev": "2.5+",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "ro": "0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "rotational": "0",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "sas_address": "",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "sas_device_handle": "",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "scheduler_mode": "mq-deadline",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "sectors": 0,
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "sectorsize": "2048",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "size": 493568.0,
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "support_discard": "2048",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "type": "disk",
Oct 02 20:10:49 compute-0 elated_babbage[461208]:             "vendor": "QEMU"
Oct 02 20:10:49 compute-0 elated_babbage[461208]:         }
Oct 02 20:10:49 compute-0 elated_babbage[461208]:     }
Oct 02 20:10:49 compute-0 elated_babbage[461208]: ]
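The JSON block above is the output of the `ceph-volume ... inventory --format=json-pretty --filter-for-batch` call launched at 20:10:45: one entry per block device, where rejected_reasons explains why /dev/sr0 (a 482 KB QEMU DVD-ROM) cannot host an OSD. A sketch for summarizing such a report, assuming the output was captured to a hypothetical inventory.json:

    import json

    # Load a capture of the inventory output above (hypothetical file name).
    with open("inventory.json") as f:
        inventory = json.load(f)

    for dev in inventory:
        if dev["available"]:
            status = "available"
        else:
            status = "rejected: " + ", ".join(dev["rejected_reasons"])
        print(dev["path"], dev["sys_api"].get("model", ""), "->", status)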
Oct 02 20:10:49 compute-0 systemd[1]: libpod-dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac.scope: Deactivated successfully.
Oct 02 20:10:49 compute-0 podman[461192]: 2025-10-02 20:10:49.15709345 +0000 UTC m=+2.789530198 container died dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 20:10:49 compute-0 systemd[1]: libpod-dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac.scope: Consumed 2.619s CPU time.
Oct 02 20:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b870b6a03f00faac676523816ab8d5932b8e73a38e074d9ff023604716e1572-merged.mount: Deactivated successfully.
Oct 02 20:10:49 compute-0 podman[461192]: 2025-10-02 20:10:49.357668033 +0000 UTC m=+2.990104791 container remove dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 20:10:49 compute-0 podman[463817]: 2025-10-02 20:10:49.367341618 +0000 UTC m=+0.160490726 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 20:10:49 compute-0 systemd[1]: libpod-conmon-dbe42d928b61d06e907c0b3c167d587f8c5988d801a4941ae1896285657602ac.scope: Deactivated successfully.
Oct 02 20:10:49 compute-0 sudo[461090]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ea7db292-9e8a-43ce-b48e-8e6ccaae2758 does not exist
Oct 02 20:10:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a9065282-f04f-4481-9c9f-3c7d248c4beb does not exist
Oct 02 20:10:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2d179f34-7fa2-43d2-a06e-a4b3a97cf7d1 does not exist
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:10:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:10:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
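This run of audited mon_commands is the cephadm mgr module's OSD-creation preamble: persist host device facts, render a minimal conf, fetch the client.bootstrap-osd key, and list any OSDs left in the destroyed state whose ids could be reused. The last query, reproduced directly (assuming admin CLI access on this host):

    import json
    import subprocess

    # Same query as the audited mon_command above; an empty list means no OSD
    # is sitting in the 'destroyed' state awaiting id reuse.
    out = subprocess.run(["ceph", "osd", "tree", "destroyed", "--format", "json"],
                         capture_output=True, text=True, check=True)
    tree = json.loads(out.stdout)
    print([n["id"] for n in tree.get("nodes", []) if n.get("type") == "osd"])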
Oct 02 20:10:49 compute-0 nova_compute[355794]: 2025-10-02 20:10:49.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:49 compute-0 sudo[463848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:49 compute-0 sudo[463848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:49 compute-0 sudo[463848]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:49 compute-0 sudo[463873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:10:49 compute-0 sudo[463873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:49 compute-0 sudo[463873]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:49 compute-0 sudo[463898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:49 compute-0 sudo[463898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:49 compute-0 sudo[463898]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:50 compute-0 sudo[463923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:10:50 compute-0 sudo[463923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
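This is the actual OSD-creation attempt for the default_drive_group spec: `ceph-volume lvm batch --no-auto` over three pre-created logical volumes, with --no-systemd because cephadm manages the units itself. A plan-only variant of the same call, under the assumption that cephadm can infer the container image (the logged call pins it with --image); --report prints what batch would do without touching the LVs:

    import subprocess

    CEPHADM = ("/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"

    # --report turns the batch into a plan-only run: no LVM changes, no OSDs.
    subprocess.run(["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
                    "--", "lvm", "batch", "--no-auto",
                    "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
                    "/dev/ceph_vg2/ceph_lv2", "--report"],
                   check=True)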
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:10:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:10:50 compute-0 ceph-mon[191910]: pgmap v1983: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.605358255 +0000 UTC m=+0.079334554 container create 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:10:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:50 compute-0 systemd[1]: Started libpod-conmon-40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c.scope.
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.581101746 +0000 UTC m=+0.055078005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.733989529 +0000 UTC m=+0.207965798 container init 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.742805901 +0000 UTC m=+0.216782160 container start 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:10:50 compute-0 dreamy_mendeleev[464006]: 167 167
Oct 02 20:10:50 compute-0 systemd[1]: libpod-40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c.scope: Deactivated successfully.
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.752627111 +0000 UTC m=+0.226603370 container attach 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.753200256 +0000 UTC m=+0.227176515 container died 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 20:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bb17b979d737e7d1eeb4dbfd2e43ed5c80bb433f14afdc76938d5027385d7f6-merged.mount: Deactivated successfully.
Oct 02 20:10:50 compute-0 podman[463990]: 2025-10-02 20:10:50.808048793 +0000 UTC m=+0.282025052 container remove 40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:10:50 compute-0 systemd[1]: libpod-conmon-40b5530917853f4753a014833d03b83b4a0a988a7750adb21b7cad129ad2ff7c.scope: Deactivated successfully.
Oct 02 20:10:51 compute-0 podman[464028]: 2025-10-02 20:10:51.11981783 +0000 UTC m=+0.128620985 container create f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:10:51 compute-0 podman[464028]: 2025-10-02 20:10:51.043474215 +0000 UTC m=+0.052277350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:51 compute-0 systemd[1]: Started libpod-conmon-f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584.scope.
Oct 02 20:10:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:51 compute-0 podman[464028]: 2025-10-02 20:10:51.292203458 +0000 UTC m=+0.301006623 container init f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:10:51 compute-0 podman[464028]: 2025-10-02 20:10:51.310241504 +0000 UTC m=+0.319044639 container start f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 20:10:51 compute-0 podman[464028]: 2025-10-02 20:10:51.322836977 +0000 UTC m=+0.331640102 container attach f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 20:10:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:52 compute-0 practical_shannon[464044]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:10:52 compute-0 practical_shannon[464044]: --> relative data size: 1.0
Oct 02 20:10:52 compute-0 practical_shannon[464044]: --> All data devices are unavailable
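"All data devices are unavailable" after "passed data devices: 0 physical, 3 LVM" typically means the three LVs are already consumed, e.g. by OSDs prepared in an earlier pass, so this re-run is a no-op rather than a failure. One plausible check, assuming ceph-volume left its ceph.* LVM tags on the volumes when it prepared them:

    import subprocess

    # ceph-volume tags the LVs it consumes; non-empty ceph.* lv_tags on the
    # ceph_vg*/ceph_lv* volumes would explain the 'unavailable' verdict above.
    subprocess.run(["sudo", "lvs", "-o", "lv_name,vg_name,lv_tags"], check=True)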
Oct 02 20:10:52 compute-0 systemd[1]: libpod-f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584.scope: Deactivated successfully.
Oct 02 20:10:52 compute-0 systemd[1]: libpod-f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584.scope: Consumed 1.137s CPU time.
Oct 02 20:10:52 compute-0 podman[464028]: 2025-10-02 20:10:52.509456518 +0000 UTC m=+1.518259643 container died f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2e4e52e8138536c83d46fc1dc3a2600581896bbc86c329ba53ae8278a1ba3e-merged.mount: Deactivated successfully.
Oct 02 20:10:52 compute-0 podman[464028]: 2025-10-02 20:10:52.617119709 +0000 UTC m=+1.625922834 container remove f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:52 compute-0 systemd[1]: libpod-conmon-f2c07e7bf9361506eebda7b532eb317b8ff9a666de6c56e1cbcc66827454d584.scope: Deactivated successfully.
Oct 02 20:10:52 compute-0 sudo[463923]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:52 compute-0 sudo[464084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:52 compute-0 sudo[464084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:52 compute-0 sudo[464084]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:52 compute-0 ceph-mon[191910]: pgmap v1984: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:52 compute-0 sudo[464109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:10:52 compute-0 sudo[464109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:52 compute-0 sudo[464109]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:53 compute-0 sudo[464134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:53 compute-0 sudo[464134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:53 compute-0 sudo[464134]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:53 compute-0 sudo[464159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:10:53 compute-0 sudo[464159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.719872459 +0000 UTC m=+0.076970262 container create f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:10:53 compute-0 nova_compute[355794]: 2025-10-02 20:10:53.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.689792495 +0000 UTC m=+0.046890368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:53 compute-0 systemd[1]: Started libpod-conmon-f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3.scope.
Oct 02 20:10:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.873291817 +0000 UTC m=+0.230389660 container init f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.885499719 +0000 UTC m=+0.242597552 container start f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 02 20:10:53 compute-0 intelligent_murdock[464238]: 167 167
Oct 02 20:10:53 compute-0 systemd[1]: libpod-f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3.scope: Deactivated successfully.
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.892832363 +0000 UTC m=+0.249930216 container attach f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.893247083 +0000 UTC m=+0.250344896 container died f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e691d161ab793a514094dc3acaa9a6222109b388074abcab4630be828530109e-merged.mount: Deactivated successfully.
Oct 02 20:10:53 compute-0 podman[464222]: 2025-10-02 20:10:53.964020961 +0000 UTC m=+0.321118774 container remove f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:10:53 compute-0 systemd[1]: libpod-conmon-f69d6d1ff23f71cf1e2edf862e8d6f89db59541c6c6345f814a3975646ffa2e3.scope: Deactivated successfully.
Oct 02 20:10:54 compute-0 podman[464262]: 2025-10-02 20:10:54.18495289 +0000 UTC m=+0.077904086 container create ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:10:54 compute-0 podman[464262]: 2025-10-02 20:10:54.149755902 +0000 UTC m=+0.042707168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:54 compute-0 systemd[1]: Started libpod-conmon-ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9.scope.
Oct 02 20:10:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26b82d6439a22986577fa0c15bd067493117e313663e872a2bbe254afd1fed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26b82d6439a22986577fa0c15bd067493117e313663e872a2bbe254afd1fed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26b82d6439a22986577fa0c15bd067493117e313663e872a2bbe254afd1fed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26b82d6439a22986577fa0c15bd067493117e313663e872a2bbe254afd1fed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:54 compute-0 podman[464262]: 2025-10-02 20:10:54.347634192 +0000 UTC m=+0.240585388 container init ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 20:10:54 compute-0 podman[464262]: 2025-10-02 20:10:54.36991765 +0000 UTC m=+0.262868846 container start ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:10:54 compute-0 podman[464262]: 2025-10-02 20:10:54.376452583 +0000 UTC m=+0.269403779 container attach ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:10:54 compute-0 nova_compute[355794]: 2025-10-02 20:10:54.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:54 compute-0 ceph-mon[191910]: pgmap v1985: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:55 compute-0 romantic_curran[464277]: {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     "0": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "devices": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "/dev/loop3"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             ],
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_name": "ceph_lv0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_size": "21470642176",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "name": "ceph_lv0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "tags": {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_name": "ceph",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.crush_device_class": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.encrypted": "0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_id": "0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.vdo": "0"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             },
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "vg_name": "ceph_vg0"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         }
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     ],
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     "1": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "devices": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "/dev/loop4"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             ],
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_name": "ceph_lv1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_size": "21470642176",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "name": "ceph_lv1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "tags": {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_name": "ceph",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.crush_device_class": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.encrypted": "0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_id": "1",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.vdo": "0"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             },
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "vg_name": "ceph_vg1"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         }
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     ],
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     "2": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "devices": [
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "/dev/loop5"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             ],
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_name": "ceph_lv2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_size": "21470642176",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "name": "ceph_lv2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "tags": {
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.cluster_name": "ceph",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.crush_device_class": "",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.encrypted": "0",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osd_id": "2",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:                 "ceph.vdo": "0"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             },
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "type": "block",
Oct 02 20:10:55 compute-0 romantic_curran[464277]:             "vg_name": "ceph_vg2"
Oct 02 20:10:55 compute-0 romantic_curran[464277]:         }
Oct 02 20:10:55 compute-0 romantic_curran[464277]:     ]
Oct 02 20:10:55 compute-0 romantic_curran[464277]: }
Oct 02 20:10:55 compute-0 systemd[1]: libpod-ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9.scope: Deactivated successfully.
Oct 02 20:10:55 compute-0 podman[464262]: 2025-10-02 20:10:55.253532206 +0000 UTC m=+1.146483382 container died ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d26b82d6439a22986577fa0c15bd067493117e313663e872a2bbe254afd1fed3-merged.mount: Deactivated successfully.
Oct 02 20:10:55 compute-0 podman[464262]: 2025-10-02 20:10:55.484803789 +0000 UTC m=+1.377754975 container remove ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:55 compute-0 systemd[1]: libpod-conmon-ea105b7c7d2b1955c4991f465e4bc3fa2834ea6444e196e37b1d61fa52ac09c9.scope: Deactivated successfully.
Oct 02 20:10:55 compute-0 sudo[464159]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:10:55 compute-0 sudo[464300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:55 compute-0 sudo[464300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:55 compute-0 sudo[464300]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:55 compute-0 sudo[464325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:10:55 compute-0 sudo[464325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:55 compute-0 sudo[464325]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:55 compute-0 sudo[464350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:55 compute-0 sudo[464350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:55 compute-0 sudo[464350]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:56 compute-0 sudo[464375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:10:56 compute-0 sudo[464375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.626467884 +0000 UTC m=+0.120219713 container create 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.544743358 +0000 UTC m=+0.038495227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:56 compute-0 systemd[1]: Started libpod-conmon-52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489.scope.
Oct 02 20:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.781154906 +0000 UTC m=+0.274906825 container init 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.793547853 +0000 UTC m=+0.287299692 container start 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:10:56 compute-0 youthful_vaughan[464457]: 167 167
Oct 02 20:10:56 compute-0 systemd[1]: libpod-52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489.scope: Deactivated successfully.
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.812252677 +0000 UTC m=+0.306004516 container attach 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.813573172 +0000 UTC m=+0.307324991 container died 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:10:56 compute-0 ceph-mon[191910]: pgmap v1986: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcb0eaa37645fd1dbf9e39707fc941e0079adbc17a37c1fc43fe2dbe255e85ca-merged.mount: Deactivated successfully.
Oct 02 20:10:56 compute-0 podman[464456]: 2025-10-02 20:10:56.916044006 +0000 UTC m=+0.171406164 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:10:56 compute-0 podman[464438]: 2025-10-02 20:10:56.950433083 +0000 UTC m=+0.444184912 container remove 52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:10:56 compute-0 systemd[1]: libpod-conmon-52bbbaf5ff97596c6addf3dc0fe5a1d9a0bdd55b0eca73e8fcb5580ed395c489.scope: Deactivated successfully.
Oct 02 20:10:57 compute-0 podman[464459]: 2025-10-02 20:10:57.019100175 +0000 UTC m=+0.271119295 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 20:10:57 compute-0 podman[464520]: 2025-10-02 20:10:57.188645059 +0000 UTC m=+0.091542897 container create 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:10:57 compute-0 podman[464520]: 2025-10-02 20:10:57.141044063 +0000 UTC m=+0.043941921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:10:57 compute-0 systemd[1]: Started libpod-conmon-559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360.scope.
Oct 02 20:10:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45701133857ce7d24e4cca221af819a800600ad4de2beac72c626fc6186d5821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45701133857ce7d24e4cca221af819a800600ad4de2beac72c626fc6186d5821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45701133857ce7d24e4cca221af819a800600ad4de2beac72c626fc6186d5821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45701133857ce7d24e4cca221af819a800600ad4de2beac72c626fc6186d5821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:10:57 compute-0 podman[464520]: 2025-10-02 20:10:57.406703243 +0000 UTC m=+0.309601091 container init 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:10:57 compute-0 podman[464520]: 2025-10-02 20:10:57.430639805 +0000 UTC m=+0.333537623 container start 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:10:57 compute-0 podman[464520]: 2025-10-02 20:10:57.465226577 +0000 UTC m=+0.368124485 container attach 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:10:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]: {
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_id": 1,
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "type": "bluestore"
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     },
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_id": 2,
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "type": "bluestore"
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     },
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_id": 0,
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:         "type": "bluestore"
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]:     }
Oct 02 20:10:58 compute-0 vibrant_fermi[464535]: }
Oct 02 20:10:58 compute-0 systemd[1]: libpod-559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360.scope: Deactivated successfully.
Oct 02 20:10:58 compute-0 systemd[1]: libpod-559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360.scope: Consumed 1.138s CPU time.
Oct 02 20:10:58 compute-0 conmon[464535]: conmon 559f49f14ff80bd12898 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360.scope/container/memory.events
Oct 02 20:10:58 compute-0 podman[464520]: 2025-10-02 20:10:58.576523251 +0000 UTC m=+1.479421099 container died 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:10:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-45701133857ce7d24e4cca221af819a800600ad4de2beac72c626fc6186d5821-merged.mount: Deactivated successfully.
Oct 02 20:10:58 compute-0 nova_compute[355794]: 2025-10-02 20:10:58.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:58 compute-0 podman[464520]: 2025-10-02 20:10:58.880782879 +0000 UTC m=+1.783680727 container remove 559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_fermi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:10:58 compute-0 systemd[1]: libpod-conmon-559f49f14ff80bd1289819e47464861b6c9fffed032c2aa5d1f760be2e771360.scope: Deactivated successfully.
Oct 02 20:10:58 compute-0 ceph-mon[191910]: pgmap v1987: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:58 compute-0 sudo[464375]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:10:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:10:59 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:10:59 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 4f8d861e-d7f5-4f85-9745-0a6455be01ff does not exist
Oct 02 20:10:59 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 01775520-45e8-4a7e-9de8-8857faa61e34 does not exist
Oct 02 20:10:59 compute-0 sudo[464580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:10:59 compute-0 sudo[464580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:59 compute-0 sudo[464580]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:59 compute-0 sudo[464605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:10:59 compute-0 sudo[464605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:10:59 compute-0 sudo[464605]: pam_unix(sudo:session): session closed for user root
Oct 02 20:10:59 compute-0 nova_compute[355794]: 2025-10-02 20:10:59.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:10:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:10:59 compute-0 podman[157186]: time="2025-10-02T20:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:10:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:10:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9555 "" "Go-http-client/1.1"
Oct 02 20:11:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:11:00 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:11:00 compute-0 ceph-mon[191910]: pgmap v1988: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:01 compute-0 openstack_network_exporter[372736]: ERROR   20:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:11:01 compute-0 openstack_network_exporter[372736]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:11:01 compute-0 openstack_network_exporter[372736]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:11:01 compute-0 openstack_network_exporter[372736]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:11:01 compute-0 openstack_network_exporter[372736]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:11:01 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 20:11:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:02 compute-0 ceph-mon[191910]: pgmap v1989: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:11:03
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'vms']
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:11:03 compute-0 nova_compute[355794]: 2025-10-02 20:11:03.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.304 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.306 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.316 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3432388440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.320 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
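Both discovery payloads above are plain dicts. A small illustrative sketch of consuming them (hypothetical helper code, not ceilometer's), for example keeping only the instances whose vm_state is running, which is what a libvirt pollster can actually sample:

    instances = [
        {"id": "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", "name": "test_0",
         "OS-EXT-STS:vm_state": "running",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1}},
        {"id": "f50e6a55-f3b5-402b-91b2-12d34386f656",
         "OS-EXT-STS:vm_state": "running",
         "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1, "ephemeral": 0}},
    ]
    running = [i for i in instances if i["OS-EXT-STS:vm_state"] == "running"]
    for inst in running:
        print(inst["id"], inst["flavor"]["name"])
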
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.320 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:11:04.320679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
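The "heartbeat" / "Updated heartbeat" pair above comes from two different threads (14 does the polling, 12 records the status). A minimal sketch of that hand-off, assuming a simple queue between the two; ceilometer's actual mechanism may differ:

    import datetime
    import queue
    import threading

    beats = queue.Queue()
    status = {}

    def update_status():
        # Consumer thread: timestamp each heartbeat it receives.
        for name in iter(beats.get, None):
            status[name] = datetime.datetime.now(datetime.timezone.utc).isoformat()

    t = threading.Thread(target=update_status)
    t.start()
    beats.put("disk.device.read.requests")  # producer: the polling thread
    beats.put(None)                         # sentinel to stop the consumer
    t.join()
    print(status)
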
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.377 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.378 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.379 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:11:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
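The ceph-mgr rbd_support module above is reloading its mirror-snapshot and trash-purge schedules for the vms, volumes, backups, and images pools. A hypothetical spot check from the node, assuming the rbd CLI and an admin keyring are available (the commands may print nothing when no schedules are configured):

    import subprocess

    # Ask rbd for the schedules the mgr module just reloaded.
    for pool in ["vms", "volumes", "backups", "images"]:
        for sub in (["mirror", "snapshot", "schedule", "ls"],
                    ["trash", "purge", "schedule", "ls"]):
            out = subprocess.run(["rbd", *sub, "--pool", pool],
                                 capture_output=True, text=True)
            print(pool, " ".join(sub), "->", out.stdout.strip() or "(none)")
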
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.423 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1074 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.424 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
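Each DEBUG "volume:" line above is one per-device sample, so the fan-out per instance reflects its attached disks: three devices for d4e04444-... and two for f50e6a55-..., one more in each case than the flavors' disk counts, presumably a config drive (assumption). An illustrative grouping, not ceilometer's _stats_to_sample:

    from collections import Counter

    # (instance, value) pairs copied from the read.requests lines above.
    samples = [
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 840),
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 173),
        ("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", 109),
        ("f50e6a55-f3b5-402b-91b2-12d34386f656", 1074),
        ("f50e6a55-f3b5-402b-91b2-12d34386f656", 107),
    ]
    print(Counter(instance for instance, _ in samples))
    # 3 devices for the first instance, 2 for the second
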
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:11:04.426846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.462 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.463 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.463 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.480 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.480 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
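Worked check on the disk.device.usage values above, assuming the meter's unit is bytes: 1073741824 is exactly 1 GiB, matching the flavors' 1 GB root and ephemeral disks; the small ~474 KiB and ~498 KiB devices are presumably the config drives (assumption).

    # 1 GiB in bytes equals the logged disk.device.usage volume.
    assert 1 * 1024 ** 3 == 1073741824
    print(1073741824 / 1024 ** 3, "GiB")   # 1.0 GiB
    print(round(485376 / 1024, 1), "KiB")  # ~474 KiB third device
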
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:11:04.481987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.482 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.483 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.484 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.485 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61099367193 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.486 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
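The write-latency volumes above are large because, on the assumption that they are libvirt's cumulative write-time counters, they are nanoseconds accumulated since the instance booted:

    # Convert the cumulative counters above from nanoseconds to seconds.
    for ns in (7285327854, 61099367193):
        print(ns, "ns =", round(ns / 1e9, 2), "s of accumulated write time")
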
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:11:04.484861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.487 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:11:04.487560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.547 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.547 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
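power.state volume 1 for both instances matches their vm_state of running, assuming the meter reports libvirt's virDomainState codes:

    # libvirt virDomainState values (stable public enum).
    LIBVIRT_STATE = {0: "nostate", 1: "running", 2: "blocked", 3: "paused",
                     4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended"}
    print(LIBVIRT_STATE[1])  # running
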
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.549 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.549 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.549 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:11:04.548538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:11:04.550864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.555 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.559 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 1542 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
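A *.delta meter is the difference between two consecutive cumulative readings, which is why an idle interface reports 0 above. Illustrative arithmetic; the absolute readings below are made up, and only their difference, 1542, comes from the log:

    previous, current = 81236, 82778  # hypothetical cumulative byte counters
    print(current - previous)         # 1542, the delta logged for f50e6a55-...
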
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
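No per-device "volume:" lines appear for disk.ephemeral.size or disk.root.size because, by assumption, those meters come from the flavor in the discovery payload rather than from libvirt counters:

    # Sizes as carried in the discovery data above (m1.small).
    flavor = {"disk": 1, "ephemeral": 1, "swap": 0}
    print(flavor["disk"], "GB root,", flavor["ephemeral"], "GB ephemeral")
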
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.564 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.565 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.565 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.566 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.567 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.567 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:11:04.561163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:11:04.563875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:11:04.565466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:11:04.567428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.570 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.571 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.572 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.572 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.573 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.574 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.575 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.575 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 29916160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.576 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:11:04.570521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:11:04.572584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:11:04.574716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.579 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.579 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.580 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.581 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.581 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.582 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.582 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:11:04.578811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:11:04.580807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.584 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.584 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.585 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.585 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 43.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.587 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.588 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 1652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.589 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.591 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.591 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.591 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.591 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.592 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:11:04.585127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:11:04.587640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:11:04.589519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:11:04.590857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.594 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.594 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.595 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.596 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 58660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.597 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 114730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
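The cpu volumes above (58660000000 and 114730000000) are cumulative guest CPU time in nanoseconds, not percentages; a utilization figure only falls out of differencing two consecutive polls. A worked sketch, where the previous sample, the 10 s interval, and the vCPU count are assumed values, not taken from this log:

    # Utilization = delta of cumulative CPU ns between two polls,
    # divided by wall-clock interval and vCPU count.
    curr_ns = 114_730_000_000   # f50e6a55.../cpu from this poll
    prev_ns = 113_730_000_000   # assumed value from the previous poll
    interval_s = 10             # assumed polling interval
    vcpus = 1                   # assumed flavor vCPU count

    util = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)
    print(f"cpu_util = {util:.1%}")   # -> cpu_util = 10.0%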
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.599 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3039231407 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.599 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 196198639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.602 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:11:04.594120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:11:04.595546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:11:04.596856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:11:04.598127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:11:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:11:04.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
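The burst of "Finished processing pollster [...]" lines marks the end of one polling interval. When reading a journal dump like this one, a quick way to confirm which meters completed is to count those lines (the journalctl unit name in the comment is illustrative, not taken from this log):

    import re
    import sys

    # Pipe a journal dump on stdin, e.g.:
    #   journalctl -u <ceilometer-compute-unit> | python3 count_pollsters.py
    DONE = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    names = [m.group(1) for line in sys.stdin for m in DONE.finditer(line)]
    print(f"{len(names)} pollsters completed: {sorted(names)}")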
Oct 02 20:11:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 20:11:04 compute-0 nova_compute[355794]: 2025-10-02 20:11:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:04 compute-0 ceph-mon[191910]: pgmap v1990: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:05 compute-0 nova_compute[355794]: 2025-10-02 20:11:05.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:05 compute-0 nova_compute[355794]: 2025-10-02 20:11:05.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
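_reclaim_queued_deletes returns immediately here because reclaim_instance_interval sits at its disabled default, so this node never hard-deletes soft-deleted instances on its own. A minimal sketch of that guard, with CONF standing in for nova's oslo.config namespace rather than the real object:

    # CONF stands in for nova's configuration; 0 mirrors the disabled default
    # implied by the "skipping..." message above.
    class CONF:
        reclaim_instance_interval = 0

    def reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # With a positive interval, nova would look up SOFT_DELETED instances
        # older than that many seconds and delete them for real.

    reclaim_queued_deletes()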
Oct 02 20:11:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:05 compute-0 podman[464633]: 2025-10-02 20:11:05.699072452 +0000 UTC m=+0.129259001 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 20:11:05 compute-0 podman[464634]: 2025-10-02 20:11:05.739600052 +0000 UTC m=+0.160118566 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9)
Oct 02 20:11:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:06 compute-0 ceph-mon[191910]: pgmap v1991: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:07 compute-0 podman[464671]: 2025-10-02 20:11:07.689250008 +0000 UTC m=+0.111098983 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 20:11:07 compute-0 podman[464672]: 2025-10-02 20:11:07.709019709 +0000 UTC m=+0.121117147 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:11:07 compute-0 podman[464673]: 2025-10-02 20:11:07.750812552 +0000 UTC m=+0.162781726 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 20:11:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:08 compute-0 nova_compute[355794]: 2025-10-02 20:11:08.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:08 compute-0 ceph-mon[191910]: pgmap v1992: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
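The recurring pgmap lines from ceph-mon and ceph-mgr share a fixed shape: version, PG count and states, then data/used/avail figures. A small parser for that shape, assuming the format stays exactly as printed here:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v1992: 321 pgs: 321 active+clean; "
            "218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP.match(line)
    print(m.group("ver"), m.group("pgs"), m.group("used"))  # 1992 321 379 MiB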
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:09 compute-0 podman[464730]: 2025-10-02 20:11:09.698217918 +0000 UTC m=+0.115882449 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
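The node_exporter container above publishes on host port 9100 ('ports': ['9100:9100']); given the web.config.file flag and the mounted certificates it likely serves TLS, but against a plain-HTTP listener a scrape is a single GET. A sketch under that plain-HTTP assumption:

    # Assumes plain HTTP on localhost:9100; the node_exporter.yaml mounted
    # above may actually enforce TLS, in which case an https:// URL and a CA
    # bundle are needed instead.
    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        samples = [l for l in resp.read().decode().splitlines()
                   if l and not l.startswith("#")]
    print(f"scraped {len(samples)} samples")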
Oct 02 20:11:09 compute-0 podman[464729]: 2025-10-02 20:11:09.749351517 +0000 UTC m=+0.162107489 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct 02 20:11:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.884 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.885 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.887 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:11:09 compute-0 nova_compute[355794]: 2025-10-02 20:11:09.888 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:11:10 compute-0 ceph-mon[191910]: pgmap v1993: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.056 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.128 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.129 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
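The instance_info_cache entry logged at 20:11:11 above is plain JSON, so the addressing can be read out mechanically. A minimal sketch, keeping only the keys that actually appear in that log line (structure copied from the log, trimmed to the relevant fields):

    # Extract the fixed and floating IPs from the logged network_info blob.
    vif = {
        "id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
        "address": "fa:16:3e:6b:e8:fe",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.37",
                     "floating_ips": [{"address": "192.168.122.205"}]}],
        }]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip["floating_ips"]]
            print(ip["address"], "->", floating)  # 192.168.0.37 -> ['192.168.122.205']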
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.130 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.130 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.653 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.653 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.654 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
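The Acquiring/acquired/released triplets in these nova_compute lines are the stock oslo.concurrency pattern; the "waited"/"held" timings are emitted by lockutils itself. A standalone sketch of the same pattern (the function name is illustrative, not nova's actual call site):

    from oslo_concurrency import lockutils

    # Decorator form: the body runs only while "compute_resources" is held,
    # and lockutils logs acquire/release lines like the ones above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass

    # Equivalent context-manager form for ad-hoc critical sections.
    with lockutils.lock('compute_resources'):
        pass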
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.654 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:11:11 compute-0 nova_compute[355794]: 2025-10-02 20:11:11.654 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:11:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448719299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.204 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
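The "ceph df" probe is run through oslo.concurrency's processutils, which is what produces the paired "Running cmd" / "returned: 0 in 0.550s" lines above. A sketch of the same call (assumes a host with /etc/ceph/ceph.conf and the client.openstack keyring, as in this deployment):

    import json
    from oslo_concurrency import processutils

    # Same command line as logged; execute() raises on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # raw cluster capacity in bytes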
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.472 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.472 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.473 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.483 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:11:12 compute-0 nova_compute[355794]: 2025-10-02 20:11:12.484 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:11:12 compute-0 ceph-mon[191910]: pgmap v1994: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2448719299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.059 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.060 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.90974807739258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.061 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.061 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.171 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.171 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.172 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.173 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
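The used_ram figure in this final resource view reconciles with earlier lines: it is the inventory's reserved memory plus the two placement allocations logged at 20:11:13.171. A quick check (the formula is an inference; all values are copied from this log):

    reserved_mb = 512            # MEMORY_MB 'reserved' in the inventory logged below
    instances_mb = 512 + 128     # the two instance allocations logged above
    assert reserved_mb + instances_mb == 1152   # used_ram=1152MB, as reported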
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.190 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013091419019014131 of space, bias 1.0, pg target 0.39274257057042394 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
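The pg_autoscaler figures above follow a simple rule: pg target = capacity ratio x bias x the cluster's PG budget, then quantized to a power of two. The budget here works out to exactly 300, consistent with the default mon_target_pg_per_osd of 100 across what appears to be 3 OSDs (an assumption; the OSD count is not in this excerpt):

    # Reproduce two of the logged pg_autoscaler targets.
    def pg_target(capacity_ratio, bias, pg_budget=300):   # 100 per OSD * 3 OSDs (assumed)
        return capacity_ratio * bias * pg_budget

    print(pg_target(0.0013091419019014131, 1.0))   # ~0.3927 -> pool 'vms', quantized to 32
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061 -> 'cephfs.cephfs.meta'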
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.209 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.209 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.225 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.249 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.310 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:11:13 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497279882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.834 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:13 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3497279882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.845 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.866 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
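Placement turns an inventory like the one above into schedulable capacity as (total - reserved) * allocation_ratio, so the numbers logged here imply 32 VCPUs, 7167 MB of RAM and 52.2 GB of disk available for claims:

    # Capacity implied by the logged inventory (formula per placement's
    # usage accounting: capacity = (total - reserved) * allocation_ratio).
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])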
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.867 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:11:13 compute-0 nova_compute[355794]: 2025-10-02 20:11:13.868 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:14 compute-0 nova_compute[355794]: 2025-10-02 20:11:14.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:14 compute-0 ceph-mon[191910]: pgmap v1995: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:15 compute-0 nova_compute[355794]: 2025-10-02 20:11:15.865 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:15 compute-0 nova_compute[355794]: 2025-10-02 20:11:15.866 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:16 compute-0 nova_compute[355794]: 2025-10-02 20:11:16.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:16 compute-0 ceph-mon[191910]: pgmap v1996: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:18 compute-0 nova_compute[355794]: 2025-10-02 20:11:18.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:18 compute-0 ceph-mon[191910]: pgmap v1997: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:19 compute-0 nova_compute[355794]: 2025-10-02 20:11:19.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:19 compute-0 podman[464814]: 2025-10-02 20:11:19.707522882 +0000 UTC m=+0.138267610 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:11:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:11:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408831950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:11:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:11:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408831950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:11:20 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:11:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:20 compute-0 ceph-mon[191910]: pgmap v1998: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3408831950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:11:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3408831950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
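The mon audit lines show the exact JSON command format the monitor dispatches. The same commands can be issued directly through librados — a sketch, assuming the ceph.conf and client.openstack keyring used throughout this log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    # Same payloads as in the audit log above.
    ret, df_out, err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    ret, quota_out, err = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                    "format": "json"}), b'')
    cluster.shutdown()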
Oct 02 20:11:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:23 compute-0 ceph-mon[191910]: pgmap v1999: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:23 compute-0 nova_compute[355794]: 2025-10-02 20:11:23.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:24 compute-0 ceph-mon[191910]: pgmap v2000: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:24 compute-0 nova_compute[355794]: 2025-10-02 20:11:24.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:26 compute-0 ceph-mon[191910]: pgmap v2001: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:27 compute-0 podman[464834]: 2025-10-02 20:11:27.65267204 +0000 UTC m=+0.083945826 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:11:27 compute-0 podman[464835]: 2025-10-02 20:11:27.666434893 +0000 UTC m=+0.089481972 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 20:11:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:28 compute-0 nova_compute[355794]: 2025-10-02 20:11:28.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:28 compute-0 ceph-mon[191910]: pgmap v2002: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:29 compute-0 nova_compute[355794]: 2025-10-02 20:11:29.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:29 compute-0 podman[157186]: time="2025-10-02T20:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:11:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:11:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9565 "" "Go-http-client/1.1"
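These two podman[157186] entries are the API service's access log: a client (here, podman_exporter via its CONTAINER_HOST socket) is polling the libpod REST endpoints. A minimal stdlib sketch of the same GET (socket path and endpoint taken from this log):

    import json
    import socket

    def libpod_get(path, sock="/run/podman/podman.sock"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock)
        # HTTP/1.0 so the server closes the connection when the body is done.
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: d\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(65536):
            data += chunk
        s.close()
        return json.loads(data.partition(b"\r\n\r\n")[2])

    containers = libpod_get("/v4.9.3/libpod/containers/json?all=true")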
Oct 02 20:11:30 compute-0 ceph-mon[191910]: pgmap v2003: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:31 compute-0 openstack_network_exporter[372736]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:11:31 compute-0 openstack_network_exporter[372736]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:11:31 compute-0 openstack_network_exporter[372736]: ERROR   20:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:11:31 compute-0 openstack_network_exporter[372736]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:11:31 compute-0 openstack_network_exporter[372736]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:11:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.203 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.204 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.221 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.286 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.287 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.297 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.297 2 INFO nova.compute.claims [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Claim successful on node compute-0.ctlplane.example.com
Oct 02 20:11:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:32.326 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:32.327 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:32.328 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.415 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.575 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.576 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.576 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.577 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.577 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.577 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.610 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.626 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.627 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Image id fe71959f-8f59-4b45-ae05-4216d5f12fab yields fingerprint 5791872ee933d4d58fd9e831120a99fbea624bcf _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.627 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] image fe71959f-8f59-4b45-ae05-4216d5f12fab at (/var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf): checking
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.628 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] image fe71959f-8f59-4b45-ae05-4216d5f12fab at (/var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.631 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.631 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Image id ce28338d-119e-49e1-ab67-60da8882593a yields fingerprint 29c290047b888f2c82efe3bcb0c2a3e42b009a3e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.632 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] image ce28338d-119e-49e1-ab67-60da8882593a at (/var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e): checking
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.632 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] image ce28338d-119e-49e1-ab67-60da8882593a at (/var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.634 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.634 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] f50e6a55-f3b5-402b-91b2-12d34386f656 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.634 2 WARNING nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.635 2 WARNING nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.635 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Active base files: /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf /var/lib/nova/instances/_base/29c290047b888f2c82efe3bcb0c2a3e42b009a3e
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.635 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Removable base files: /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233 /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.636 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/4e913ffd3c43828864a77c577e0c9e3c7f1ca233
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.636 2 INFO nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/0c456b520d71abe557f4853537116bdcc2ff0a79
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.637 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.637 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.637 2 DEBUG nova.virt.libvirt.imagecache [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
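The "yields fingerprint" lines in this image-cache pass are just SHA-1 over the image id: note that the empty id maps to da39a3ee..., the SHA-1 of the empty string, exactly as logged. A one-line check:

    import hashlib

    def cache_fname(image_id):
        # the cache filename is the hex SHA-1 digest of the image id
        return hashlib.sha1(image_id.encode('utf-8')).hexdigest()

    assert cache_fname('') == 'da39a3ee5e6b4b0d3255bfef95601890afd80709'
    # the log pairs fe71959f-... with 5791872ee933d4d58fd9e831120a99fbea624bcf
    print(cache_fname('fe71959f-8f59-4b45-ae05-4216d5f12fab'))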
Oct 02 20:11:32 compute-0 ceph-mon[191910]: pgmap v2004: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:11:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872153891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.936 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.947 2 DEBUG nova.compute.provider_tree [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.966 2 DEBUG nova.scheduler.client.report [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.990 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:32 compute-0 nova_compute[355794]: 2025-10-02 20:11:32.991 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.042 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.043 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.069 2 INFO nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.091 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.185 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.187 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.188 2 INFO nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Creating image(s)
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.227 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.278 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.320 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.328 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.424 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.426 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "5791872ee933d4d58fd9e831120a99fbea624bcf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.427 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "5791872ee933d4d58fd9e831120a99fbea624bcf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.428 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "5791872ee933d4d58fd9e831120a99fbea624bcf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.475 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.500 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.569 2 DEBUG nova.policy [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 20:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:11:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:11:33 compute-0 nova_compute[355794]: 2025-10-02 20:11:33.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/872153891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:11:34 compute-0 nova_compute[355794]: 2025-10-02 20:11:34.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:34 compute-0 nova_compute[355794]: 2025-10-02 20:11:34.899 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5791872ee933d4d58fd9e831120a99fbea624bcf 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:34 compute-0 ceph-mon[191910]: pgmap v2005: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.063 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] resizing rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.615 2 DEBUG nova.objects.instance [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'migration_context' on Instance uuid 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.631 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.632 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Ensure instance console log exists: /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.633 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.634 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.634 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 246 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Oct 02 20:11:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:35.981 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:11:35 compute-0 nova_compute[355794]: 2025-10-02 20:11:35.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:35.984 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:11:36 compute-0 ceph-mon[191910]: pgmap v2006: 321 pgs: 321 active+clean; 246 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.096 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Successfully created port: 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 20:11:36 compute-0 podman[465063]: 2025-10-02 20:11:36.675548846 +0000 UTC m=+0.100601146 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, version=9.4, container_name=kepler, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:11:36 compute-0 podman[465062]: 2025-10-02 20:11:36.716514187 +0000 UTC m=+0.137380606 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.794 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Successfully updated port: 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.817 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.818 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquired lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.819 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.890 2 DEBUG nova.compute.manager [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-changed-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.891 2 DEBUG nova.compute.manager [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Refreshing instance network info cache due to event network-changed-8a7a2e73-aec8-473f-8f6e-6da1c63ae426. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 20:11:36 compute-0 nova_compute[355794]: 2025-10-02 20:11:36.892 2 DEBUG oslo_concurrency.lockutils [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:11:37 compute-0 nova_compute[355794]: 2025-10-02 20:11:37.468 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 20:11:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 255 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 24 op/s
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.356 2 DEBUG nova.network.neutron [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.378 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Releasing lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.379 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Instance network_info: |[{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.381 2 DEBUG oslo_concurrency.lockutils [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquired lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.382 2 DEBUG nova.network.neutron [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Refreshing network info cache for port 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.387 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Start _get_guest_xml network_info=[{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:08:43Z,direct_url=<?>,disk_format='qcow2',id=fe71959f-8f59-4b45-ae05-4216d5f12fab,min_disk=0,min_ram=0,name='tempest-scenario-img--1806953314',owner='16e65e6cbbf848e5bb5755e6da3b1d33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:08:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'image_id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.398 2 WARNING nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.410 2 DEBUG nova.virt.libvirt.host [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.411 2 DEBUG nova.virt.libvirt.host [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.417 2 DEBUG nova.virt.libvirt.host [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.418 2 DEBUG nova.virt.libvirt.host [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.419 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.419 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T20:04:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2a4d7fef-934e-4921-8c3b-c6783966faa5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T20:08:43Z,direct_url=<?>,disk_format='qcow2',id=fe71959f-8f59-4b45-ae05-4216d5f12fab,min_disk=0,min_ram=0,name='tempest-scenario-img--1806953314',owner='16e65e6cbbf848e5bb5755e6da3b1d33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T20:08:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.420 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.421 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.421 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.422 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.422 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.423 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.423 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.424 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.424 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.425 2 DEBUG nova.virt.hardware [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.428 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:38 compute-0 podman[465099]: 2025-10-02 20:11:38.67215837 +0000 UTC m=+0.097152035 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:11:38 compute-0 podman[465100]: 2025-10-02 20:11:38.704272437 +0000 UTC m=+0.118683333 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 20:11:38 compute-0 podman[465101]: 2025-10-02 20:11:38.7180229 +0000 UTC m=+0.129661243 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:11:38 compute-0 nova_compute[355794]: 2025-10-02 20:11:38.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:11:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428430658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:11:39 compute-0 ceph-mon[191910]: pgmap v2007: 321 pgs: 321 active+clean; 255 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 24 op/s
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.054 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.117 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.128 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 20:11:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004639481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.680 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.683 2 DEBUG nova.virt.libvirt.vif [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:11:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',id=15,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-ixmbsl0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:11:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=03794a5e-b5ab-4b9e-8052-6de08e4c9f84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.684 2 DEBUG nova.network.os_vif_util [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.686 2 DEBUG nova.network.os_vif_util [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.689 2 DEBUG nova.objects.instance [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.707 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] End _get_guest_xml xml=<domain type="kvm">
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <uuid>03794a5e-b5ab-4b9e-8052-6de08e4c9f84</uuid>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <name>instance-0000000f</name>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <memory>131072</memory>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <vcpu>1</vcpu>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <metadata>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:name>te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr</nova:name>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:creationTime>2025-10-02 20:11:38</nova:creationTime>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:flavor name="m1.nano">
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:memory>128</nova:memory>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:disk>1</nova:disk>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:swap>0</nova:swap>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:vcpus>1</nova:vcpus>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </nova:flavor>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:owner>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:user uuid="e5d4abc29b2e475e9c7c54249ca341c4">tempest-PrometheusGabbiTest-1246773106-project-member</nova:user>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:project uuid="16e65e6cbbf848e5bb5755e6da3b1d33">tempest-PrometheusGabbiTest-1246773106</nova:project>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </nova:owner>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:root type="image" uuid="fe71959f-8f59-4b45-ae05-4216d5f12fab"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <nova:ports>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <nova:port uuid="8a7a2e73-aec8-473f-8f6e-6da1c63ae426">
Oct 02 20:11:39 compute-0 nova_compute[355794]:           <nova:ip type="fixed" address="10.100.3.13" ipVersion="4"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:         </nova:port>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </nova:ports>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </nova:instance>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </metadata>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <sysinfo type="smbios">
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <system>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="manufacturer">RDO</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="product">OpenStack Compute</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="serial">03794a5e-b5ab-4b9e-8052-6de08e4c9f84</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="uuid">03794a5e-b5ab-4b9e-8052-6de08e4c9f84</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <entry name="family">Virtual Machine</entry>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </system>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </sysinfo>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <os>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <boot dev="hd"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <smbios mode="sysinfo"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </os>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <features>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <acpi/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <apic/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <vmcoreinfo/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </features>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <clock offset="utc">
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <timer name="hpet" present="no"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </clock>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <cpu mode="host-model" match="exact">
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </cpu>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   <devices>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <disk type="network" device="disk">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk">
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </source>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <target dev="vda" bus="virtio"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <disk type="network" device="cdrom">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <driver type="raw" cache="none"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <source protocol="rbd" name="vms/03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config">
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <host name="192.168.122.100" port="6789"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </source>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <auth username="openstack">
Oct 02 20:11:39 compute-0 nova_compute[355794]:         <secret type="ceph" uuid="6019f664-a1c2-5955-8391-692cb79a59f9"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       </auth>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <target dev="sda" bus="sata"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </disk>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <interface type="ethernet">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <mac address="fa:16:3e:a4:22:b0"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <mtu size="1442"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <target dev="tap8a7a2e73-ae"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </interface>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <serial type="pty">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <log file="/var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/console.log" append="off"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </serial>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <video>
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <model type="virtio"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </video>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <input type="tablet" bus="usb"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <rng model="virtio">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <backend model="random">/dev/urandom</backend>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </rng>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <controller type="usb" index="0"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     <memballoon model="virtio">
Oct 02 20:11:39 compute-0 nova_compute[355794]:       <stats period="10"/>
Oct 02 20:11:39 compute-0 nova_compute[355794]:     </memballoon>
Oct 02 20:11:39 compute-0 nova_compute[355794]:   </devices>
Oct 02 20:11:39 compute-0 nova_compute[355794]: </domain>
Oct 02 20:11:39 compute-0 nova_compute[355794]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.724 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Preparing to wait for external event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.725 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.725 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.726 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.727 2 DEBUG nova.virt.libvirt.vif [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T20:11:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',id=15,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-ixmbsl0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T20:11:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=03794a5e-b5ab-4b9e-8052-6de08e4c9f84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.727 2 DEBUG nova.network.os_vif_util [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.728 2 DEBUG nova.network.os_vif_util [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.729 2 DEBUG os_vif [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.731 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a7a2e73-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.741 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a7a2e73-ae, col_values=(('external_ids', {'iface-id': '8a7a2e73-aec8-473f-8f6e-6da1c63ae426', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:22:b0', 'vm-uuid': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:39 compute-0 NetworkManager[44968]: <info>  [1759435899.7451] manager: (tap8a7a2e73-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.758 2 INFO os_vif [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae')
Oct 02 20:11:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.896 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.897 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.898 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] No VIF found with MAC fa:16:3e:a4:22:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.899 2 INFO nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Using config drive
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.945 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.972 2 DEBUG nova.network.neutron [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updated VIF entry in instance network info cache for port 8a7a2e73-aec8-473f-8f6e-6da1c63ae426. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 20:11:39 compute-0 nova_compute[355794]: 2025-10-02 20:11:39.973 2 DEBUG nova.network.neutron [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.005 2 DEBUG oslo_concurrency.lockutils [req-ca58dff4-c7aa-44e4-a026-1c52a0cbf09a req-7d1ab886-470f-4278-9056-f23eeabc722f 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Releasing lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:11:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1428430658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:11:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2004639481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 20:11:40 compute-0 ceph-mon[191910]: pgmap v2008: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.440 2 INFO nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Creating config drive at /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.454 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae5n3rih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.609 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpae5n3rih" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.666 2 DEBUG nova.storage.rbd_utils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] rbd image 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 20:11:40 compute-0 nova_compute[355794]: 2025-10-02 20:11:40.689 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:11:40 compute-0 podman[465242]: 2025-10-02 20:11:40.709320335 +0000 UTC m=+0.123583353 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:11:40 compute-0 podman[465241]: 2025-10-02 20:11:40.715892187 +0000 UTC m=+0.138584007 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 02 20:11:41 compute-0 nova_compute[355794]: 2025-10-02 20:11:41.664 2 DEBUG oslo_concurrency.processutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config 03794a5e-b5ab-4b9e-8052-6de08e4c9f84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:11:41 compute-0 nova_compute[355794]: 2025-10-02 20:11:41.665 2 INFO nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Deleting local config drive /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.config because it was imported into RBD.
Oct 02 20:11:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 20:11:41 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 20:11:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 20:11:41 compute-0 kernel: tap8a7a2e73-ae: entered promiscuous mode
Oct 02 20:11:41 compute-0 ovn_controller[88435]: 2025-10-02T20:11:41Z|00179|binding|INFO|Claiming lport 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 for this chassis.
Oct 02 20:11:41 compute-0 ovn_controller[88435]: 2025-10-02T20:11:41Z|00180|binding|INFO|8a7a2e73-aec8-473f-8f6e-6da1c63ae426: Claiming fa:16:3e:a4:22:b0 10.100.3.13
Oct 02 20:11:41 compute-0 nova_compute[355794]: 2025-10-02 20:11:41.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:41 compute-0 NetworkManager[44968]: <info>  [1759435901.8801] manager: (tap8a7a2e73-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.881 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:22:b0 10.100.3.13'], port_security=['fa:16:3e:a4:22:b0 10.100.3.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.13/16', 'neutron:device_id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0deafed-687f-4945-b8e7-38e6d324244b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d0acfe3-81ce-4e08-8e78-709b63816024', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cba1ebe5-3c4d-41f0-9003-ea3a824c4dce, chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=8a7a2e73-aec8-473f-8f6e-6da1c63ae426) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.884 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 in datapath f0deafed-687f-4945-b8e7-38e6d324244b bound to our chassis
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.887 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0deafed-687f-4945-b8e7-38e6d324244b
Oct 02 20:11:41 compute-0 ovn_controller[88435]: 2025-10-02T20:11:41Z|00181|binding|INFO|Setting lport 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 ovn-installed in OVS
Oct 02 20:11:41 compute-0 ovn_controller[88435]: 2025-10-02T20:11:41Z|00182|binding|INFO|Setting lport 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 up in Southbound
Oct 02 20:11:41 compute-0 nova_compute[355794]: 2025-10-02 20:11:41.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:41 compute-0 nova_compute[355794]: 2025-10-02 20:11:41.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.922 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8ac12d-00c5-42ac-9529-480ce4068393]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:41 compute-0 systemd-udevd[465352]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 20:11:41 compute-0 systemd-machined[137646]: New machine qemu-16-instance-0000000f.
Oct 02 20:11:41 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Oct 02 20:11:41 compute-0 NetworkManager[44968]: <info>  [1759435901.9528] device (tap8a7a2e73-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 20:11:41 compute-0 NetworkManager[44968]: <info>  [1759435901.9584] device (tap8a7a2e73-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.964 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[44a6bf47-0541-4557-9eae-5d8682befdff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:41 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:41.968 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[130ead48-54ed-4307-8f90-07a0390da753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.013 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[462cac6b-2d1f-435a-acfd-e128e4945ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.038 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2c717f-cd3d-4196-a492-57ae7db10c68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0deafed-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:dd:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691341, 'reachable_time': 41906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 465361, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.062 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[4b41d852-8379-4616-8c6c-0effba847642]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0deafed-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691359, 'tstamp': 691359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 465364, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapf0deafed-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691364, 'tstamp': 691364}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 465364, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.065 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0deafed-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.076 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0deafed-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.076 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.077 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0deafed-60, col_values=(('external_ids', {'iface-id': 'ad4572b7-e012-418a-9c6b-97a8e10ee248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:42.078 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.109 2 DEBUG nova.compute.manager [req-f55e536d-4360-4e3a-8770-65e0f207ebc7 req-21ff15bf-2244-4ffc-8f0a-b47d00f57149 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.110 2 DEBUG oslo_concurrency.lockutils [req-f55e536d-4360-4e3a-8770-65e0f207ebc7 req-21ff15bf-2244-4ffc-8f0a-b47d00f57149 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.111 2 DEBUG oslo_concurrency.lockutils [req-f55e536d-4360-4e3a-8770-65e0f207ebc7 req-21ff15bf-2244-4ffc-8f0a-b47d00f57149 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.112 2 DEBUG oslo_concurrency.lockutils [req-f55e536d-4360-4e3a-8770-65e0f207ebc7 req-21ff15bf-2244-4ffc-8f0a-b47d00f57149 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:42 compute-0 nova_compute[355794]: 2025-10-02 20:11:42.113 2 DEBUG nova.compute.manager [req-f55e536d-4360-4e3a-8770-65e0f207ebc7 req-21ff15bf-2244-4ffc-8f0a-b47d00f57149 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Processing event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 20:11:43 compute-0 ceph-mon[191910]: pgmap v2009: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 20:11:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.503 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.504 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435903.5025556, 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.504 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] VM Started (Lifecycle Event)
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.514 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.526 2 INFO nova.virt.libvirt.driver [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Instance spawned successfully.
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.526 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.544 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.554 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:11:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.571 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.572 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.573 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.573 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.574 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.575 2 DEBUG nova.virt.libvirt.driver [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.628 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.629 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435903.5026834, 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.629 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] VM Paused (Lifecycle Event)
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.721 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.731 2 DEBUG nova.virt.driver [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] Emitting event <LifecycleEvent: 1759435903.5104203, 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.732 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] VM Resumed (Lifecycle Event)
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.758 2 INFO nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Took 10.57 seconds to spawn the instance on the hypervisor.
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.759 2 DEBUG nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:11:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.802 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.808 2 DEBUG nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.894 2 INFO nova.compute.manager [None req-03dfb53a-b913-4d5f-822f-8387649bc1ca - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 20:11:43 compute-0 nova_compute[355794]: 2025-10-02 20:11:43.938 2 INFO nova.compute.manager [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Took 11.68 seconds to build instance.
Oct 02 20:11:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:11:43.988 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.114 2 DEBUG oslo_concurrency.lockutils [None req-04492cc2-6664-41c8-91bb-fe00bcbc0850 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:44 compute-0 ceph-mon[191910]: pgmap v2010: 321 pgs: 321 active+clean; 264 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.881 2 DEBUG nova.compute.manager [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.882 2 DEBUG oslo_concurrency.lockutils [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.882 2 DEBUG oslo_concurrency.lockutils [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.883 2 DEBUG oslo_concurrency.lockutils [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.883 2 DEBUG nova.compute.manager [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] No waiting events found dispatching network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:11:44 compute-0 nova_compute[355794]: 2025-10-02 20:11:44.884 2 WARNING nova.compute.manager [req-c23f6a55-db64-499b-824f-5eb4aa62068d req-0351a485-38a3-4461-a355-44f947ce5fd6 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received unexpected event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 for instance with vm_state active and task_state None.
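The lockutils lines above trace the pattern around external events: a lock named "<uuid>-events" is acquired, the pending event is popped, and a warning is logged when nothing was waiting (the instance is already active). A compact sketch of that serialize-then-pop pattern using oslo.concurrency's lockutils.lock context manager; the event store and function body here are hypothetical stand-ins:

    # Serialize-then-pop pattern for per-instance events; assumes
    # oslo.concurrency is installed. pending_events is a stand-in store.
    from oslo_concurrency import lockutils

    pending_events = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f"{instance_uuid}-events"):
            waiter = pending_events.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # Mirrors "No waiting events found dispatching network-vif-plugged-..."
            print(f"No waiting events found dispatching {event_name}")
        return waiter
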
Oct 02 20:11:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 20:11:46 compute-0 ceph-mon[191910]: pgmap v2011: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 02 20:11:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 543 KiB/s wr, 46 op/s
Oct 02 20:11:48 compute-0 ceph-mon[191910]: pgmap v2012: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 543 KiB/s wr, 46 op/s
Oct 02 20:11:49 compute-0 nova_compute[355794]: 2025-10-02 20:11:49.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:49 compute-0 nova_compute[355794]: 2025-10-02 20:11:49.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 926 KiB/s rd, 326 KiB/s wr, 42 op/s
Oct 02 20:11:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:50 compute-0 podman[465427]: 2025-10-02 20:11:50.718138066 +0000 UTC m=+0.132383975 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 20:11:50 compute-0 ceph-mon[191910]: pgmap v2013: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 926 KiB/s rd, 326 KiB/s wr, 42 op/s
Oct 02 20:11:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 20:11:52 compute-0 ceph-mon[191910]: pgmap v2014: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 20:11:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 20:11:54 compute-0 nova_compute[355794]: 2025-10-02 20:11:54.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:54 compute-0 nova_compute[355794]: 2025-10-02 20:11:54.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:54 compute-0 ceph-mon[191910]: pgmap v2015: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 20:11:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:11:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Oct 02 20:11:56 compute-0 ceph-mon[191910]: pgmap v2016: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Oct 02 20:11:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 67 op/s
Oct 02 20:11:58 compute-0 podman[465449]: 2025-10-02 20:11:58.65523001 +0000 UTC m=+0.088266189 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 20:11:58 compute-0 podman[465448]: 2025-10-02 20:11:58.656019631 +0000 UTC m=+0.094670818 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:11:58 compute-0 ceph-mon[191910]: pgmap v2017: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 67 op/s
Oct 02 20:11:59 compute-0 sudo[465489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:11:59 compute-0 sudo[465489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:11:59 compute-0 sudo[465489]: pam_unix(sudo:session): session closed for user root
Oct 02 20:11:59 compute-0 sudo[465514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:11:59 compute-0 sudo[465514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:11:59 compute-0 sudo[465514]: pam_unix(sudo:session): session closed for user root
Oct 02 20:11:59 compute-0 nova_compute[355794]: 2025-10-02 20:11:59.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:59 compute-0 sudo[465539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:11:59 compute-0 sudo[465539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:11:59 compute-0 sudo[465539]: pam_unix(sudo:session): session closed for user root
Oct 02 20:11:59 compute-0 podman[157186]: time="2025-10-02T20:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:11:59 compute-0 nova_compute[355794]: 2025-10-02 20:11:59.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:11:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:11:59 compute-0 sudo[465564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:11:59 compute-0 sudo[465564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:11:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 B/s wr, 87 op/s
Oct 02 20:11:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9565 "" "Go-http-client/1.1"
Oct 02 20:12:00 compute-0 ceph-mon[191910]: pgmap v2018: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 B/s wr, 87 op/s
Oct 02 20:12:00 compute-0 sudo[465564]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 57c8d1d1-84f7-4abe-a848-9d2219c4f45a does not exist
Oct 02 20:12:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6c2baa90-4384-4a07-8984-561017e90873 does not exist
Oct 02 20:12:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a0d17ecd-660e-464f-a7d1-262bee948f07 does not exist
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:12:00 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:12:00 compute-0 sudo[465620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:00 compute-0 sudo[465620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:00 compute-0 sudo[465620]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:00 compute-0 sudo[465645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:12:00 compute-0 sudo[465645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:00 compute-0 sudo[465645]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:00 compute-0 sudo[465670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:00 compute-0 sudo[465670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:00 compute-0 sudo[465670]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:00 compute-0 sudo[465695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:12:00 compute-0 sudo[465695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:12:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
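The audit entries above show cephadm's mgr issuing mon commands such as {"prefix": "config generate-minimal-conf"}. The same call can be made from Python with python-rados' mon_command(); this sketch assumes /etc/ceph/ceph.conf and a keyring with sufficient caps are readable:

    # Issue one of the audited mon commands directly via python-rados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(outbuf.decode())  # the minimal ceph.conf the mgr received
        else:
            print(f"mon_command failed: ret={ret} {outs}")
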
Oct 02 20:12:01 compute-0 podman[465759]: 2025-10-02 20:12:01.273627812 +0000 UTC m=+0.079081958 container create 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:12:01 compute-0 podman[465759]: 2025-10-02 20:12:01.242688906 +0000 UTC m=+0.048143082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:01 compute-0 systemd[1]: Started libpod-conmon-07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a.scope.
Oct 02 20:12:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:01 compute-0 podman[465759]: 2025-10-02 20:12:01.415692311 +0000 UTC m=+0.221146447 container init 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 20:12:01 compute-0 openstack_network_exporter[372736]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:12:01 compute-0 openstack_network_exporter[372736]: ERROR   20:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:12:01 compute-0 openstack_network_exporter[372736]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:12:01 compute-0 openstack_network_exporter[372736]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:12:01 compute-0 openstack_network_exporter[372736]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:12:01 compute-0 podman[465759]: 2025-10-02 20:12:01.427679357 +0000 UTC m=+0.233133483 container start 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:12:01 compute-0 podman[465759]: 2025-10-02 20:12:01.432766241 +0000 UTC m=+0.238220397 container attach 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 02 20:12:01 compute-0 agitated_poitras[465775]: 167 167
Oct 02 20:12:01 compute-0 systemd[1]: libpod-07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a.scope: Deactivated successfully.
Oct 02 20:12:01 compute-0 conmon[465775]: conmon 07e50db39e913059240f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a.scope/container/memory.events
Oct 02 20:12:01 compute-0 podman[465780]: 2025-10-02 20:12:01.529002421 +0000 UTC m=+0.041057125 container died 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 02 20:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae3774b5ac4574e4478ab8e19157abf3bbed8ef687791c201acfcf5c1aeaef90-merged.mount: Deactivated successfully.
Oct 02 20:12:01 compute-0 podman[465780]: 2025-10-02 20:12:01.587183886 +0000 UTC m=+0.099238570 container remove 07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:12:01 compute-0 systemd[1]: libpod-conmon-07e50db39e913059240fc80255ae484a5e6595fd2dfc551391ff7453d32cea0a.scope: Deactivated successfully.
Oct 02 20:12:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 0 B/s wr, 98 op/s
Oct 02 20:12:01 compute-0 podman[465800]: 2025-10-02 20:12:01.869573457 +0000 UTC m=+0.099673451 container create 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:12:01 compute-0 podman[465800]: 2025-10-02 20:12:01.83898921 +0000 UTC m=+0.069089284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:01 compute-0 systemd[1]: Started libpod-conmon-041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d.scope.
Oct 02 20:12:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:01 compute-0 podman[465800]: 2025-10-02 20:12:01.994301719 +0000 UTC m=+0.224401723 container init 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:12:02 compute-0 podman[465800]: 2025-10-02 20:12:02.017592983 +0000 UTC m=+0.247692977 container start 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:12:02 compute-0 podman[465800]: 2025-10-02 20:12:02.025153283 +0000 UTC m=+0.255253277 container attach 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:12:02 compute-0 ceph-mon[191910]: pgmap v2019: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 0 B/s wr, 98 op/s
Oct 02 20:12:03 compute-0 focused_boyd[465815]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:12:03 compute-0 focused_boyd[465815]: --> relative data size: 1.0
Oct 02 20:12:03 compute-0 focused_boyd[465815]: --> All data devices are unavailable
Oct 02 20:12:03 compute-0 systemd[1]: libpod-041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d.scope: Deactivated successfully.
Oct 02 20:12:03 compute-0 podman[465800]: 2025-10-02 20:12:03.424795564 +0000 UTC m=+1.654895568 container died 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:12:03 compute-0 systemd[1]: libpod-041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d.scope: Consumed 1.309s CPU time.
Oct 02 20:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7de3945772a624fad5ef4c3b154174cbe05033ef9a24687998a7750f9b426517-merged.mount: Deactivated successfully.
Oct 02 20:12:03 compute-0 podman[465800]: 2025-10-02 20:12:03.547750189 +0000 UTC m=+1.777850193 container remove 041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:12:03 compute-0 systemd[1]: libpod-conmon-041ab1f7837ca39f59538cff5b5a45057aa99f879c80009ea02840b46dc6693d.scope: Deactivated successfully.
Oct 02 20:12:03 compute-0 sudo[465695]: pam_unix(sudo:session): session closed for user root
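The lvm batch run above exited after "--> All data devices are unavailable", most likely because the three LVs already carry OSD tags, as the lvm list output further below shows (ceph.osd_id 0, 1 and 2). A short sketch that parses the same `ceph-volume lvm list --format json` output to report which LVs are already prepared; run it on the host or inside the ceph container, wherever ceph-volume is available:

    # Parse `ceph-volume lvm list --format json` (the command cephadm runs a
    # few lines below) and report LVs that already belong to an OSD, the
    # likely reason `lvm batch` saw no usable data devices.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid', '?')}) already prepared")
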
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:12:03
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', 'backups']
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:12:03 compute-0 sudo[465857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:03 compute-0 sudo[465857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:03 compute-0 sudo[465857]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:03 compute-0 sudo[465882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:12:03 compute-0 sudo[465882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:03 compute-0 sudo[465882]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 20:12:03 compute-0 sudo[465907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:03 compute-0 sudo[465907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:03 compute-0 sudo[465907]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:04 compute-0 sudo[465932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:12:04 compute-0 sudo[465932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:12:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:12:04 compute-0 podman[465997]: 2025-10-02 20:12:04.630194301 +0000 UTC m=+0.082352214 container create ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:12:04 compute-0 podman[465997]: 2025-10-02 20:12:04.598338491 +0000 UTC m=+0.050496454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:04 compute-0 nova_compute[355794]: 2025-10-02 20:12:04.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:04 compute-0 systemd[1]: Started libpod-conmon-ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c.scope.
Oct 02 20:12:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:04 compute-0 nova_compute[355794]: 2025-10-02 20:12:04.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:04 compute-0 podman[465997]: 2025-10-02 20:12:04.779616034 +0000 UTC m=+0.231773997 container init ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:12:04 compute-0 podman[465997]: 2025-10-02 20:12:04.799432347 +0000 UTC m=+0.251590240 container start ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:12:04 compute-0 podman[465997]: 2025-10-02 20:12:04.805083696 +0000 UTC m=+0.257241659 container attach ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:12:04 compute-0 stoic_herschel[466013]: 167 167
Oct 02 20:12:04 compute-0 systemd[1]: libpod-ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c.scope: Deactivated successfully.
Oct 02 20:12:04 compute-0 ceph-mon[191910]: pgmap v2020: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 20:12:04 compute-0 podman[466018]: 2025-10-02 20:12:04.89163576 +0000 UTC m=+0.047913905 container died ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 02 20:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d517da0f312b25702d84d0a468c823d4ce7b0fff286e964ee3dc030baff1d5d-merged.mount: Deactivated successfully.
Oct 02 20:12:04 compute-0 podman[466018]: 2025-10-02 20:12:04.962143711 +0000 UTC m=+0.118421786 container remove ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 20:12:04 compute-0 systemd[1]: libpod-conmon-ea4ecf0aab22df5577e22fe8d0a785ff6845698e2e0f6b40964a0e54cd8fe60c.scope: Deactivated successfully.
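The create/init/start/attach/died/remove lifecycle that podman journals for each short-lived cephadm container can also be followed live from the events stream; a minimal wrapper around `podman events --format json` (field names as emitted by podman 4.x, to the best of this sketch's assumptions):

    # Stream podman lifecycle events as JSON lines; Status carries the phase
    # (create/init/start/died/remove) seen in the journal above.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Type"), ev.get("Status"), ev.get("Name"))
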
Oct 02 20:12:05 compute-0 podman[466040]: 2025-10-02 20:12:05.282586456 +0000 UTC m=+0.069717851 container create 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:12:05 compute-0 podman[466040]: 2025-10-02 20:12:05.256612341 +0000 UTC m=+0.043743756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:05 compute-0 systemd[1]: Started libpod-conmon-76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7.scope.
Oct 02 20:12:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85371f932d75cd7fb36c797d1629369f8e01b76983024628b3a65bede8e37b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85371f932d75cd7fb36c797d1629369f8e01b76983024628b3a65bede8e37b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85371f932d75cd7fb36c797d1629369f8e01b76983024628b3a65bede8e37b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85371f932d75cd7fb36c797d1629369f8e01b76983024628b3a65bede8e37b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:05 compute-0 podman[466040]: 2025-10-02 20:12:05.416947081 +0000 UTC m=+0.204078506 container init 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:12:05 compute-0 podman[466040]: 2025-10-02 20:12:05.444993752 +0000 UTC m=+0.232125137 container start 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:12:05 compute-0 podman[466040]: 2025-10-02 20:12:05.450049095 +0000 UTC m=+0.237180520 container attach 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:12:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 20:12:06 compute-0 angry_mclean[466053]: {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     "0": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "devices": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "/dev/loop3"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             ],
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_name": "ceph_lv0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_size": "21470642176",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "name": "ceph_lv0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "tags": {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_name": "ceph",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.crush_device_class": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.encrypted": "0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_id": "0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.vdo": "0"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             },
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "vg_name": "ceph_vg0"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         }
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     ],
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     "1": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "devices": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "/dev/loop4"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             ],
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_name": "ceph_lv1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_size": "21470642176",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "name": "ceph_lv1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "tags": {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_name": "ceph",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.crush_device_class": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.encrypted": "0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_id": "1",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.vdo": "0"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             },
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "vg_name": "ceph_vg1"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         }
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     ],
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     "2": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "devices": [
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "/dev/loop5"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             ],
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_name": "ceph_lv2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_size": "21470642176",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "name": "ceph_lv2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "tags": {
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.cluster_name": "ceph",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.crush_device_class": "",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.encrypted": "0",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osd_id": "2",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:                 "ceph.vdo": "0"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             },
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "type": "block",
Oct 02 20:12:06 compute-0 angry_mclean[466053]:             "vg_name": "ceph_vg2"
Oct 02 20:12:06 compute-0 angry_mclean[466053]:         }
Oct 02 20:12:06 compute-0 angry_mclean[466053]:     ]
Oct 02 20:12:06 compute-0 angry_mclean[466053]: }
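The JSON block that angry_mclean just printed is the output of `ceph-volume lvm list --format json`: keys are OSD ids, and each entry's LV tags bind the logical volume to its cluster (ceph.cluster_fsid) and OSD (ceph.osd_fsid). A minimal sketch, assuming the JSON has been saved to a file, that maps each OSD to its backing device:

    import json

    # osd_lvm.json: the ceph-volume lvm list JSON captured from the log above
    with open("osd_lvm.json") as fh:
        report = json.load(fh)

    for osd_id, entries in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=dbf9fafa-...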
Oct 02 20:12:06 compute-0 systemd[1]: libpod-76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7.scope: Deactivated successfully.
Oct 02 20:12:06 compute-0 podman[466064]: 2025-10-02 20:12:06.307724436 +0000 UTC m=+0.036847134 container died 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 20:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-85371f932d75cd7fb36c797d1629369f8e01b76983024628b3a65bede8e37b87-merged.mount: Deactivated successfully.
Oct 02 20:12:06 compute-0 podman[466064]: 2025-10-02 20:12:06.451048117 +0000 UTC m=+0.180170805 container remove 76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:12:06 compute-0 systemd[1]: libpod-conmon-76de0e9bca11b145756548d9fb6cab099233ceb428e77286174d17f5be03c8d7.scope: Deactivated successfully.
Oct 02 20:12:06 compute-0 sudo[465932]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:06 compute-0 sudo[466079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:06 compute-0 sudo[466079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:06 compute-0 sudo[466079]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:06 compute-0 sudo[466104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:12:06 compute-0 sudo[466104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:06 compute-0 sudo[466104]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:06 compute-0 sudo[466141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:06 compute-0 sudo[466141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:06 compute-0 podman[466128]: 2025-10-02 20:12:06.952490519 +0000 UTC m=+0.109399378 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 20:12:06 compute-0 sudo[466141]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:06 compute-0 podman[466129]: 2025-10-02 20:12:06.966212331 +0000 UTC m=+0.121130597 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=)
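Note that the config_data payload embedded in these podman health_status events is a Python literal (single-quoted strings, bare True), not JSON, so json.loads rejects it while ast.literal_eval parses it safely. A small sketch using an abridged copy of the kepler event above:

    import ast

    # Abridged config_data from the kepler health_status event; note the mix of
    # the string 'true' and the Python bool True, exactly as logged.
    raw = ("{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
           "'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], "
           "'recreate': True}")

    config = ast.literal_eval(raw)   # safe: evaluates literals only, no code
    print(config["image"], config["ports"])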
Oct 02 20:12:06 compute-0 ceph-mon[191910]: pgmap v2021: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 02 20:12:07 compute-0 sudo[466191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:12:07 compute-0 sudo[466191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.47814382 +0000 UTC m=+0.032201401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.586199141 +0000 UTC m=+0.140256742 container create 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 20:12:07 compute-0 nova_compute[355794]: 2025-10-02 20:12:07.639 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:07 compute-0 nova_compute[355794]: 2025-10-02 20:12:07.640 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:12:07 compute-0 systemd[1]: Started libpod-conmon-678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415.scope.
Oct 02 20:12:07 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.83703361 +0000 UTC m=+0.391091211 container init 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.856336659 +0000 UTC m=+0.410394220 container start 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.860615292 +0000 UTC m=+0.414672903 container attach 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:12:07 compute-0 charming_fermat[466266]: 167 167
Oct 02 20:12:07 compute-0 systemd[1]: libpod-678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415.scope: Deactivated successfully.
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.868968552 +0000 UTC m=+0.423026143 container died 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-73c50d0bc5ac4843966759321396341ce818d80e168dcd933dfe91ecf0c9a812-merged.mount: Deactivated successfully.
Oct 02 20:12:07 compute-0 podman[466252]: 2025-10-02 20:12:07.958634198 +0000 UTC m=+0.512691759 container remove 678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:12:07 compute-0 systemd[1]: libpod-conmon-678e8599507cea36d5b7230c1e525178af6d1f0dcd9584c8c799bd4b1566b415.scope: Deactivated successfully.
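The pull → create → init → start → attach → died → remove sequence above is cephadm's one-shot container pattern: each ceph-volume query from the sudo command at 20:12:07 runs in a throwaway container (hence the random names angry_mclean, charming_fermat, elastic_yalow) that prints JSON on stdout and exits within a second. A hedged sketch of the same invocation; the paths, image digest, and fsid are copied from the logged command, while the ceph_volume() wrapper itself is an assumption for illustration:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = ("/var/lib/ceph/" + FSID +
               "/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def ceph_volume(*args):
        """Run a ceph-volume query through cephadm's one-shot container,
        mirroring the sudo command logged at 20:12:07, and parse its JSON stdout."""
        cmd = ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
               "--timeout", "895", "ceph-volume", "--fsid", FSID,
               "--", *args, "--format", "json"]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # print(ceph_volume("raw", "list"))   # the query whose output follows below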
Oct 02 20:12:08 compute-0 podman[466291]: 2025-10-02 20:12:08.18796389 +0000 UTC m=+0.051882400 container create c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:12:08 compute-0 systemd[1]: Started libpod-conmon-c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2.scope.
Oct 02 20:12:08 compute-0 podman[466291]: 2025-10-02 20:12:08.171027813 +0000 UTC m=+0.034946323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:12:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc95719d3710c8ebfc5137ad690ca1d22ad85fedcdc35a7e76ce57f5f83859b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc95719d3710c8ebfc5137ad690ca1d22ad85fedcdc35a7e76ce57f5f83859b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc95719d3710c8ebfc5137ad690ca1d22ad85fedcdc35a7e76ce57f5f83859b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc95719d3710c8ebfc5137ad690ca1d22ad85fedcdc35a7e76ce57f5f83859b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:12:08 compute-0 podman[466291]: 2025-10-02 20:12:08.296697259 +0000 UTC m=+0.160615769 container init c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:12:08 compute-0 podman[466291]: 2025-10-02 20:12:08.311661504 +0000 UTC m=+0.175580014 container start c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:12:08 compute-0 podman[466291]: 2025-10-02 20:12:08.316242845 +0000 UTC m=+0.180161415 container attach c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:12:08 compute-0 nova_compute[355794]: 2025-10-02 20:12:08.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:08 compute-0 ceph-mon[191910]: pgmap v2022: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Oct 02 20:12:09 compute-0 elastic_yalow[466306]: {
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_id": 1,
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "type": "bluestore"
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     },
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_id": 2,
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "type": "bluestore"
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     },
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_id": 0,
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:         "type": "bluestore"
Oct 02 20:12:09 compute-0 elastic_yalow[466306]:     }
Oct 02 20:12:09 compute-0 elastic_yalow[466306]: }
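This second JSON block is `ceph-volume raw list --format json`, keyed by OSD uuid rather than OSD id. Its osd_uuid values should match the ceph.osd_fsid LV tags from the lvm list output at 20:12:06, and its device paths are the device-mapper names of the same LVs. A small consistency check, assuming both reports were saved from this log:

    import json

    # Captured from the log: lvm list keyed by OSD id, raw list keyed by OSD uuid.
    with open("osd_lvm.json") as fh:
        by_id = json.load(fh)
    with open("osd_raw.json") as fh:
        by_uuid = json.load(fh)

    # Every OSD uuid from 'raw list' should match the ceph.osd_fsid tag of the
    # corresponding logical volume from 'lvm list'.
    for uuid, info in by_uuid.items():
        osd_id = str(info["osd_id"])
        tags = by_id[osd_id][0]["tags"]
        assert tags["ceph.osd_fsid"] == uuid, (osd_id, uuid)
        print(f"osd.{osd_id}: {info['device']} ({info['type']}) ok")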
Oct 02 20:12:09 compute-0 systemd[1]: libpod-c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2.scope: Deactivated successfully.
Oct 02 20:12:09 compute-0 podman[466291]: 2025-10-02 20:12:09.458298732 +0000 UTC m=+1.322217242 container died c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 02 20:12:09 compute-0 systemd[1]: libpod-c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2.scope: Consumed 1.140s CPU time.
Oct 02 20:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfc95719d3710c8ebfc5137ad690ca1d22ad85fedcdc35a7e76ce57f5f83859b-merged.mount: Deactivated successfully.
Oct 02 20:12:09 compute-0 podman[466291]: 2025-10-02 20:12:09.5412236 +0000 UTC m=+1.405142110 container remove c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:12:09 compute-0 systemd[1]: libpod-conmon-c70aec1d40a6e55ce60ffafdcb5df9935f2b7a1bad7a266fe748f11f3e5437a2.scope: Deactivated successfully.
Oct 02 20:12:09 compute-0 sudo[466191]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:09 compute-0 nova_compute[355794]: 2025-10-02 20:12:09.591 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:09 compute-0 nova_compute[355794]: 2025-10-02 20:12:09.591 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:12:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.616498) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929616552, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1454, "num_deletes": 251, "total_data_size": 2333841, "memory_usage": 2384624, "flush_reason": "Manual Compaction"}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929637114, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2288687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40259, "largest_seqno": 41712, "table_properties": {"data_size": 2281827, "index_size": 3995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14036, "raw_average_key_size": 19, "raw_value_size": 2268235, "raw_average_value_size": 3212, "num_data_blocks": 179, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435776, "oldest_key_time": 1759435776, "file_creation_time": 1759435929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 20688 microseconds, and 6138 cpu microseconds.
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.637181) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2288687 bytes OK
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.637207) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.639623) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.639639) EVENT_LOG_v1 {"time_micros": 1759435929639634, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.639657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2327444, prev total WAL file size 2368888, number of live WAL files 2.
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.640940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2235KB)], [95(7230KB)]
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929641024, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9692847, "oldest_snapshot_seqno": -1}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:09 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5b108037-41d6-4e77-8aaf-545e74e6d069 does not exist
Oct 02 20:12:09 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 62a70791-903f-42d0-9174-6e5ace005f87 does not exist
Oct 02 20:12:09 compute-0 podman[466340]: 2025-10-02 20:12:09.656731248 +0000 UTC m=+0.153201334 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 20:12:09 compute-0 podman[466348]: 2025-10-02 20:12:09.683139374 +0000 UTC m=+0.151579010 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:12:09 compute-0 nova_compute[355794]: 2025-10-02 20:12:09.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5780 keys, 7935009 bytes, temperature: kUnknown
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929701134, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7935009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7898198, "index_size": 21277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149749, "raw_average_key_size": 25, "raw_value_size": 7795365, "raw_average_value_size": 1348, "num_data_blocks": 846, "num_entries": 5780, "num_filter_entries": 5780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759435929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.701832) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7935009 bytes
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.705915) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.1 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 6294, records dropped: 514 output_compression: NoCompression
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.705952) EVENT_LOG_v1 {"time_micros": 1759435929705939, "job": 56, "event": "compaction_finished", "compaction_time_micros": 60194, "compaction_time_cpu_micros": 22296, "output_level": 6, "num_output_files": 1, "total_output_size": 7935009, "num_input_records": 6294, "num_output_records": 5780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929706919, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759435929708770, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.640645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.709128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.709135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.709137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.709139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:12:09 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:12:09.709141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
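The monitor's embedded RocksDB emits structured EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_started/finished, table_file_deletion) as JSON after the marker, so the flush and compaction activity above can be mined mechanically. A sketch, again assuming a text export of this journal:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        """Yield parsed EVENT_LOG_v1 payloads from ceph-mon journal lines."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    with open("journal.txt") as fh:   # hypothetical text export of this journal
        for ev in rocksdb_events(fh):
            if ev.get("event") == "compaction_finished":
                # e.g. job 56: 7935009 bytes in 60194 us (from the lines above)
                print(ev["job"], ev["total_output_size"], ev["compaction_time_micros"])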
Oct 02 20:12:09 compute-0 podman[466349]: 2025-10-02 20:12:09.710442635 +0000 UTC m=+0.197841142 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 20:12:09 compute-0 sudo[466405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:12:09 compute-0 sudo[466405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:09 compute-0 sudo[466405]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:09 compute-0 nova_compute[355794]: 2025-10-02 20:12:09.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:09 compute-0 sudo[466433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:12:09 compute-0 sudo[466433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:12:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Oct 02 20:12:09 compute-0 sudo[466433]: pam_unix(sudo:session): session closed for user root
Oct 02 20:12:10 compute-0 nova_compute[355794]: 2025-10-02 20:12:10.327 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:12:10 compute-0 nova_compute[355794]: 2025-10-02 20:12:10.327 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:12:10 compute-0 nova_compute[355794]: 2025-10-02 20:12:10.328 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:12:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:10 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:12:10 compute-0 ceph-mon[191910]: pgmap v2023: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Oct 02 20:12:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:11 compute-0 podman[466459]: 2025-10-02 20:12:11.686708333 +0000 UTC m=+0.099549928 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:12:11 compute-0 podman[466458]: 2025-10-02 20:12:11.714745553 +0000 UTC m=+0.122117953 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git)
Oct 02 20:12:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.821 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.840 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.841 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.841 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.842 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.842 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.870 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.870 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.871 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
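
The three lockutils lines above are the standard oslo.concurrency pattern: a named internal semaphore is acquired, the critical section runs, and the decorator logs the waited/held durations seen here. A minimal sketch, assuming the oslo.concurrency API; the function body is a stand-in, not nova's code:

    from oslo_concurrency import lockutils

    # "compute_resources" is the lock name seen in the log lines above; every
    # resource-tracker method that mutates host state serializes on it.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section runs with the named semaphore held
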
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.871 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:12:11 compute-0 nova_compute[355794]: 2025-10-02 20:12:11.872 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:12:11 compute-0 ovn_controller[88435]: 2025-10-02T20:12:11Z|00183|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 02 20:12:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:12:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438396096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.493 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
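
The 0.6 s `ceph df` call above is how the libvirt driver sizes shared RBD storage: it shells out via oslo_concurrency.processutils and reads cluster totals from the JSON. A sketch of the same call (field names follow the usual `ceph df --format=json` layout; this is not nova's exact parser):

    import json
    import subprocess

    # Same command the tracker logs above; --id/--conf select the
    # client.openstack cephx identity.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3,
          "of total GiB:", stats["total_bytes"] / 1024**3)
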
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.640 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.641 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.641 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.648 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.649 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.657 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 nova_compute[355794]: 2025-10-02 20:12:12.658 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:12:12 compute-0 ceph-mon[191910]: pgmap v2024: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 02 20:12:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3438396096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.094 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.095 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3234MB free_disk=59.88882827758789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.095 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.095 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016578733435892426 of space, bias 1.0, pg target 0.49736200307677275 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
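
Each pg_autoscaler line above is the same computation: raw PG target = (pool's share of cluster space) x bias x PG budget, then quantized to a power of two and left alone unless it is far from the current pg_num. The logged targets reproduce exactly with a budget of 300, consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster (an assumption; the OSD count is not in these lines):

    def raw_pg_target(space_ratio, bias, pg_budget=300):
        # Pool 'vms': 0.0016578733435892426 * 1.0 * 300
        #   -> 0.49736200307677275 as logged; it stays quantized at 32
        #   because the autoscaler does not shrink for small deviations.
        return space_ratio * bias * pg_budget

    assert abs(raw_pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-15   # pool '.mgr'
    assert abs(raw_pg_target(5.087256625643029e-07, 4.0)
               - 0.0006104707950771635) < 1e-15   # 'cephfs.cephfs.meta'
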
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.282 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.283 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.283 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.284 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.284 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
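
The final view above is plain arithmetic over the three placement allocations logged just before it; the 512 MB gap on the RAM side is consistent with the host memory reservation visible in the inventory a few lines below:

    allocs = [  # from the "actively managed" lines above
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},
    ]
    used = {rc: sum(a[rc] for a in allocs) for rc in allocs[0]}
    # {'DISK_GB': 4, 'MEMORY_MB': 768, 'VCPU': 3}: used_disk=4GB and
    # used_vcpus=3 match directly; 768 + 512 reserved = 1280 = used_ram.
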
Oct 02 20:12:13 compute-0 nova_compute[355794]: 2025-10-02 20:12:13.502 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:12:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 02 20:12:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:12:13 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4018929375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.008 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.024 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.040 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
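
Placement turns that inventory into effective capacity per resource class as (total - reserved) x allocation_ratio, which is the ceiling the scheduler admits allocations against. Worked out from the values above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
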
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.071 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.073 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.806 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:14 compute-0 nova_compute[355794]: 2025-10-02 20:12:14.808 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:15 compute-0 ceph-mon[191910]: pgmap v2025: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Oct 02 20:12:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4018929375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:12:15 compute-0 nova_compute[355794]: 2025-10-02 20:12:15.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:12:16 compute-0 ceph-mon[191910]: pgmap v2026: 321 pgs: 321 active+clean; 265 MiB data, 401 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:12:16 compute-0 nova_compute[355794]: 2025-10-02 20:12:16.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:16 compute-0 nova_compute[355794]: 2025-10-02 20:12:16.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:12:16 compute-0 nova_compute[355794]: 2025-10-02 20:12:16.614 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:12:17 compute-0 nova_compute[355794]: 2025-10-02 20:12:17.614 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:17 compute-0 ovn_controller[88435]: 2025-10-02T20:12:17Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:22:b0 10.100.3.13
Oct 02 20:12:17 compute-0 ovn_controller[88435]: 2025-10-02T20:12:17Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:22:b0 10.100.3.13
Oct 02 20:12:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 271 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 815 KiB/s wr, 6 op/s
Oct 02 20:12:18 compute-0 ceph-mon[191910]: pgmap v2027: 321 pgs: 321 active+clean; 271 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 815 KiB/s wr, 6 op/s
Oct 02 20:12:19 compute-0 nova_compute[355794]: 2025-10-02 20:12:19.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:19 compute-0 nova_compute[355794]: 2025-10-02 20:12:19.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:19 compute-0 nova_compute[355794]: 2025-10-02 20:12:19.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 272 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 960 KiB/s wr, 22 op/s
Oct 02 20:12:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:20 compute-0 ceph-mon[191910]: pgmap v2028: 321 pgs: 321 active+clean; 272 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 960 KiB/s wr, 22 op/s
Oct 02 20:12:21 compute-0 podman[466546]: 2025-10-02 20:12:21.69233484 +0000 UTC m=+0.111571255 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:12:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 288 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 20:12:22 compute-0 ceph-mon[191910]: pgmap v2029: 321 pgs: 321 active+clean; 288 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 02 20:12:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 293 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 02 20:12:24 compute-0 nova_compute[355794]: 2025-10-02 20:12:24.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:24 compute-0 nova_compute[355794]: 2025-10-02 20:12:24.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:24 compute-0 ceph-mon[191910]: pgmap v2030: 321 pgs: 321 active+clean; 293 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 02 20:12:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:12:26 compute-0 ceph-mon[191910]: pgmap v2031: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:12:27 compute-0 nova_compute[355794]: 2025-10-02 20:12:27.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:12:27 compute-0 nova_compute[355794]: 2025-10-02 20:12:27.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:12:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:12:29 compute-0 ceph-mon[191910]: pgmap v2032: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 20:12:29 compute-0 podman[466566]: 2025-10-02 20:12:29.669620706 +0000 UTC m=+0.087748836 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:12:29 compute-0 podman[466567]: 2025-10-02 20:12:29.69441313 +0000 UTC m=+0.110585108 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm)
Oct 02 20:12:29 compute-0 nova_compute[355794]: 2025-10-02 20:12:29.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:29 compute-0 podman[157186]: time="2025-10-02T20:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:12:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:12:29 compute-0 nova_compute[355794]: 2025-10-02 20:12:29.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9556 "" "Go-http-client/1.1"
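
Those two podman[157186] access-log lines are the podman system service answering libpod REST calls over its UNIX socket; the exporters query it for container lists and stats. A self-contained stdlib sketch of the same GET; the socket path is taken from the podman.sock mount in the podman_exporter config above:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over AF_UNIX; the "localhost" host header is ignored here.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read(200))
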
Oct 02 20:12:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 225 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 02 20:12:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:31 compute-0 ceph-mon[191910]: pgmap v2033: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 225 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 02 20:12:31 compute-0 openstack_network_exporter[372736]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:12:31 compute-0 openstack_network_exporter[372736]: ERROR   20:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:12:31 compute-0 openstack_network_exporter[372736]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:12:31 compute-0 openstack_network_exporter[372736]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:12:31 compute-0 openstack_network_exporter[372736]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
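
The exporter errors above are unixctl lookups failing: openstack-network-exporter locates each daemon through its *.ctl control socket, and on a compute node ovn-northd (a control-plane daemon) has none, while the ovsdb-server and datapath probes fail because the sockets it expects are not at the paths it searches. A quick check of what actually exists, with the patterns assumed from the /run/openvswitch and /run/ovn mounts in the exporter's config_data above:

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket files found")
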
Oct 02 20:12:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 02 20:12:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:12:32.327 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:12:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:12:32.328 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:12:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:12:32.328 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:12:33 compute-0 ceph-mon[191910]: pgmap v2034: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:12:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 02 20:12:34 compute-0 ceph-mon[191910]: pgmap v2035: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 02 20:12:34 compute-0 nova_compute[355794]: 2025-10-02 20:12:34.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:34 compute-0 nova_compute[355794]: 2025-10-02 20:12:34.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Oct 02 20:12:36 compute-0 ceph-mon[191910]: pgmap v2036: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Oct 02 20:12:37 compute-0 podman[466605]: 2025-10-02 20:12:37.652677655 +0000 UTC m=+0.079395946 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 02 20:12:37 compute-0 podman[466604]: 2025-10-02 20:12:37.656356502 +0000 UTC m=+0.087294305 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 20:12:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:12:38 compute-0 ceph-mon[191910]: pgmap v2037: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:12:39 compute-0 nova_compute[355794]: 2025-10-02 20:12:39.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:39 compute-0 nova_compute[355794]: 2025-10-02 20:12:39.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:12:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:40 compute-0 podman[466641]: 2025-10-02 20:12:40.662540616 +0000 UTC m=+0.098584813 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 20:12:40 compute-0 podman[466643]: 2025-10-02 20:12:40.680460178 +0000 UTC m=+0.102469105 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 20:12:40 compute-0 podman[466642]: 2025-10-02 20:12:40.680627943 +0000 UTC m=+0.109429259 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:12:40 compute-0 ceph-mon[191910]: pgmap v2038: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:12:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:42 compute-0 podman[466703]: 2025-10-02 20:12:42.683631005 +0000 UTC m=+0.096536708 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:12:42 compute-0 podman[466702]: 2025-10-02 20:12:42.705737399 +0000 UTC m=+0.136254737 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm)
Oct 02 20:12:42 compute-0 ceph-mon[191910]: pgmap v2039: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:44 compute-0 nova_compute[355794]: 2025-10-02 20:12:44.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:44 compute-0 nova_compute[355794]: 2025-10-02 20:12:44.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:44 compute-0 ceph-mon[191910]: pgmap v2040: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:46 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 20:12:46 compute-0 ceph-mon[191910]: pgmap v2041: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:46 compute-0 dnf[466743]: Metadata cache refreshed recently.
Oct 02 20:12:47 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 20:12:47 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 20:12:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:48 compute-0 ceph-mon[191910]: pgmap v2042: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct 02 20:12:49 compute-0 nova_compute[355794]: 2025-10-02 20:12:49.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:49 compute-0 nova_compute[355794]: 2025-10-02 20:12:49.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:51 compute-0 ceph-mon[191910]: pgmap v2043: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 20:12:52 compute-0 podman[466745]: 2025-10-02 20:12:52.679098686 +0000 UTC m=+0.101625003 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 20:12:53 compute-0 ceph-mon[191910]: pgmap v2044: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 20:12:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 02 20:12:54 compute-0 ceph-mon[191910]: pgmap v2045: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 02 20:12:54 compute-0 nova_compute[355794]: 2025-10-02 20:12:54.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:54 compute-0 nova_compute[355794]: 2025-10-02 20:12:54.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:12:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:56 compute-0 ceph-mon[191910]: pgmap v2046: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:59 compute-0 ceph-mon[191910]: pgmap v2047: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:12:59 compute-0 nova_compute[355794]: 2025-10-02 20:12:59.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:59 compute-0 podman[157186]: time="2025-10-02T20:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:12:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:12:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9562 "" "Go-http-client/1.1"
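
The two GET lines above are the libpod REST API answering over the podman socket; the podman_exporter container defined later in this log mounts /run/podman/podman.sock for exactly this purpose. A stdlib-only sketch of the same containers/json call, assuming the default root socket path:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a Unix socket, not TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
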
Oct 02 20:12:59 compute-0 nova_compute[355794]: 2025-10-02 20:12:59.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:12:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:13:00 compute-0 ceph-mon[191910]: pgmap v2048: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:13:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:00 compute-0 podman[466765]: 2025-10-02 20:13:00.679608655 +0000 UTC m=+0.095741458 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, managed_by=edpm_ansible)
Oct 02 20:13:00 compute-0 podman[466764]: 2025-10-02 20:13:00.690142463 +0000 UTC m=+0.123309895 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:13:01 compute-0 openstack_network_exporter[372736]: ERROR   20:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:13:01 compute-0 openstack_network_exporter[372736]: ERROR   20:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:13:01 compute-0 openstack_network_exporter[372736]: ERROR   20:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:13:01 compute-0 openstack_network_exporter[372736]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:13:01 compute-0 openstack_network_exporter[372736]: ERROR   20:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
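
The appctl errors above mean the exporter could not locate control sockets for ovn-northd or ovsdb-server, and the dpif-netdev calls fail because no userspace (DPDK) datapath exists on this node. OVS-style daemons expose a control socket named <daemon>.<pid>.ctl in their run directory, so discovery amounts to a glob; a hedged sketch, with run directories taken from the exporter container's volume mounts:

    from pathlib import Path

    def find_ctl(daemon, rundirs=("/run/openvswitch", "/run/ovn")):
        """Return the first <daemon>.<pid>.ctl control socket found, else None."""
        for d in rundirs:
            hits = sorted(Path(d).glob(f"{daemon}.*.ctl"))
            if hits:
                return hits[0]
        return None

    # ovn-northd runs on control-plane nodes, so None is expected on a compute
    # node and these errors are benign here.
    print(find_ctl("ovn-northd"))
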
Oct 02 20:13:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:13:02 compute-0 ceph-mon[191910]: pgmap v2049: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections...
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:13:03
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
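
The balancer pass above ran in upmap mode with a max misplaced ratio of 0.05 and prepared 0/10 changes, i.e. the 321 PGs are already evenly placed across the listed pools. The same state can be queried interactively; a sketch assuming the ceph CLI is configured on this host:

    import json, subprocess

    # Hedged sketch: ask the mgr balancer module for its status as JSON.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        capture_output=True, check=True, text=True).stdout)
    print(status.get("active"), status.get("mode"))  # expect: True upmap
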
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections...
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections...
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.305 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling run can therefore be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.305 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
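
The two DEBUG lines above say the [pollsters] source has more pollsters than worker threads, so the executor serializes them: with one worker, a polling cycle takes roughly the sum of the individual pollster times. A tiny illustration of that effect with stand-in workloads:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as ex:  # mirrors "[1] threads" above
        list(ex.map(poll, ["disk.read", "disk.write", "power.state"]))
    print(f"{time.monotonic() - start:.1f}s")      # ~0.3s: tasks ran back to back
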
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343350dc10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
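
The run of "Registering pollster" lines is ceilometer enumerating its compute pollsters, which are stevedore extensions loaded from Python entry points and handed to the thread-pool executor with shared caches. A minimal sketch of that loading step; the namespace string is the conventional one for the compute agent and should be treated as an assumption here:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumed entry-point namespace
        invoke_on_load=False,
    )
    for ext in mgr:
        print(ext.name)  # e.g. disk.device.read.requests, power.state, ...
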
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.316 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.320 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 20:13:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:04.322 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/03794a5e-b5ab-4b9e-8052-6de08e4c9f84 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}83403cbd27ca997ec29507654ce1f65c7e3234a2ba47473760e788c908204f10" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:13:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:13:04 compute-0 nova_compute[355794]: 2025-10-02 20:13:04.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:04 compute-0 nova_compute[355794]: 2025-10-02 20:13:04.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:04 compute-0 ceph-mon[191910]: pgmap v2050: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.160 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Thu, 02 Oct 2025 20:13:04 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-15970fd6-0831-4daa-95bc-99fd2521aab2 x-openstack-request-id: req-15970fd6-0831-4daa-95bc-99fd2521aab2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.160 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "03794a5e-b5ab-4b9e-8052-6de08e4c9f84", "name": "te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr", "status": "ACTIVE", "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "user_id": "e5d4abc29b2e475e9c7c54249ca341c4", "metadata": {"metering.server_group": "f724f930-b01d-4568-9d24-c7060da9fe9c"}, "hostId": "01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e", "image": {"id": "fe71959f-8f59-4b45-ae05-4216d5f12fab", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/fe71959f-8f59-4b45-ae05-4216d5f12fab"}]}, "flavor": {"id": "2a4d7fef-934e-4921-8c3b-c6783966faa5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2a4d7fef-934e-4921-8c3b-c6783966faa5"}]}, "created": "2025-10-02T20:11:31Z", "updated": "2025-10-02T20:11:43Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a4:22:b0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/03794a5e-b5ab-4b9e-8052-6de08e4c9f84"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/03794a5e-b5ab-4b9e-8052-6de08e4c9f84"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T20:11:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.160 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/03794a5e-b5ab-4b9e-8052-6de08e4c9f84 used request id req-15970fd6-0831-4daa-95bc-99fd2521aab2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
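
The REQ/RESP pair above is python-novaclient fetching one server's metadata over the internal endpoint (note the X-Auth-Token is logged only as a SHA256 hash). The equivalent lookup through the client library, sketched with placeholder credentials; the auth URL, project, and domain names are assumptions:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed URL
        username="ceilometer", password="...",                       # placeholders
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth),
                         endpoint_type="internal")
    server = nova.servers.get("03794a5e-b5ab-4b9e-8052-6de08e4c9f84")
    print(server.name, server.status)  # matches the RESP BODY above
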
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.163 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'name': 'te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.168 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.169 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.169 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:13:05.169836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.220 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.221 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.221 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.254 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 1052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.255 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.289 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1074 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.290 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.292 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.293 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:13:05.293558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.314 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.314 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.314 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.327 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.327 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.340 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.340 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
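
The disk.device.usage samples of 1073741824 bytes are exactly 1 GiB, matching 'disk': 1 (and, for m1.small, 'ephemeral': 1) in the flavors logged during discovery; the small ~0.5 MB devices are plausibly the config-drive images ("config_drive": "True" in the RESP BODY above). A quick check:

    assert 1073741824 == 1 * 2**30       # root/ephemeral disks: exactly 1 GiB
    print(485376 / 1024, 509952 / 1024)  # small devices in KiB: 474.0 498.0
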
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.342 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.342 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.342 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.343 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 72802304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.343 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.344 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.344 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.346 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.346 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:13:05.342070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:13:05.345991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.347 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.347 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 9460675432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.347 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.347 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61099367193 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.348 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.350 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.350 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:13:05.350717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.374 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.406 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.441 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
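
All three instances report power.state volume 1. Ceilometer forwards the Nova-style power-state code, in which 1 means RUNNING; the mapping below uses the standard nova.compute.power_state values and is included as a reference, not taken from this log:

    # Assumption: standard Nova power-state codes.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATES[1])  # RUNNING, matching "power.state volume: 1" above
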
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.443 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.443 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.444 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.445 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.446 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.446 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.446 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.447 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.447 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:13:05.443997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
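[The three write.requests volumes logged for d4e04444... (231, 1, 0) are one sample per attached block device, not three readings of one disk. A sketch of how such per-device counters can be pulled from libvirt, with device names discovered from the domain XML (the URI is an assumption):

    # Per-device write-request counters, one value per disk, mirroring
    # the several "volume:" lines per instance above.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        tree = ET.fromstring(dom.XMLDesc(0))
        for target in tree.findall("./devices/disk/target"):
            dev = target.get("dev")  # e.g. "vda"
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            print(dom.UUIDString(), dev, "write.requests:", wr_req)
    conn.close()
]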
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.449 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.450 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:13:05.450261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.456 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.461 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 / tap8a7a2e73-ae inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.461 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.466 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
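[A *.delta meter is the difference between two successive polls of a cumulative counter, so the first observation of a vNIC, like tap8a7a2e73-ae above, has no predecessor to subtract and is logged as such. A sketch of that bookkeeping (the cache shape and the numbers are illustrative):

    # Delta bookkeeping for a cumulative counter: the first sight of an
    # (instance, device) pair has no predecessor; later polls subtract.
    _previous: dict[tuple[str, str], int] = {}

    def delta(instance_id: str, dev: str, current: int) -> int | None:
        key = (instance_id, dev)
        prev = _previous.get(key)
        _previous[key] = current
        if prev is None:
            return None                    # "No delta meter predecessor"
        return max(current - prev, 0)      # guard against counter resets

    assert delta("uuid-1", "tap0", 1646) is None   # first poll
    assert delta("uuid-1", "tap0", 1814) == 168    # 1814 - 1646
]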
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.467 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:13:05.468096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.470 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.470 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.471 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr>]
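[The rate meters (network.incoming.bytes.rate here, network.outgoing.bytes.rate below) are not provided by the libvirt inspector, so the pollster raises PollsterPermanentError and the manager blacklists those resources for that pollster and source rather than repeating the failure every cycle. A simplified sketch of the pattern (class and names trimmed, not the actual manager code):

    # Blacklisting pattern: a permanent failure removes the offending
    # resources from future polls instead of erroring on every cycle.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    blacklist = []

    def poll_once(pollster, resources):
        candidates = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(candidates))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # "... anymore!"
            return []
]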
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.473 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T20:13:05.470274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:13:05.473836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.476 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.477 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.477 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.478 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.478 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
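[The incoming packet counts above (30, 12, 15) are per-vNIC counters that libvirt returns as an 8-tuple per interface; the same tuple also feeds the drop and error meters that follow. A minimal sketch, with tap devices discovered from the domain XML (the URI is an assumption):

    # Raw per-vNIC counters behind the network.incoming.* and
    # network.outgoing.* meters.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        tree = ET.fromstring(dom.XMLDesc(0))
        for target in tree.findall("./devices/interface/target"):
            dev = target.get("dev")  # e.g. "tap8a7a2e73-ae"
            (rx_bytes, rx_packets, rx_errs, rx_drop,
             tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(dev)
            print(dom.UUIDString(), dev, "incoming.packets:", rx_packets)
    conn.close()
]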
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.479 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:13:05.476516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.480 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.480 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.481 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:13:05.480014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:13:05.482106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.482 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.483 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.484 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:13:05.483851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.484 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:13:05.485424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.486 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.486 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 29584384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.486 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.487 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 29916160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.487 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.488 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.489 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.489 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:13:05.488599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:13:05.490988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.491 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.492 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.492 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.492 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.493 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.494 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T20:13:05.493917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.494 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr>]
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:13:05.495254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.496 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.497 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/memory.usage volume: 43.35546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.497 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 43.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
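[The memory.usage volumes are megabytes derived from KiB counters, which is why they carry long fractions: 48.8515625 MB is exactly 50024 KiB. A sketch of one plausible derivation from libvirt's per-domain memory stats (the exact formula ceilometer applies varies by release; available minus usable, with an rss fallback, is an assumption here):

    # Derive a memory.usage-style figure in MB from libvirt's memory
    # stats, which are reported in KiB.
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        stats = dom.memoryStats()            # dict of KiB counters
        if "available" in stats and "usable" in stats:
            used_kib = stats["available"] - stats["usable"]
        else:
            used_kib = stats.get("rss", 0)   # fallback assumption
        print(dom.UUIDString(), "memory.usage MB:", used_kib / 1024)
    conn.close()
]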
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.498 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:13:05.498681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.499 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes volume: 1646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.499 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.500 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:13:05.500211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:13:05.502023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.503 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.503 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.503 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.503 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
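[The capacity samples repeat the 1073741824-byte (1 GiB) virtual size of each instance's main disks, while the small 485376/509952 values belong to a third, much smaller device per instance; both figures pair with the disk.device.allocation samples earlier. libvirt reports them together as a (capacity, allocation, physical) triple; a sketch (the UUID is taken from the log, the device name "vda" is an assumption):

    # The triple behind disk.device.capacity / disk.device.allocation.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77")
    capacity, allocation, physical = dom.blockInfo("vda")
    print(capacity)    # virtual size in bytes, 1073741824 == 1 GiB here
    print(allocation)  # bytes actually allocated on the host
    conn.close()
]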
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.505 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:13:05.505023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.506 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.507 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:13:05.506723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.507 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 60360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:13:05.508547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.508 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/cpu volume: 78420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.509 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 234710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
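The cpu meter is cumulative guest CPU time in nanoseconds, so a single sample such as 60360000000 is not a utilization figure; utilization comes from differencing two consecutive polls. A worked sketch, assuming a 300 s polling interval and 2 vCPUs (both assumed for illustration, neither appears in the log):

    # Two consecutive cumulative samples for one instance (ns of CPU time).
    prev_ns = 60_360_000_000          # value from the log
    curr_ns = 60_660_000_000          # hypothetical next poll
    interval_s, vcpus = 300, 2        # assumptions, not in the log
    util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}%")         # -> 0.05% average utilization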
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:13:05.510116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.511 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.511 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 1970969152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.511 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 129342858 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.511 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3039231407 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.512 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 196198639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:13:05 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:13:05.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
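The run of "Finished processing pollster [...]" lines marks the end of one polling task; tallying them is a quick way to confirm every configured meter completed in the interval. A small sketch that counts them from an exported journal (the file name is hypothetical):

    import re
    from collections import Counter

    finished = Counter()
    with open("compute-0-journal.log") as fh:   # e.g. journalctl output saved to a file
        for line in fh:
            m = re.search(r"Finished processing pollster \[([^\]]+)\]", line)
            if m:
                finished[m.group(1)] += 1
    print(len(finished), "pollsters:", sorted(finished))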
Oct 02 20:13:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:13:06 compute-0 ceph-mon[191910]: pgmap v2051: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:13:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:08 compute-0 podman[466806]: 2025-10-02 20:13:08.68757633 +0000 UTC m=+0.111653147 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:13:08 compute-0 podman[466807]: 2025-10-02 20:13:08.70729362 +0000 UTC m=+0.126864248 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, managed_by=edpm_ansible, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543)
Oct 02 20:13:08 compute-0 ceph-mon[191910]: pgmap v2052: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:09 compute-0 nova_compute[355794]: 2025-10-02 20:13:09.594 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:09 compute-0 nova_compute[355794]: 2025-10-02 20:13:09.595 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:13:09 compute-0 nova_compute[355794]: 2025-10-02 20:13:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:09 compute-0 nova_compute[355794]: 2025-10-02 20:13:09.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:09 compute-0 sudo[466842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:09 compute-0 sudo[466842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:09 compute-0 sudo[466842]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:10 compute-0 sudo[466867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:13:10 compute-0 sudo[466867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:10 compute-0 sudo[466867]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:10 compute-0 sudo[466892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:10 compute-0 sudo[466892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:10 compute-0 sudo[466892]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:10 compute-0 sudo[466917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:13:10 compute-0 sudo[466917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:10 compute-0 nova_compute[355794]: 2025-10-02 20:13:10.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:10 compute-0 sudo[466917]: pam_unix(sudo:session): session closed for user root
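The ceph-admin sudo triplet above (/bin/true, which python3, then the staged cephadm script) is cephadm's usual remote-execution pattern: confirm passwordless sudo, locate an interpreter, then run the real task (here gather-facts with an 895 s timeout). Approximated with subprocess as a sketch of the sequence, not cephadm's own code:

    import subprocess

    def cephadm_style_run(script, *args):
        subprocess.run(["sudo", "/bin/true"], check=True)          # can we sudo at all?
        py = subprocess.run(["sudo", "which", "python3"], check=True,
                            capture_output=True, text=True).stdout.strip()
        return subprocess.run(["sudo", py, script, "--timeout", "895", *args],
                              check=True)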
Oct 02 20:13:11 compute-0 ceph-mon[191910]: pgmap v2053: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:13:11 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 58947fb6-8299-4b0b-bab5-c958e7a45198 does not exist
Oct 02 20:13:11 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ae9fa014-88c0-45d3-8be7-f73e3ccfa4a7 does not exist
Oct 02 20:13:11 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2d7a6ba2-9856-4eff-ac25-b6f3352a7f01 does not exist
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:13:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:13:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:13:11 compute-0 sudo[466971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:11 compute-0 sudo[466971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:11 compute-0 sudo[466971]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:11 compute-0 sudo[467017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:13:11 compute-0 sudo[467017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:11 compute-0 podman[466996]: 2025-10-02 20:13:11.375243399 +0000 UTC m=+0.117668976 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:13:11 compute-0 sudo[467017]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:11 compute-0 podman[466995]: 2025-10-02 20:13:11.38171567 +0000 UTC m=+0.126949641 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 20:13:11 compute-0 podman[466997]: 2025-10-02 20:13:11.447601288 +0000 UTC m=+0.178346237 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 20:13:11 compute-0 sudo[467077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:11 compute-0 sudo[467077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:11 compute-0 sudo[467077]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:11 compute-0 sudo[467108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
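This invocation runs ceph-volume inside the digest-pinned ceph container to prepare three pre-created logical volumes as OSDs in one batch: --no-auto disables automatic fast/slow device sorting, --yes skips the interactive prompt, and --no-systemd leaves unit management to cephadm. When reproducing such a batch by hand, ceph-volume can print its plan without touching the devices via --report; a sketch (LV paths copied from the log, direct host invocation assumed):

    import json, subprocess

    devs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]
    plan = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *devs, "--report", "--format", "json"],
        check=True, capture_output=True, text=True).stdout    # report only, no changes
    print(json.dumps(json.loads(plan), indent=2))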
Oct 02 20:13:11 compute-0 nova_compute[355794]: 2025-10-02 20:13:11.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:11 compute-0 nova_compute[355794]: 2025-10-02 20:13:11.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:13:11 compute-0 nova_compute[355794]: 2025-10-02 20:13:11.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:13:11 compute-0 sudo[467108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:13:12 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.189620398 +0000 UTC m=+0.075499253 container create e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.159955845 +0000 UTC m=+0.045834700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:12 compute-0 systemd[1]: Started libpod-conmon-e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b.scope.
Oct 02 20:13:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.383942316 +0000 UTC m=+0.269821251 container init e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.40116258 +0000 UTC m=+0.287041435 container start e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.407039175 +0000 UTC m=+0.292918050 container attach e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 20:13:12 compute-0 trusting_wilbur[467187]: 167 167
Oct 02 20:13:12 compute-0 systemd[1]: libpod-e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b.scope: Deactivated successfully.
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.421591269 +0000 UTC m=+0.307470154 container died e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 20:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-af619252bbf8bac0b72506c4e750173aa474f0776142aaee74db4c1054371307-merged.mount: Deactivated successfully.
Oct 02 20:13:12 compute-0 podman[467171]: 2025-10-02 20:13:12.503420969 +0000 UTC m=+0.389299814 container remove e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:13:12 compute-0 systemd[1]: libpod-conmon-e4791e37380045ec34c3706add6c7b6f2ea551d30118ad25853af0a71b6c227b.scope: Deactivated successfully.
Oct 02 20:13:12 compute-0 nova_compute[355794]: 2025-10-02 20:13:12.789 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:13:12 compute-0 nova_compute[355794]: 2025-10-02 20:13:12.790 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:13:12 compute-0 nova_compute[355794]: 2025-10-02 20:13:12.790 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:13:12 compute-0 nova_compute[355794]: 2025-10-02 20:13:12.790 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
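Nova's _heal_instance_info_cache task walks one instance per pass, taking a per-instance "refresh_cache-<uuid>" lock before forcing a network-info refresh from Neutron. The lock is oslo.concurrency's named lock; a minimal sketch with a placeholder refresh body:

    from oslo_concurrency import lockutils

    def heal_one(instance_uuid, refresh_network_info):
        # Matches the "Acquiring lock"/"Acquired lock" pair in the log.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            refresh_network_info(instance_uuid)   # placeholder for the Neutron refresh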
Oct 02 20:13:12 compute-0 podman[467210]: 2025-10-02 20:13:12.799262555 +0000 UTC m=+0.082376795 container create 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:13:12 compute-0 podman[467210]: 2025-10-02 20:13:12.76687585 +0000 UTC m=+0.049990090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:12 compute-0 systemd[1]: Started libpod-conmon-43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe.scope.
Oct 02 20:13:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:13 compute-0 podman[467210]: 2025-10-02 20:13:13.004426519 +0000 UTC m=+0.287540739 container init 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:13:13 compute-0 podman[467224]: 2025-10-02 20:13:13.013405426 +0000 UTC m=+0.154121378 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git)
Oct 02 20:13:13 compute-0 podman[467225]: 2025-10-02 20:13:13.014646398 +0000 UTC m=+0.148502399 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:13:13 compute-0 podman[467210]: 2025-10-02 20:13:13.026558033 +0000 UTC m=+0.309672233 container start 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:13:13 compute-0 podman[467210]: 2025-10-02 20:13:13.032712995 +0000 UTC m=+0.315827285 container attach 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:13:13 compute-0 ceph-mon[191910]: pgmap v2054: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002066825185583122 of space, bias 1.0, pg target 0.6200475556749365 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
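
In every pg_autoscaler line above, pg target = usage ratio × 300 × bias (e.g. 0.002066825185583122 × 300 ≈ 0.620 for 'vms', and ×1200 for the bias-4.0 metadata pools), i.e. mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs. A rough reconstruction of the heuristic and its power-of-two quantization, assuming those defaults ('.mgr' carries pg_num_min=1 and CephFS metadata pools 16, which matches the "quantized to 1" and "quantized to 16" lines):

    import math

    def autoscaler_target(capacity_ratio, bias, n_osds=3,
                          target_pg_per_osd=100, pg_num_min=32):
        # pg target = ratio * bias * (OSDs * target PGs per OSD), then
        # rounded to the nearest power of two and clamped to pg_num_min.
        raw = capacity_ratio * bias * n_osds * target_pg_per_osd
        nearest = 1 << max(0, round(math.log2(raw))) if raw > 0 else 1
        return raw, max(pg_num_min, nearest)

    print(autoscaler_target(0.002066825185583122, 1.0))                  # ~(0.620, 32)
    print(autoscaler_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # ~(0.00216, 1)
    print(autoscaler_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # ~(0.00061, 16)
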
Oct 02 20:13:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:14 compute-0 ceph-mon[191910]: pgmap v2055: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:14 compute-0 charming_keldysh[467245]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:13:14 compute-0 charming_keldysh[467245]: --> relative data size: 1.0
Oct 02 20:13:14 compute-0 charming_keldysh[467245]: --> All data devices are unavailable
Oct 02 20:13:14 compute-0 systemd[1]: libpod-43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe.scope: Deactivated successfully.
Oct 02 20:13:14 compute-0 conmon[467245]: conmon 43122770abe33f55b3f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe.scope/container/memory.events
Oct 02 20:13:14 compute-0 systemd[1]: libpod-43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe.scope: Consumed 1.210s CPU time.
Oct 02 20:13:14 compute-0 podman[467210]: 2025-10-02 20:13:14.312143955 +0000 UTC m=+1.595258155 container died 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac8c80813bca3c32ca8100dac8c2be6eb6c74e0327cd2a361356a59baa1dd0e4-merged.mount: Deactivated successfully.
Oct 02 20:13:14 compute-0 podman[467210]: 2025-10-02 20:13:14.577724413 +0000 UTC m=+1.860838623 container remove 43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:13:14 compute-0 sudo[467108]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:14 compute-0 systemd[1]: libpod-conmon-43122770abe33f55b3f4ba539131154ff10f56d3d497cbd375a92aee2ce479fe.scope: Deactivated successfully.
Oct 02 20:13:14 compute-0 nova_compute[355794]: 2025-10-02 20:13:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:14 compute-0 sudo[467308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:14 compute-0 sudo[467308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:14 compute-0 sudo[467308]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:14 compute-0 nova_compute[355794]: 2025-10-02 20:13:14.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:14 compute-0 sudo[467333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:13:14 compute-0 sudo[467333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:14 compute-0 sudo[467333]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:14 compute-0 sudo[467358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:14 compute-0 sudo[467358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:14 compute-0 sudo[467358]: pam_unix(sudo:session): session closed for user root
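
The /bin/true → /bin/which python3 → /bin/true triple repeats before every cephadm invocation in this log: the orchestrator confirms passwordless sudo works, then locates a python3 interpreter to run the copied cephadm script with; the second /bin/true is most likely a fresh connection check. A hypothetical reproduction of the probe over SSH (host string is a placeholder):

    import subprocess

    HOST = "ceph-admin@compute-0"  # placeholder target

    def probe(cmd):
        # Each remote sudo shows up as a pam_unix session open/close pair.
        return subprocess.run(["ssh", HOST, "sudo"] + cmd,
                              capture_output=True, text=True)

    probe(["/bin/true"])                                   # can we sudo at all?
    py = probe(["/bin/which", "python3"]).stdout.strip()   # -> /usr/bin/python3
    probe(["/bin/true"])                                   # re-check on next connection
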
Oct 02 20:13:15 compute-0 sudo[467383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:13:15 compute-0 sudo[467383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.569270837 +0000 UTC m=+0.063133747 container create da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:13:15 compute-0 systemd[1]: Started libpod-conmon-da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370.scope.
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.546705071 +0000 UTC m=+0.040567991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:15 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.715087494 +0000 UTC m=+0.208950484 container init da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.732096363 +0000 UTC m=+0.225959283 container start da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.738947514 +0000 UTC m=+0.232810464 container attach da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:13:15 compute-0 condescending_neumann[467464]: 167 167
Oct 02 20:13:15 compute-0 systemd[1]: libpod-da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370.scope: Deactivated successfully.
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.746690358 +0000 UTC m=+0.240553308 container died da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fb2b6100a0f032da0022b32200eb96735a0a0b344a12c54b61a59fa66abcaad-merged.mount: Deactivated successfully.
Oct 02 20:13:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:15 compute-0 podman[467448]: 2025-10-02 20:13:15.913310045 +0000 UTC m=+0.407172995 container remove da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_neumann, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:13:15 compute-0 systemd[1]: libpod-conmon-da36928deb42eb2cd3adcae8bf7979ed494dd1c2b9787f912a99e63616b7f370.scope: Deactivated successfully.
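
The throwaway condescending_neumann container above lives for ~10 ms just to print "167 167": cephadm launches the image to discover the uid/gid of the ceph user inside it (167 in these builds) before running ceph-volume as that user. A sketch of the equivalent probe — the exact path cephadm stats is an assumption here, but it inspects a ceph-owned directory such as /var/lib/ceph:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run `stat` inside the image and read back the ceph uid/gid.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # expected: 167 167, as logged
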
Oct 02 20:13:16 compute-0 podman[467487]: 2025-10-02 20:13:16.227362002 +0000 UTC m=+0.100352169 container create cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.226 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:13:16 compute-0 podman[467487]: 2025-10-02 20:13:16.186659228 +0000 UTC m=+0.059649465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.313 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.313 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
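
The network_info blob cached by the _heal_instance_info_cache task above carries everything nova knows about the port: fixed IP 192.168.0.37 behind floating 192.168.122.205, on an OVN-bound OVS port attached to br-int. A small sketch for pulling addresses out of such a cache entry — the structure is as logged, but this is an illustration, not a nova API:

    import json

    def addresses(network_info):
        # Yield (fixed_ip, [floating_ips]) for every IP on every VIF.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"]
                                          for f in ip.get("floating_ips", [])]

    with open("network_info.json") as f:   # hypothetical dump of the JSON above
        for fixed, floating in addresses(json.load(f)):
            print(fixed, "->", floating)   # 192.168.0.37 -> ['192.168.122.205']
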
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.313 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.313 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.314 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.314 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:16 compute-0 systemd[1]: Started libpod-conmon-cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744.scope.
Oct 02 20:13:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe847b7f516cf22f739a9c53fa5687fad675289c1695b6069b336b581ec1e3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe847b7f516cf22f739a9c53fa5687fad675289c1695b6069b336b581ec1e3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe847b7f516cf22f739a9c53fa5687fad675289c1695b6069b336b581ec1e3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe847b7f516cf22f739a9c53fa5687fad675289c1695b6069b336b581ec1e3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.399 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.399 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.399 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.400 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.400 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:13:16 compute-0 podman[467487]: 2025-10-02 20:13:16.443860995 +0000 UTC m=+0.316851212 container init cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 20:13:16 compute-0 podman[467487]: 2025-10-02 20:13:16.460104473 +0000 UTC m=+0.333094600 container start cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 02 20:13:16 compute-0 podman[467487]: 2025-10-02 20:13:16.466230315 +0000 UTC m=+0.339220532 container attach cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:13:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:13:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502282891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:13:16 compute-0 nova_compute[355794]: 2025-10-02 20:13:16.871 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:13:16 compute-0 ceph-mon[191910]: pgmap v2056: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:16 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3502282891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
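
Nova's resource audit shells out to `ceph df` to size the RBD-backed disk pool, and the mon audit lines above show the matching dispatch for client.openstack. A minimal standalone equivalent, assuming the same conf/keyring and the top-level "stats" keys present in recent Ceph releases:

    import json
    import subprocess

    # Same command the resource tracker logged, run directly.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB")  # ~60/60 GiB, per the pgmap lines
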
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.020 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.021 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.021 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.031 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.032 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.043 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.043 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]: {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     "0": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "devices": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "/dev/loop3"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             ],
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_name": "ceph_lv0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_size": "21470642176",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "name": "ceph_lv0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "tags": {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_name": "ceph",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.crush_device_class": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.encrypted": "0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_id": "0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.vdo": "0"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             },
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "vg_name": "ceph_vg0"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         }
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     ],
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     "1": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "devices": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "/dev/loop4"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             ],
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_name": "ceph_lv1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_size": "21470642176",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "name": "ceph_lv1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "tags": {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_name": "ceph",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.crush_device_class": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.encrypted": "0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_id": "1",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.vdo": "0"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             },
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "vg_name": "ceph_vg1"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         }
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     ],
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     "2": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "devices": [
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "/dev/loop5"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             ],
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_name": "ceph_lv2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_size": "21470642176",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "name": "ceph_lv2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "tags": {
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.cluster_name": "ceph",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.crush_device_class": "",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.encrypted": "0",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osd_id": "2",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:                 "ceph.vdo": "0"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             },
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "type": "block",
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:             "vg_name": "ceph_vg2"
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:         }
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]:     ]
Oct 02 20:13:17 compute-0 unruffled_zhukovsky[467502]: }
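
This `ceph-volume lvm list --format json` output also explains the earlier "All data devices are unavailable" message: all three loop-backed LVs are already tagged to OSDs 0–2 of fsid 6019f664-a1c2-5955-8391-692cb79a59f9, so the drive group has nothing new to create. A sketch that reduces the listing to an osd → device map:

    import json

    def osd_map(lvm_list_json):
        # Map osd_id -> (backing device, lv_path, osd_fsid).
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                out[int(osd_id)] = (lv["devices"][0], lv["lv_path"],
                                    lv["tags"]["ceph.osd_fsid"])
        return out

    # With the listing above this yields:
    # {0: ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0', 'dbf9fafa-...'),
    #  1: ('/dev/loop4', '/dev/ceph_vg1/ceph_lv1', '82844b2c-...'),
    #  2: ('/dev/loop5', '/dev/ceph_vg2/ceph_lv2', 'afe0acfe-...')}
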
Oct 02 20:13:17 compute-0 systemd[1]: libpod-cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744.scope: Deactivated successfully.
Oct 02 20:13:17 compute-0 podman[467487]: 2025-10-02 20:13:17.281171769 +0000 UTC m=+1.154161906 container died cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 20:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfe847b7f516cf22f739a9c53fa5687fad675289c1695b6069b336b581ec1e3f-merged.mount: Deactivated successfully.
Oct 02 20:13:17 compute-0 podman[467487]: 2025-10-02 20:13:17.478609488 +0000 UTC m=+1.351599625 container remove cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 20:13:17 compute-0 systemd[1]: libpod-conmon-cd21e30d3db42b20939bdd55c6ccfb5baeee40d5c228e9d0d935b99ff7414744.scope: Deactivated successfully.
Oct 02 20:13:17 compute-0 sudo[467383]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:17 compute-0 sudo[467545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:17 compute-0 sudo[467545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:17 compute-0 sudo[467545]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.641 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.642 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3127MB free_disk=59.864295959472656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.642 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.643 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.719 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.720 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.720 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.720 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.720 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:13:17 compute-0 sudo[467570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:13:17 compute-0 sudo[467570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:17 compute-0 sudo[467570]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:17 compute-0 nova_compute[355794]: 2025-10-02 20:13:17.782 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:13:17 compute-0 sudo[467595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:17 compute-0 sudo[467595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:17 compute-0 sudo[467595]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:17 compute-0 sudo[467621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:13:17 compute-0 sudo[467621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:13:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396611124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:13:18 compute-0 nova_compute[355794]: 2025-10-02 20:13:18.319 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:13:18 compute-0 nova_compute[355794]: 2025-10-02 20:13:18.333 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:13:18 compute-0 nova_compute[355794]: 2025-10-02 20:13:18.365 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:13:18 compute-0 nova_compute[355794]: 2025-10-02 20:13:18.367 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:13:18 compute-0 nova_compute[355794]: 2025-10-02 20:13:18.367 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
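
The "Final resource view" is straight arithmetic over the three per-instance placement allocations logged above plus the 512 MB MEMORY_MB reservation in the inventory line: used_ram 1280 MB = 512 + 512 + 128 + 128, used_disk 4 GB = 2 + 1 + 1, used_vcpus 3, so free_vcpus = 8 − 3 = 5. Reconstructed:

    # Per-instance 'resources' dicts, copied from the allocation log lines.
    instances = [
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # d4e04444-...
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # f50e6a55-...
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # 03794a5e-...
    ]
    reserved_ram_mb = 512  # MEMORY_MB 'reserved' in the inventory line

    used_ram = reserved_ram_mb + sum(i["MEMORY_MB"] for i in instances)  # 1280 MB
    used_disk = sum(i["DISK_GB"] for i in instances)                     # 4 GB
    used_vcpus = sum(i["VCPU"] for i in instances)                       # 3 of 8
    print(used_ram, used_disk, used_vcpus, 8 - used_vcpus)  # 1280 4 3 5
    print(int(8 * 4.0))  # 32 VCPUs schedulable under the 4.0 allocation_ratio
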
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.604941918 +0000 UTC m=+0.105227817 container create 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.567991613 +0000 UTC m=+0.068277572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:18 compute-0 systemd[1]: Started libpod-conmon-32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b.scope.
Oct 02 20:13:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.770061705 +0000 UTC m=+0.270347644 container init 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.788682807 +0000 UTC m=+0.288968706 container start 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.796644527 +0000 UTC m=+0.296930426 container attach 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:13:18 compute-0 adoring_wescoff[467720]: 167 167
Oct 02 20:13:18 compute-0 systemd[1]: libpod-32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b.scope: Deactivated successfully.
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.810839671 +0000 UTC m=+0.311125560 container died 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd3f16172724c2040e363edc256115d494be04b2760ba4672ccc71f9e9f2c2e-merged.mount: Deactivated successfully.
Oct 02 20:13:18 compute-0 podman[467704]: 2025-10-02 20:13:18.899795379 +0000 UTC m=+0.400081268 container remove 32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:13:18 compute-0 systemd[1]: libpod-conmon-32386b3b233a96d8ee27fd735edbed3e16221b5ba9cacde979f21a923cc1927b.scope: Deactivated successfully.
Oct 02 20:13:18 compute-0 ceph-mon[191910]: pgmap v2057: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2396611124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:13:19 compute-0 podman[467743]: 2025-10-02 20:13:19.159483721 +0000 UTC m=+0.049234610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:13:19 compute-0 podman[467743]: 2025-10-02 20:13:19.261194725 +0000 UTC m=+0.150945624 container create 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:13:19 compute-0 systemd[1]: Started libpod-conmon-576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822.scope.
Oct 02 20:13:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e052dc8121d70fd5bfce24ed0f23a5dca04a9986642aa2eb93ec0550df089/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e052dc8121d70fd5bfce24ed0f23a5dca04a9986642aa2eb93ec0550df089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e052dc8121d70fd5bfce24ed0f23a5dca04a9986642aa2eb93ec0550df089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e052dc8121d70fd5bfce24ed0f23a5dca04a9986642aa2eb93ec0550df089/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:13:19 compute-0 podman[467743]: 2025-10-02 20:13:19.554887935 +0000 UTC m=+0.444638904 container init 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:13:19 compute-0 podman[467743]: 2025-10-02 20:13:19.577758348 +0000 UTC m=+0.467509247 container start 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:13:19 compute-0 podman[467743]: 2025-10-02 20:13:19.592841086 +0000 UTC m=+0.482591985 container attach 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:13:19 compute-0 nova_compute[355794]: 2025-10-02 20:13:19.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:19 compute-0 nova_compute[355794]: 2025-10-02 20:13:19.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:13:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547993395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:13:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:13:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547993395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:13:20 compute-0 silly_wilbur[467759]: {
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_id": 1,
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "type": "bluestore"
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     },
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_id": 2,
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "type": "bluestore"
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     },
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_id": 0,
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:         "type": "bluestore"
Oct 02 20:13:20 compute-0 silly_wilbur[467759]:     }
Oct 02 20:13:20 compute-0 silly_wilbur[467759]: }
Oct 02 20:13:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:20 compute-0 systemd[1]: libpod-576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822.scope: Deactivated successfully.
Oct 02 20:13:20 compute-0 systemd[1]: libpod-576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822.scope: Consumed 1.109s CPU time.
Oct 02 20:13:20 compute-0 podman[467793]: 2025-10-02 20:13:20.802327651 +0000 UTC m=+0.062449399 container died 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f4e052dc8121d70fd5bfce24ed0f23a5dca04a9986642aa2eb93ec0550df089-merged.mount: Deactivated successfully.
Oct 02 20:13:20 compute-0 podman[467793]: 2025-10-02 20:13:20.934776415 +0000 UTC m=+0.194898133 container remove 576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:13:20 compute-0 systemd[1]: libpod-conmon-576d4bec5542a616adf949c83946f2a91534f052a1fa8361a4a4200ebc9d4822.scope: Deactivated successfully.
Oct 02 20:13:20 compute-0 ceph-mon[191910]: pgmap v2058: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2547993395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:13:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2547993395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:13:21 compute-0 sudo[467621]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:13:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:13:21 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:21 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c0e6ae68-5938-4f16-9bbc-b5a4bc565a67 does not exist
Oct 02 20:13:21 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7b259c52-884f-46d1-84b1-9d0abf727e5f does not exist
Oct 02 20:13:21 compute-0 sudo[467805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:13:21 compute-0 sudo[467805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:21 compute-0 sudo[467805]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:21 compute-0 sudo[467830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:13:21 compute-0 sudo[467830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:13:21 compute-0 sudo[467830]: pam_unix(sudo:session): session closed for user root
Oct 02 20:13:21 compute-0 nova_compute[355794]: 2025-10-02 20:13:21.629 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:21 compute-0 nova_compute[355794]: 2025-10-02 20:13:21.631 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:22 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:13:23 compute-0 ceph-mon[191910]: pgmap v2059: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:23 compute-0 podman[467855]: 2025-10-02 20:13:23.757578971 +0000 UTC m=+0.165704584 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:13:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:24 compute-0 ceph-mon[191910]: pgmap v2060: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:24 compute-0 nova_compute[355794]: 2025-10-02 20:13:24.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:24 compute-0 nova_compute[355794]: 2025-10-02 20:13:24.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:26 compute-0 ceph-mon[191910]: pgmap v2061: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:28 compute-0 ceph-mon[191910]: pgmap v2062: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:29 compute-0 nova_compute[355794]: 2025-10-02 20:13:29.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:29 compute-0 podman[157186]: time="2025-10-02T20:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:13:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:13:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9565 "" "Go-http-client/1.1"
Oct 02 20:13:29 compute-0 nova_compute[355794]: 2025-10-02 20:13:29.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:31 compute-0 ceph-mon[191910]: pgmap v2063: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: ERROR   20:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: ERROR   20:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: ERROR   20:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: ERROR   20:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:13:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:13:31 compute-0 podman[467876]: 2025-10-02 20:13:31.711605793 +0000 UTC m=+0.112576291 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:13:31 compute-0 podman[467877]: 2025-10-02 20:13:31.748554077 +0000 UTC m=+0.149927306 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct 02 20:13:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:31 compute-0 nova_compute[355794]: 2025-10-02 20:13:31.979 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.007 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.007 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid f50e6a55-f3b5-402b-91b2-12d34386f656 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.008 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.008 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.009 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.010 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.010 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.011 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.011 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.062 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.068 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:13:32 compute-0 nova_compute[355794]: 2025-10-02 20:13:32.105 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:13:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:13:32.329 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:13:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:13:32.330 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:13:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:13:32.330 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:13:33 compute-0 ceph-mon[191910]: pgmap v2064: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:13:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:34 compute-0 nova_compute[355794]: 2025-10-02 20:13:34.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:34 compute-0 nova_compute[355794]: 2025-10-02 20:13:34.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:35 compute-0 ceph-mon[191910]: pgmap v2065: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:36 compute-0 ceph-mon[191910]: pgmap v2066: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct 02 20:13:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:38 compute-0 ceph-mon[191910]: pgmap v2067: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:39 compute-0 podman[467918]: 2025-10-02 20:13:39.731539214 +0000 UTC m=+0.149710802 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct 02 20:13:39 compute-0 podman[467919]: 2025-10-02 20:13:39.734977475 +0000 UTC m=+0.144830033 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543)
Oct 02 20:13:39 compute-0 nova_compute[355794]: 2025-10-02 20:13:39.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:39 compute-0 nova_compute[355794]: 2025-10-02 20:13:39.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:41 compute-0 ceph-mon[191910]: pgmap v2068: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:41 compute-0 podman[467956]: 2025-10-02 20:13:41.69600079 +0000 UTC m=+0.108851993 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 20:13:41 compute-0 podman[467957]: 2025-10-02 20:13:41.71075897 +0000 UTC m=+0.112866889 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:13:41 compute-0 podman[467958]: 2025-10-02 20:13:41.754521625 +0000 UTC m=+0.158420792 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:13:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:41 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 20:13:42 compute-0 ceph-mon[191910]: pgmap v2069: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 20:13:43 compute-0 podman[468016]: 2025-10-02 20:13:43.7394302 +0000 UTC m=+0.149184488 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 20:13:43 compute-0 podman[468017]: 2025-10-02 20:13:43.778624664 +0000 UTC m=+0.175037200 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:13:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:44 compute-0 nova_compute[355794]: 2025-10-02 20:13:44.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:44 compute-0 nova_compute[355794]: 2025-10-02 20:13:44.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:45 compute-0 ceph-mon[191910]: pgmap v2070: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:47 compute-0 ceph-mon[191910]: pgmap v2071: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:49 compute-0 ceph-mon[191910]: pgmap v2072: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:49 compute-0 nova_compute[355794]: 2025-10-02 20:13:49.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:49 compute-0 nova_compute[355794]: 2025-10-02 20:13:49.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:50 compute-0 ceph-mon[191910]: pgmap v2073: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:53 compute-0 ceph-mon[191910]: pgmap v2074: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:54 compute-0 podman[468059]: 2025-10-02 20:13:54.730767117 +0000 UTC m=+0.135634580 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:13:54 compute-0 nova_compute[355794]: 2025-10-02 20:13:54.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:54 compute-0 nova_compute[355794]: 2025-10-02 20:13:54.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:55 compute-0 ceph-mon[191910]: pgmap v2075: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:13:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:57 compute-0 ceph-mon[191910]: pgmap v2076: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:59 compute-0 ceph-mon[191910]: pgmap v2077: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:13:59 compute-0 podman[157186]: time="2025-10-02T20:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:13:59 compute-0 nova_compute[355794]: 2025-10-02 20:13:59.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:13:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9561 "" "Go-http-client/1.1"
Oct 02 20:13:59 compute-0 nova_compute[355794]: 2025-10-02 20:13:59.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:13:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:00 compute-0 ceph-mon[191910]: pgmap v2078: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:01 compute-0 openstack_network_exporter[372736]: ERROR   20:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:14:01 compute-0 openstack_network_exporter[372736]: ERROR   20:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:14:01 compute-0 openstack_network_exporter[372736]: ERROR   20:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:14:01 compute-0 openstack_network_exporter[372736]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:14:01 compute-0 openstack_network_exporter[372736]: ERROR   20:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
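The ERROR lines above are the exporter probing for control sockets that do not exist on a compute node: ovn-northd only runs on control-plane hosts, and the dpif-netdev/* appctl commands apply only to the userspace (netdev) datapath, so with the kernel datapath they fail with "please specify an existing datapath". A quick way to confirm which control sockets are actually present, assuming the conventional runtime directories:

    import glob
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control socket")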
Oct 02 20:14:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:02 compute-0 podman[468076]: 2025-10-02 20:14:02.715745946 +0000 UTC m=+0.125146083 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:14:02 compute-0 podman[468077]: 2025-10-02 20:14:02.727896917 +0000 UTC m=+0.134222993 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:14:02 compute-0 ceph-mon[191910]: pgmap v2079: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:14:03
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'backups', 'vms']
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:14:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:14:04 compute-0 nova_compute[355794]: 2025-10-02 20:14:04.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:04 compute-0 nova_compute[355794]: 2025-10-02 20:14:04.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:04 compute-0 ceph-mon[191910]: pgmap v2080: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:07 compute-0 ceph-mon[191910]: pgmap v2081: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:09 compute-0 ceph-mon[191910]: pgmap v2082: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:09 compute-0 nova_compute[355794]: 2025-10-02 20:14:09.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:09 compute-0 nova_compute[355794]: 2025-10-02 20:14:09.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:10 compute-0 podman[468116]: 2025-10-02 20:14:10.700983342 +0000 UTC m=+0.118489657 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.717606) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050717702, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1166, "num_deletes": 256, "total_data_size": 1813504, "memory_usage": 1846944, "flush_reason": "Manual Compaction"}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 02 20:14:10 compute-0 podman[468117]: 2025-10-02 20:14:10.729635488 +0000 UTC m=+0.135328062 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050747974, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1786968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41713, "largest_seqno": 42878, "table_properties": {"data_size": 1781239, "index_size": 3124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11595, "raw_average_key_size": 19, "raw_value_size": 1769936, "raw_average_value_size": 2949, "num_data_blocks": 140, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759435929, "oldest_key_time": 1759435929, "file_creation_time": 1759436050, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 30469 microseconds, and 9409 cpu microseconds.
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.748073) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1786968 bytes OK
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.748104) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.762709) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.762734) EVENT_LOG_v1 {"time_micros": 1759436050762726, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.762758) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1808158, prev total WAL file size 1808158, number of live WAL files 2.
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.764335) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353035' seq:72057594037927935, type:22 .. '6C6F676D0031373537' seq:0, type:0; will stop at (end)
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1745KB)], [98(7749KB)]
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050764466, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9721977, "oldest_snapshot_seqno": -1}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5856 keys, 9617482 bytes, temperature: kUnknown
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050900507, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9617482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9577767, "index_size": 23989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 152200, "raw_average_key_size": 25, "raw_value_size": 9471193, "raw_average_value_size": 1617, "num_data_blocks": 962, "num_entries": 5856, "num_filter_entries": 5856, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436050, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.900878) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9617482 bytes
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.957558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.4 rd, 70.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(10.8) write-amplify(5.4) OK, records in: 6380, records dropped: 524 output_compression: NoCompression
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.957619) EVENT_LOG_v1 {"time_micros": 1759436050957591, "job": 58, "event": "compaction_finished", "compaction_time_micros": 136151, "compaction_time_cpu_micros": 42725, "output_level": 6, "num_output_files": 1, "total_output_size": 9617482, "num_input_records": 6380, "num_output_records": 5856, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
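The amplification figures rocksdb prints in the compaction summary above can be reproduced from the byte counts in the surrounding event-log lines (flush job 57 wrote L0 table #100 at 1786968 bytes; compaction job 58 read input_data_size 9721977 and wrote L6 table #101 at 9617482 bytes):

    l0_in    = 1_786_968   # new L0 table #100 (the freshly flushed data)
    total_in = 9_721_977   # compaction input_data_size (L0 + L6 files)
    out      = 9_617_482   # compacted L6 table #101
    print(round(out / l0_in, 1))               # 5.4  -> write-amplify(5.4)
    print(round((total_in + out) / l0_in, 1))  # 10.8 -> read-write-amplify(10.8)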
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050958550, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436050962096, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.764037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.962467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.962476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.962479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.962482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:10 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:14:10.962485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:14:11 compute-0 ceph-mon[191910]: pgmap v2083: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:11 compute-0 nova_compute[355794]: 2025-10-02 20:14:11.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:11 compute-0 nova_compute[355794]: 2025-10-02 20:14:11.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:14:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:12 compute-0 ceph-mon[191910]: pgmap v2084: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:12 compute-0 nova_compute[355794]: 2025-10-02 20:14:12.580 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:12 compute-0 nova_compute[355794]: 2025-10-02 20:14:12.581 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:14:12 compute-0 podman[468155]: 2025-10-02 20:14:12.701942821 +0000 UTC m=+0.116757762 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 20:14:12 compute-0 podman[468156]: 2025-10-02 20:14:12.711041271 +0000 UTC m=+0.119195216 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:14:12 compute-0 podman[468157]: 2025-10-02 20:14:12.758816992 +0000 UTC m=+0.156188303 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:14:12 compute-0 nova_compute[355794]: 2025-10-02 20:14:12.841 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:14:12 compute-0 nova_compute[355794]: 2025-10-02 20:14:12.842 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:14:12 compute-0 nova_compute[355794]: 2025-10-02 20:14:12.844 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002066825185583122 of space, bias 1.0, pg target 0.6200475556749365 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
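Every pg target in the autoscaler block above is usage_ratio x bias x 300, where 300 is consistent with this cluster's 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption; the OSD count is not printed here), then rounded to a power of two with a per-pool floor (hence '.mgr' quantizing to 1 while most pools stay at 32):

    def pg_target(usage_ratio, bias, osds=3, pg_per_osd=100):
        # sketch of the autoscaler arithmetic under the assumptions above
        return usage_ratio * bias * osds * pg_per_osd

    pg_target(0.002066825185583122, 1.0)   # 'vms'                -> 0.62004..., quantized to 32
    pg_target(5.087256625643029e-07, 4.0)  # 'cephfs.cephfs.meta' -> 0.00061..., quantized to 16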
Oct 02 20:14:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.051 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.069 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.071 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.074 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.075 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.077 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.122 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.124 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.125 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.127 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.128 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:14:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:14:14 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022295709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.630 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
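The resource audit gathers Ceph capacity by shelling out rather than through librados, as the CMD line above shows. The logged command is reproducible as a plain subprocess call (same flags as above; the field name follows the ceph df JSON schema):

    import json, subprocess
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"])  # cluster-wide free bytes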
Oct 02 20:14:14 compute-0 podman[468235]: 2025-10-02 20:14:14.697697702 +0000 UTC m=+0.124763492 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc.)
Oct 02 20:14:14 compute-0 podman[468236]: 2025-10-02 20:14:14.728908786 +0000 UTC m=+0.143423485 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.733 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.734 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.734 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.738 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.739 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.743 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.743 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:14 compute-0 nova_compute[355794]: 2025-10-02 20:14:14.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:14 compute-0 ceph-mon[191910]: pgmap v2085: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:14 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4022295709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.088 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.089 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3207MB free_disk=59.864295959472656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.089 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.090 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.161 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.161 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.161 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.162 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.162 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.221 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:14:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:14:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835179247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.737 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.753 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.772 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
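Placement derives schedulable capacity from this inventory as (total - reserved) x allocation_ratio per resource class, which is why the 3 instances above fit comfortably on 8 physical vCPUs:

    inventory = {  # (total, reserved, allocation_ratio) from the inventory data above
        "VCPU":      (8,    0,   4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB":   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2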
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.775 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:14:15 compute-0 nova_compute[355794]: 2025-10-02 20:14:15.775 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:14:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/835179247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:14:16 compute-0 ceph-mon[191910]: pgmap v2086: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:17 compute-0 nova_compute[355794]: 2025-10-02 20:14:17.273 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:17 compute-0 nova_compute[355794]: 2025-10-02 20:14:17.274 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:17 compute-0 nova_compute[355794]: 2025-10-02 20:14:17.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:19 compute-0 ceph-mon[191910]: pgmap v2087: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:19 compute-0 nova_compute[355794]: 2025-10-02 20:14:19.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:19 compute-0 nova_compute[355794]: 2025-10-02 20:14:19.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:14:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71246987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:14:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:14:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71246987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:14:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:21 compute-0 ceph-mon[191910]: pgmap v2088: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/71246987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:14:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/71246987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
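[annotation] The mon_command dispatches above come from a second client at 192.168.122.10 (likely the Cinder host, given the "volumes" pool) issuing `df` and `osd pool get-quota` as JSON command buffers. That is the wire form behind every handle_command line in this log; a minimal sketch with the python rados bindings, assuming python3-rados is installed and the client.openstack keyring is usable:

    import json
    import rados

    # Connect exactly as the audited client does: client.openstack + ceph.conf.
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        # mon_command takes a JSON-encoded command buffer, matching the
        # mon_command({"prefix": "osd pool get-quota", ...}) dispatch above.
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        print(ret, json.loads(out))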
Oct 02 20:14:21 compute-0 sudo[468300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:21 compute-0 sudo[468300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:21 compute-0 sudo[468300]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:21 compute-0 nova_compute[355794]: 2025-10-02 20:14:21.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:21 compute-0 sudo[468325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:14:21 compute-0 sudo[468325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:21 compute-0 sudo[468325]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:21 compute-0 sudo[468350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:21 compute-0 sudo[468350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:21 compute-0 sudo[468350]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:21 compute-0 sudo[468375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 20:14:21 compute-0 sudo[468375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:22 compute-0 sudo[468375]: pam_unix(sudo:session): session closed for user root
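[annotation] The sudo quartet above (/bin/true, which python3, /bin/true again, then the copied cephadm binary with check-host) is the orchestrator's standard pre-flight over its SSH session: prove sudo works, locate an interpreter, then run the host health probe. The last step can be reproduced by hand; a sketch that re-runs the exact logged command (fsid directory and cephadm.<sha256> filename copied verbatim from the log, so they are deployment-specific):

    import subprocess

    # Re-run the same check-host probe the cephadm mgr module just ran as root.
    subprocess.run(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/"
         "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
         "--timeout", "895", "check-host"],
        check=True)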
Oct 02 20:14:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:14:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:14:22 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:22 compute-0 sudo[468418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:22 compute-0 sudo[468418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:22 compute-0 sudo[468418]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:22 compute-0 sudo[468443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:14:22 compute-0 sudo[468443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:22 compute-0 sudo[468443]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:22 compute-0 nova_compute[355794]: 2025-10-02 20:14:22.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:14:22 compute-0 sudo[468468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:22 compute-0 sudo[468468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:22 compute-0 sudo[468468]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:22 compute-0 sudo[468493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:14:22 compute-0 sudo[468493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:23 compute-0 ceph-mon[191910]: pgmap v2089: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:23 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:23 compute-0 sudo[468493]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:23 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c816a829-8932-4e24-b1dd-443858e29880 does not exist
Oct 02 20:14:23 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b13849d7-2d08-45ca-a454-9ad31f74db88 does not exist
Oct 02 20:14:23 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5f47c611-46d8-49e4-8ca9-076848134e5c does not exist
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:14:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:14:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:14:23 compute-0 sudo[468548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:23 compute-0 sudo[468548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:23 compute-0 sudo[468548]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:23 compute-0 sudo[468573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:14:23 compute-0 sudo[468573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:23 compute-0 sudo[468573]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:23 compute-0 sudo[468598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:23 compute-0 sudo[468598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:23 compute-0 sudo[468598]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:14:24 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:14:24 compute-0 sudo[468623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:14:24 compute-0 sudo[468623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.611035645 +0000 UTC m=+0.113729702 container create 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.533467478 +0000 UTC m=+0.036161525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:24 compute-0 systemd[1]: Started libpod-conmon-6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b.scope.
Oct 02 20:14:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.765571323 +0000 UTC m=+0.268265440 container init 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 20:14:24 compute-0 nova_compute[355794]: 2025-10-02 20:14:24.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.780752244 +0000 UTC m=+0.283446321 container start 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:14:24 compute-0 cranky_nobel[468704]: 167 167
Oct 02 20:14:24 compute-0 systemd[1]: libpod-6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b.scope: Deactivated successfully.
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.802286902 +0000 UTC m=+0.304980969 container attach 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 20:14:24 compute-0 podman[468688]: 2025-10-02 20:14:24.802878547 +0000 UTC m=+0.305572614 container died 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 20:14:24 compute-0 nova_compute[355794]: 2025-10-02 20:14:24.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aa673383f762648127997a75dc6447078b2fa2c2aaf3a7f274e9e476c15c691-merged.mount: Deactivated successfully.
Oct 02 20:14:25 compute-0 podman[468688]: 2025-10-02 20:14:25.074459624 +0000 UTC m=+0.577153661 container remove 6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nobel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:14:25 compute-0 ceph-mon[191910]: pgmap v2090: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:25 compute-0 systemd[1]: libpod-conmon-6393c7de2281ec24624ffbc732dddbfae73214c3cea049ecc8a42772669e4e6b.scope: Deactivated successfully.
Oct 02 20:14:25 compute-0 podman[468710]: 2025-10-02 20:14:25.220186019 +0000 UTC m=+0.373575229 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
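[annotation] The health_status=healthy event above is podman's periodic healthcheck timer executing the test configured in config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd). The same probe can be triggered on demand; a sketch:

    import subprocess

    # Ask podman to execute the container's configured healthcheck command
    # (here: /openstack/healthcheck inside the multipathd container) right now.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0
          else f"unhealthy (rc={result.returncode})")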
Oct 02 20:14:25 compute-0 podman[468750]: 2025-10-02 20:14:25.404098011 +0000 UTC m=+0.092631744 container create b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:25 compute-0 podman[468750]: 2025-10-02 20:14:25.371803769 +0000 UTC m=+0.060337582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:25 compute-0 systemd[1]: Started libpod-conmon-b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068.scope.
Oct 02 20:14:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:25 compute-0 podman[468750]: 2025-10-02 20:14:25.633685829 +0000 UTC m=+0.322219632 container init b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:25 compute-0 podman[468750]: 2025-10-02 20:14:25.652880715 +0000 UTC m=+0.341414478 container start b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 20:14:25 compute-0 podman[468750]: 2025-10-02 20:14:25.660474536 +0000 UTC m=+0.349008369 container attach b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 20:14:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:26 compute-0 ceph-mon[191910]: pgmap v2091: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:26 compute-0 cool_lovelace[468766]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:14:26 compute-0 cool_lovelace[468766]: --> relative data size: 1.0
Oct 02 20:14:26 compute-0 cool_lovelace[468766]: --> All data devices are unavailable
Oct 02 20:14:26 compute-0 systemd[1]: libpod-b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068.scope: Deactivated successfully.
Oct 02 20:14:26 compute-0 systemd[1]: libpod-b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068.scope: Consumed 1.245s CPU time.
Oct 02 20:14:27 compute-0 podman[468795]: 2025-10-02 20:14:27.041335163 +0000 UTC m=+0.056498722 container died b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 20:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70987bc66cfaabaaad11301e0cd59ce72e350877a53d4e32946ada06c3016de-merged.mount: Deactivated successfully.
Oct 02 20:14:27 compute-0 podman[468795]: 2025-10-02 20:14:27.147533405 +0000 UTC m=+0.162696964 container remove b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:27 compute-0 systemd[1]: libpod-conmon-b067c261ef048f456883437845e3ca3fcdcbf368c7ec27ec60769a74a7607068.scope: Deactivated successfully.
Oct 02 20:14:27 compute-0 sudo[468623]: pam_unix(sudo:session): session closed for user root
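[annotation] The lvm batch container above ended with "All data devices are unavailable": consistent with the three LVs already carrying prepared OSDs (the lvm list output further down shows ceph.osd_id tags 0-2), so there is nothing left for batch to create. The same batch can be previewed without side effects; a sketch mirroring the logged invocation with --report added (cephadm wraps ceph-volume in the ceph container, as the podman lines above show):

    import subprocess

    # Dry-run the batch: --report only prints what ceph-volume would create.
    # LV paths and fsid copied from the logged command.
    subprocess.run(
        ["sudo", "cephadm", "ceph-volume",
         "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--report"],
        check=True)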
Oct 02 20:14:27 compute-0 sudo[468808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:27 compute-0 sudo[468808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:27 compute-0 sudo[468808]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:27 compute-0 sudo[468833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:14:27 compute-0 sudo[468833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:27 compute-0 sudo[468833]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:27 compute-0 sudo[468858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:27 compute-0 sudo[468858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:27 compute-0 sudo[468858]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:27 compute-0 sudo[468883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:14:27 compute-0 sudo[468883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:28 compute-0 podman[468945]: 2025-10-02 20:14:28.324774439 +0000 UTC m=+0.079787986 container create 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 02 20:14:28 compute-0 podman[468945]: 2025-10-02 20:14:28.295039255 +0000 UTC m=+0.050052882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:28 compute-0 systemd[1]: Started libpod-conmon-91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8.scope.
Oct 02 20:14:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:28 compute-0 podman[468945]: 2025-10-02 20:14:28.476970275 +0000 UTC m=+0.231983912 container init 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:14:28 compute-0 podman[468945]: 2025-10-02 20:14:28.498822812 +0000 UTC m=+0.253836389 container start 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:14:28 compute-0 podman[468945]: 2025-10-02 20:14:28.507619274 +0000 UTC m=+0.262632901 container attach 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:14:28 compute-0 exciting_meninsky[468961]: 167 167
Oct 02 20:14:28 compute-0 systemd[1]: libpod-91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8.scope: Deactivated successfully.
Oct 02 20:14:28 compute-0 podman[468966]: 2025-10-02 20:14:28.587132782 +0000 UTC m=+0.048404968 container died 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-85a7e966a4c0340ebdfeb93a1ee663723419b2991282fdab7ac3a9d31f069229-merged.mount: Deactivated successfully.
Oct 02 20:14:28 compute-0 podman[468966]: 2025-10-02 20:14:28.662245004 +0000 UTC m=+0.123517160 container remove 91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:14:28 compute-0 systemd[1]: libpod-conmon-91d86a3fc5da123284e7d1995cef7e75a63a790dc27b9d71137ad7e3737edcb8.scope: Deactivated successfully.
Oct 02 20:14:28 compute-0 podman[468986]: 2025-10-02 20:14:28.93840725 +0000 UTC m=+0.081338306 container create b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:28 compute-0 ceph-mon[191910]: pgmap v2092: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:29 compute-0 podman[468986]: 2025-10-02 20:14:28.908169853 +0000 UTC m=+0.051100919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:29 compute-0 systemd[1]: Started libpod-conmon-b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb.scope.
Oct 02 20:14:29 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2358c671c5d19548f17040b2d6e647279d8283d011247062e601f3751d421298/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2358c671c5d19548f17040b2d6e647279d8283d011247062e601f3751d421298/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2358c671c5d19548f17040b2d6e647279d8283d011247062e601f3751d421298/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2358c671c5d19548f17040b2d6e647279d8283d011247062e601f3751d421298/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:29 compute-0 podman[468986]: 2025-10-02 20:14:29.101705189 +0000 UTC m=+0.244636275 container init b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:14:29 compute-0 podman[468986]: 2025-10-02 20:14:29.115849892 +0000 UTC m=+0.258780948 container start b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:14:29 compute-0 podman[468986]: 2025-10-02 20:14:29.12484308 +0000 UTC m=+0.267774136 container attach b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:14:29 compute-0 podman[157186]: time="2025-10-02T20:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:14:29 compute-0 nova_compute[355794]: 2025-10-02 20:14:29.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49067 "" "Go-http-client/1.1"
Oct 02 20:14:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9970 "" "Go-http-client/1.1"
Oct 02 20:14:29 compute-0 nova_compute[355794]: 2025-10-02 20:14:29.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]: {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     "0": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "devices": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "/dev/loop3"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             ],
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_name": "ceph_lv0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_size": "21470642176",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "name": "ceph_lv0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "tags": {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_name": "ceph",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.crush_device_class": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.encrypted": "0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_id": "0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.vdo": "0"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             },
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "vg_name": "ceph_vg0"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         }
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     ],
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     "1": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "devices": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "/dev/loop4"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             ],
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_name": "ceph_lv1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_size": "21470642176",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "name": "ceph_lv1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "tags": {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_name": "ceph",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.crush_device_class": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.encrypted": "0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_id": "1",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.vdo": "0"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             },
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "vg_name": "ceph_vg1"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         }
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     ],
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     "2": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "devices": [
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "/dev/loop5"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             ],
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_name": "ceph_lv2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_size": "21470642176",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "name": "ceph_lv2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "tags": {
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.cluster_name": "ceph",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.crush_device_class": "",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.encrypted": "0",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osd_id": "2",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:                 "ceph.vdo": "0"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             },
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "type": "block",
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:             "vg_name": "ceph_vg2"
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:         }
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]:     ]
Oct 02 20:14:30 compute-0 sweet_dewdney[469002]: }
Oct 02 20:14:30 compute-0 systemd[1]: libpod-b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb.scope: Deactivated successfully.
Oct 02 20:14:30 compute-0 conmon[469002]: conmon b395f6f56ebafcf3b6fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb.scope/container/memory.events
Oct 02 20:14:30 compute-0 podman[469011]: 2025-10-02 20:14:30.153146794 +0000 UTC m=+0.055477065 container died b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2358c671c5d19548f17040b2d6e647279d8283d011247062e601f3751d421298-merged.mount: Deactivated successfully.
Oct 02 20:14:30 compute-0 podman[469011]: 2025-10-02 20:14:30.273855699 +0000 UTC m=+0.176185890 container remove b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:14:30 compute-0 systemd[1]: libpod-conmon-b395f6f56ebafcf3b6fd0c0ffff7f02b3e5a3d04fe76d9d95312ae1a2515e1eb.scope: Deactivated successfully.
Oct 02 20:14:30 compute-0 sudo[468883]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:30 compute-0 sudo[469024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:30 compute-0 sudo[469024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:30 compute-0 sudo[469024]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:30 compute-0 sudo[469049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:14:30 compute-0 sudo[469049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:30 compute-0 sudo[469049]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:30 compute-0 sudo[469074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:30 compute-0 sudo[469074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:30 compute-0 sudo[469074]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
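The `_set_new_cache_sizes` line above (it recurs every few seconds in this log) is the mon's cache autotuner re-splitting its memory budget between the incremental osdmap cache, full osdmap cache, and kv cache against its configured memory target. A sketch that reads the options involved; `ceph config get` is the standard interface and the option names are current Ceph ones, but treat the exact set as an assumption:

```python
#!/usr/bin/env python3
"""Sketch: read the mon memory options that drive the cache autotuner
whose inc_alloc/full_alloc/kv_alloc output appears in the log above."""
import subprocess


def conf(who: str, opt: str) -> str:
    out = subprocess.run(["ceph", "config", "get", who, opt],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()


if __name__ == "__main__":
    for opt in ("mon_memory_target", "mon_osd_cache_size_min"):
        print(opt, "=", conf("mon", opt))
```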
Oct 02 20:14:30 compute-0 sudo[469099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:14:30 compute-0 sudo[469099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
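The sudo line above shows the orchestrator's call pattern: the mgr copies a versioned cephadm binary under /var/lib/ceph/<fsid>/ and runs `ceph-volume raw list` inside the Ceph container image, with a 895 s timeout. A hedged sketch of driving the same call from Python; the bare `cephadm` entry point stands in for the versioned copy seen in the log, and image/fsid values are lifted from the log line:

```python
#!/usr/bin/env python3
"""Sketch of the invocation logged above: cephadm shelling out to
`ceph-volume raw list --format json` inside the Ceph container."""
import json
import subprocess

FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")


def raw_list(timeout: int = 895) -> dict:
    # Mirrors the logged argument order: global --image/--timeout flags,
    # then the ceph-volume subcommand with everything after -- passed through.
    cmd = [
        "cephadm", "--image", IMAGE, "--timeout", str(timeout),
        "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True, timeout=timeout + 30)
    return json.loads(out.stdout)


if __name__ == "__main__":
    for osd_uuid, osd in raw_list().items():
        print(osd_uuid, osd.get("osd_id"), osd.get("device"))
```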
Oct 02 20:14:30 compute-0 ceph-mon[191910]: pgmap v2093: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:31 compute-0 openstack_network_exporter[372736]: ERROR   20:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:14:31 compute-0 openstack_network_exporter[372736]: ERROR   20:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:14:31 compute-0 openstack_network_exporter[372736]: ERROR   20:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:14:31 compute-0 openstack_network_exporter[372736]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:14:31 compute-0 openstack_network_exporter[372736]: ERROR   20:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
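The exporter errors above are about missing ovs-appctl control sockets: ovn-northd does not run on a compute node, so its *.ctl socket never exists here, and the probed ovsdb-server socket is likewise absent at the path the exporter checks. A sketch that lists which control sockets are actually present; the glob patterns follow the usual runtime layout and may differ per deployment:

```python
#!/usr/bin/env python3
"""Sketch: enumerate the ovs/ovn control sockets the exporter above
probes for. Paths are the conventional /var/run locations (assumption)."""
import glob

PATTERNS = [
    "/var/run/openvswitch/*.ctl",  # ovs-vswitchd, ovsdb-server
    "/var/run/ovn/*.ctl",          # ovn-controller; northd lives on controllers
]

for pat in PATTERNS:
    hits = glob.glob(pat)
    print(pat, "->", hits or "none found")
```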
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.43696904 +0000 UTC m=+0.086090853 container create 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.399879622 +0000 UTC m=+0.049001465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:31 compute-0 systemd[1]: Started libpod-conmon-0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204.scope.
Oct 02 20:14:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.582149301 +0000 UTC m=+0.231271204 container init 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.598122003 +0000 UTC m=+0.247243826 container start 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.604900911 +0000 UTC m=+0.254022754 container attach 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:14:31 compute-0 hopeful_noyce[469181]: 167 167
Oct 02 20:14:31 compute-0 systemd[1]: libpod-0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204.scope: Deactivated successfully.
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.610973372 +0000 UTC m=+0.260095215 container died 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d088259925e9985fc8781554695281fd7e2be57437cb274c7965aab5aaeae5d0-merged.mount: Deactivated successfully.
Oct 02 20:14:31 compute-0 podman[469165]: 2025-10-02 20:14:31.692081472 +0000 UTC m=+0.341203305 container remove 0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:14:31 compute-0 systemd[1]: libpod-conmon-0f2e70e4236b1880b4bffc4912c03c629a3da98240c3023c1c8ec48796566204.scope: Deactivated successfully.
Oct 02 20:14:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:32 compute-0 podman[469205]: 2025-10-02 20:14:32.021139645 +0000 UTC m=+0.103844431 container create 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:14:32 compute-0 podman[469205]: 2025-10-02 20:14:31.972749308 +0000 UTC m=+0.055454074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:14:32 compute-0 systemd[1]: Started libpod-conmon-9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6.scope.
Oct 02 20:14:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e217b07acfd41600482683445899bc533ffc75eb21f9918c9bea5ab8476241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e217b07acfd41600482683445899bc533ffc75eb21f9918c9bea5ab8476241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e217b07acfd41600482683445899bc533ffc75eb21f9918c9bea5ab8476241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e217b07acfd41600482683445899bc533ffc75eb21f9918c9bea5ab8476241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
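The four kernel lines above fire once per bind mount on an xfs filesystem formatted without the bigtime feature: inode timestamps top out at 0x7fffffff seconds, the classic signed 32-bit time_t limit. Converting the printed ceiling to a date makes the "until 2038" claim concrete:

```python
#!/usr/bin/env python3
"""The kernel messages above report a timestamp ceiling of 0x7fffffff
seconds. xfs filesystems created with bigtime=1 extend this far past 2038."""
from datetime import datetime, timezone

LIMIT = 0x7FFFFFFF  # the value printed by the kernel above
print(datetime.fromtimestamp(LIMIT, tz=timezone.utc).isoformat())
# -> 2038-01-19T03:14:07+00:00
```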
Oct 02 20:14:32 compute-0 podman[469205]: 2025-10-02 20:14:32.207496562 +0000 UTC m=+0.290201398 container init 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 20:14:32 compute-0 podman[469205]: 2025-10-02 20:14:32.230091218 +0000 UTC m=+0.312796004 container start 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 20:14:32 compute-0 podman[469205]: 2025-10-02 20:14:32.238700016 +0000 UTC m=+0.321404792 container attach 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:14:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:14:32.330 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:14:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:14:32.334 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:14:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:14:32.336 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:14:33 compute-0 ceph-mon[191910]: pgmap v2094: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:33 compute-0 nice_morse[469221]: {
Oct 02 20:14:33 compute-0 nice_morse[469221]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_id": 1,
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "type": "bluestore"
Oct 02 20:14:33 compute-0 nice_morse[469221]:     },
Oct 02 20:14:33 compute-0 nice_morse[469221]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_id": 2,
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "type": "bluestore"
Oct 02 20:14:33 compute-0 nice_morse[469221]:     },
Oct 02 20:14:33 compute-0 nice_morse[469221]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_id": 0,
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:14:33 compute-0 nice_morse[469221]:         "type": "bluestore"
Oct 02 20:14:33 compute-0 nice_morse[469221]:     }
Oct 02 20:14:33 compute-0 nice_morse[469221]: }
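The `raw list` report above keys each BlueStore OSD by uuid and records its dm device, osd_id, and cluster fsid. A small cross-check sketch, assuming exactly the shape printed above, that flags any OSD whose ceph_fsid does not match this cluster:

```python
#!/usr/bin/env python3
"""Sketch: sanity-check `ceph-volume raw list --format json` output
against the cluster fsid seen elsewhere in this log.

Usage: ... raw list --format json | python3 check_raw_list.py
"""
import json
import sys

EXPECTED_FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"


def check(report: dict) -> int:
    bad = 0
    for osd_uuid, osd in report.items():
        ok = (osd.get("ceph_fsid") == EXPECTED_FSID
              and osd.get("type") == "bluestore")
        print(f"osd.{osd.get('osd_id')} on {osd.get('device')}: "
              f"{'ok' if ok else 'MISMATCH'}")
        bad += 0 if ok else 1
    return bad


if __name__ == "__main__":
    sys.exit(1 if check(json.load(sys.stdin)) else 0)
```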
Oct 02 20:14:33 compute-0 systemd[1]: libpod-9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6.scope: Deactivated successfully.
Oct 02 20:14:33 compute-0 podman[469205]: 2025-10-02 20:14:33.506271922 +0000 UTC m=+1.588976678 container died 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:14:33 compute-0 systemd[1]: libpod-9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6.scope: Consumed 1.258s CPU time.
Oct 02 20:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e217b07acfd41600482683445899bc533ffc75eb21f9918c9bea5ab8476241-merged.mount: Deactivated successfully.
Oct 02 20:14:33 compute-0 podman[469205]: 2025-10-02 20:14:33.608334055 +0000 UTC m=+1.691038811 container remove 9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_morse, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:14:33 compute-0 systemd[1]: libpod-conmon-9736791e87540f673fb91e276d78a72641ba919617380593ead20d2a2ebf59d6.scope: Deactivated successfully.
Oct 02 20:14:33 compute-0 sudo[469099]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:14:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:14:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 509d3eba-2dbc-4f70-bab0-8138ff44eb5e does not exist
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0595e47f-e1a4-4137-adfa-9031a72708f2 does not exist
Oct 02 20:14:33 compute-0 podman[469257]: 2025-10-02 20:14:33.672230351 +0000 UTC m=+0.107485597 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:14:33 compute-0 podman[469263]: 2025-10-02 20:14:33.704140863 +0000 UTC m=+0.142578793 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
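The health_status events above are podman periodically running each container's configured healthcheck and logging the result. A sketch of reading the same state on demand via `podman inspect` with a Go template; note the field name is an assumption that varies by podman version (older releases expose it as .State.Healthcheck.Status), and the container names are taken from the log:

```python
#!/usr/bin/env python3
"""Sketch: query the recorded health state for containers whose
health_status=healthy events appear above."""
import subprocess

CONTAINERS = ["podman_exporter", "ceilometer_agent_compute"]


def health(name: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


if __name__ == "__main__":
    for name in CONTAINERS:
        print(name, health(name))
```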
Oct 02 20:14:33 compute-0 sudo[469306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:14:33 compute-0 sudo[469306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:33 compute-0 sudo[469306]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:33 compute-0 sudo[469331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:14:33 compute-0 sudo[469331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:14:33 compute-0 sudo[469331]: pam_unix(sudo:session): session closed for user root
Oct 02 20:14:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:14:34 compute-0 ceph-mon[191910]: pgmap v2095: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:34 compute-0 nova_compute[355794]: 2025-10-02 20:14:34.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:34 compute-0 nova_compute[355794]: 2025-10-02 20:14:34.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:36 compute-0 ceph-mon[191910]: pgmap v2096: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:38 compute-0 ceph-mon[191910]: pgmap v2097: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:39 compute-0 nova_compute[355794]: 2025-10-02 20:14:39.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:39 compute-0 nova_compute[355794]: 2025-10-02 20:14:39.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:40 compute-0 ceph-mon[191910]: pgmap v2098: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:41 compute-0 podman[469357]: 2025-10-02 20:14:41.693806356 +0000 UTC m=+0.117690986 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, release-0.7.12=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:14:41 compute-0 podman[469356]: 2025-10-02 20:14:41.717559023 +0000 UTC m=+0.137103809 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:14:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:43 compute-0 ceph-mon[191910]: pgmap v2099: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:43 compute-0 podman[469391]: 2025-10-02 20:14:43.692594508 +0000 UTC m=+0.107674663 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:14:43 compute-0 podman[469392]: 2025-10-02 20:14:43.702874979 +0000 UTC m=+0.118362184 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:14:43 compute-0 podman[469393]: 2025-10-02 20:14:43.749569121 +0000 UTC m=+0.153349458 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:14:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:44 compute-0 nova_compute[355794]: 2025-10-02 20:14:44.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:44 compute-0 nova_compute[355794]: 2025-10-02 20:14:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:44 compute-0 podman[469449]: 2025-10-02 20:14:44.866274658 +0000 UTC m=+0.119578506 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:14:44 compute-0 podman[469450]: 2025-10-02 20:14:44.88264567 +0000 UTC m=+0.106815340 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:14:45 compute-0 ceph-mon[191910]: pgmap v2100: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:14:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 170 B/s wr, 1 op/s
Oct 02 20:14:47 compute-0 ceph-mon[191910]: pgmap v2101: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 170 B/s wr, 1 op/s
Oct 02 20:14:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:14:49 compute-0 ceph-mon[191910]: pgmap v2102: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:14:49 compute-0 nova_compute[355794]: 2025-10-02 20:14:49.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:49 compute-0 nova_compute[355794]: 2025-10-02 20:14:49.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:14:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:51 compute-0 ceph-mon[191910]: pgmap v2103: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:14:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 255 B/s wr, 4 op/s
Oct 02 20:14:52 compute-0 ceph-mon[191910]: pgmap v2104: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 255 B/s wr, 4 op/s
Oct 02 20:14:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 7.9 KiB/s wr, 4 op/s
Oct 02 20:14:54 compute-0 nova_compute[355794]: 2025-10-02 20:14:54.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:54 compute-0 nova_compute[355794]: 2025-10-02 20:14:54.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:54 compute-0 ceph-mon[191910]: pgmap v2105: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 7.9 KiB/s wr, 4 op/s
Oct 02 20:14:55 compute-0 podman[469491]: 2025-10-02 20:14:55.691940104 +0000 UTC m=+0.123101779 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:14:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:14:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 8.6 KiB/s wr, 4 op/s
Oct 02 20:14:57 compute-0 ceph-mon[191910]: pgmap v2106: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 8.6 KiB/s wr, 4 op/s
Oct 02 20:14:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.4 KiB/s wr, 2 op/s
Oct 02 20:14:59 compute-0 ceph-mon[191910]: pgmap v2107: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.4 KiB/s wr, 2 op/s
Oct 02 20:14:59 compute-0 podman[157186]: time="2025-10-02T20:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:14:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:14:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9561 "" "Go-http-client/1.1"
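The two GET lines above are the libpod REST API being scraped over podman's unix socket (unix:///run/podman/podman.sock, per the podman_exporter config logged earlier). A sketch of issuing the same request from the standard library, which has no native unix-socket HTTP support, so a tiny HTTPConnection subclass supplies the connected socket; the endpoint path is copied from the log:

```python
#!/usr/bin/env python3
"""Sketch: call the libpod containers/json endpoint seen in the log
above over the podman unix socket."""
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, sock_path: str):
        super().__init__("localhost")  # host is unused for unix sockets
        self.sock_path = sock_path

    def connect(self) -> None:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])
```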
Oct 02 20:14:59 compute-0 nova_compute[355794]: 2025-10-02 20:14:59.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:59 compute-0 nova_compute[355794]: 2025-10-02 20:14:59.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:14:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:15:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:01 compute-0 ceph-mon[191910]: pgmap v2108: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:15:01 compute-0 openstack_network_exporter[372736]: ERROR   20:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:15:01 compute-0 openstack_network_exporter[372736]: ERROR   20:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:15:01 compute-0 openstack_network_exporter[372736]: ERROR   20:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:15:01 compute-0 openstack_network_exporter[372736]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:15:01 compute-0 openstack_network_exporter[372736]: ERROR   20:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:15:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:15:02 compute-0 ceph-mon[191910]: pgmap v2109: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:15:03
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data']
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
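The balancer lines above show one optimizer pass: mode upmap, a 5% misplaced-object cap, the candidate pools, and "prepared 0/10 changes", i.e. the cluster is already balanced. A sketch of querying the same module from outside; `ceph balancer status` is a real command, while the JSON keys shown match recent releases and may vary:

```python
#!/usr/bin/env python3
"""Sketch: poll the balancer whose decision is logged above."""
import json
import subprocess


def balancer_status() -> dict:
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)


st = balancer_status()
print("mode:", st.get("mode"))      # "upmap" per the log above
print("active:", st.get("active"))
print("result:", st.get("optimize_result"))
```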
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.306 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.307 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.326 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343b23ec90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'name': 'te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.345 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
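[editor's note] Each "instance data:" line above is one record returned by the libvirt-based discovery step; the pollsters that follow iterate exactly these resources. A trimmed sketch, using only fields visible in the log, of filtering such records down to running instances (illustrative, not ceilometer's discovery code):

    # Records abbreviated to the fields used here.
    instances = [
        {"id": "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77", "OS-EXT-STS:vm_state": "running"},
        {"id": "03794a5e-b5ab-4b9e-8052-6de08e4c9f84", "OS-EXT-STS:vm_state": "running"},
        {"id": "f50e6a55-f3b5-402b-91b2-12d34386f656", "OS-EXT-STS:vm_state": "running"},
    ]
    running = [i["id"] for i in instances if i["OS-EXT-STS:vm_state"] == "running"]
    print(len(running))  # -> 3, matching the three instances polled below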
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:15:04.346810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.404 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.406 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.407 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:15:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.464 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 1052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.465 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.503 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.503 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:15:04.506106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.529 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.530 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.530 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.553 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.554 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.582 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.583 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
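[editor's note] A quick units check on the disk.device.usage samples above: 1073741824 bytes is exactly 1 GiB, consistent with the 1 GB root and ephemeral disks of the m1.small and m1.nano flavors in the discovery records; the small residual values (485376, 509952 bytes) belong to an additional small per-instance device (plausibly a config drive, though the log does not say).

    # Worked units check (illustrative).
    sample = 1073741824
    print(sample == 2**30)           # True
    print(sample / 1024**3, "GiB")   # 1.0 GiB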
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:15:04.586057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.589 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.590 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.590 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.591 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.591 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.592 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.592 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.594 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:15:04.594510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.598 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.599 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 9662481186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.599 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.600 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61167655094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.600 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.602 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:15:04.603012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.633 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.658 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.683 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
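[editor's note] All three power.state samples above report volume 1, which denotes a running domain (both libvirt's virDomainState and Nova's power_state conventions use 1 for "running"). A small lookup table for reading such samples, following libvirt's documented virDomainState values:

    LIBVIRT_DOMAIN_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATES[1])  # -> "running"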
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.684 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.685 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.685 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.685 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.685 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:15:04.684362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.686 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.687 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:15:04.687092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.692 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.696 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.701 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
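[editor's note] The *.delta meters above are derived, not read directly: each sample is the difference between two successive cumulative interface counter readings. A minimal sketch of the idea (illustrative; the counter-reset handling is an assumption, not ceilometer's exact logic):

    def delta(prev: int, cur: int) -> int:
        # Counters can reset (e.g. on instance reboot); treat a drop as a
        # fresh counter rather than reporting a negative delta.
        return cur - prev if cur >= prev else cur

    print(delta(10_000, 10_630))  # -> 630, like the sample for instance 03794a5e...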
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.708 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.709 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.710 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.711 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.712 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.712 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.712 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 29584384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.712 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.712 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 30829056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.713 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.715 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 podman[469514]: 2025-10-02 20:15:04.716110114 +0000 UTC m=+0.110425955 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4)
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/memory.usage volume: 43.35546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 42.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.718 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.718 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.718 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:15:04.704707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:15:04.706102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:15:04.707066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:15:04.708269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:15:04.709448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:15:04.710646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:15:04.711775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:15:04.713886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:15:04.715008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:15:04.716996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:15:04.718037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.725 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.726 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.727 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.727 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.727 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.728 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 podman[469513]: 2025-10-02 20:15:04.728853729 +0000 UTC m=+0.153229783 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.729 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.731 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 62390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.731 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/cpu volume: 196730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.731 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 333920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.732 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.733 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 1970969152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.733 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 129342858 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.733 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3130282352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.733 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 223577318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:15:04.724903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:15:04.726153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:15:04.728226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:15:04.729748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:15:04.730936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.735 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:15:04.732359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.736 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.737 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:15:04.738 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:15:04 compute-0 nova_compute[355794]: 2025-10-02 20:15:04.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:04 compute-0 nova_compute[355794]: 2025-10-02 20:15:04.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:04 compute-0 ceph-mon[191910]: pgmap v2110: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct 02 20:15:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 0 op/s
Oct 02 20:15:07 compute-0 ceph-mon[191910]: pgmap v2111: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 0 op/s
Oct 02 20:15:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Oct 02 20:15:09 compute-0 ceph-mon[191910]: pgmap v2112: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Oct 02 20:15:09 compute-0 nova_compute[355794]: 2025-10-02 20:15:09.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:09 compute-0 nova_compute[355794]: 2025-10-02 20:15:09.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:11 compute-0 ceph-mon[191910]: pgmap v2113: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:11 compute-0 nova_compute[355794]: 2025-10-02 20:15:11.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:11 compute-0 nova_compute[355794]: 2025-10-02 20:15:11.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:15:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:12 compute-0 podman[469555]: 2025-10-02 20:15:12.697923219 +0000 UTC m=+0.110430795 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 20:15:12 compute-0 podman[469556]: 2025-10-02 20:15:12.706223528 +0000 UTC m=+0.112274294 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=)
Oct 02 20:15:13 compute-0 ceph-mon[191910]: pgmap v2114: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002069813948850687 of space, bias 1.0, pg target 0.6209441846552061 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:15:13 compute-0 nova_compute[355794]: 2025-10-02 20:15:13.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:13 compute-0 nova_compute[355794]: 2025-10-02 20:15:13.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:15:13 compute-0 nova_compute[355794]: 2025-10-02 20:15:13.858 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:15:13 compute-0 nova_compute[355794]: 2025-10-02 20:15:13.859 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:15:13 compute-0 nova_compute[355794]: 2025-10-02 20:15:13.860 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:15:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:14 compute-0 ceph-mon[191910]: pgmap v2115: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:14 compute-0 podman[469594]: 2025-10-02 20:15:14.717816758 +0000 UTC m=+0.131115111 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 20:15:14 compute-0 podman[469595]: 2025-10-02 20:15:14.758593274 +0000 UTC m=+0.158494683 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:15:14 compute-0 podman[469596]: 2025-10-02 20:15:14.784923539 +0000 UTC m=+0.179837747 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:15:14 compute-0 nova_compute[355794]: 2025-10-02 20:15:14.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:14 compute-0 nova_compute[355794]: 2025-10-02 20:15:14.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:15 compute-0 podman[469655]: 2025-10-02 20:15:15.73432995 +0000 UTC m=+0.137248383 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:15:15 compute-0 podman[469654]: 2025-10-02 20:15:15.75519058 +0000 UTC m=+0.168415865 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_id=edpm, distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container)
Oct 02 20:15:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.886 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.914 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.916 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.917 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.919 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.921 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.922 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.955 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.955 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.956 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.956 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:15:16 compute-0 nova_compute[355794]: 2025-10-02 20:15:16.957 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:15:17 compute-0 ceph-mon[191910]: pgmap v2116: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:15:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611482258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.494 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.626 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.627 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.629 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.637 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.638 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.645 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 nova_compute[355794]: 2025-10-02 20:15:17.646 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:15:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/611482258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.215 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.218 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3222MB free_disk=59.86411666870117GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.218 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.219 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.335 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.336 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.337 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.338 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.339 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.408 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:15:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:15:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826492059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.893 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.908 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.928 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.932 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:15:18 compute-0 nova_compute[355794]: 2025-10-02 20:15:18.935 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:15:19 compute-0 ceph-mon[191910]: pgmap v2117: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:15:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/826492059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:15:19 compute-0 nova_compute[355794]: 2025-10-02 20:15:19.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:19 compute-0 nova_compute[355794]: 2025-10-02 20:15:19.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:15:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800249276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:15:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:15:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800249276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:15:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:21 compute-0 ceph-mon[191910]: pgmap v2118: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/800249276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:15:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/800249276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:15:21 compute-0 nova_compute[355794]: 2025-10-02 20:15:21.591 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:21 compute-0 nova_compute[355794]: 2025-10-02 20:15:21.592 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:23 compute-0 ceph-mon[191910]: pgmap v2119: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:23 compute-0 nova_compute[355794]: 2025-10-02 20:15:23.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:15:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:24 compute-0 nova_compute[355794]: 2025-10-02 20:15:24.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:24 compute-0 nova_compute[355794]: 2025-10-02 20:15:24.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:25 compute-0 ceph-mon[191910]: pgmap v2120: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:26 compute-0 ceph-mon[191910]: pgmap v2121: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:26 compute-0 podman[469742]: 2025-10-02 20:15:26.744348679 +0000 UTC m=+0.150569994 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:15:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Oct 02 20:15:29 compute-0 ceph-mon[191910]: pgmap v2122: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Oct 02 20:15:29 compute-0 podman[157186]: time="2025-10-02T20:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:15:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:15:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9567 "" "Go-http-client/1.1"
Oct 02 20:15:29 compute-0 nova_compute[355794]: 2025-10-02 20:15:29.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:29 compute-0 nova_compute[355794]: 2025-10-02 20:15:29.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:30 compute-0 ceph-mon[191910]: pgmap v2123: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:31 compute-0 openstack_network_exporter[372736]: ERROR   20:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:15:31 compute-0 openstack_network_exporter[372736]: ERROR   20:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:15:31 compute-0 openstack_network_exporter[372736]: ERROR   20:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:15:31 compute-0 openstack_network_exporter[372736]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:15:31 compute-0 openstack_network_exporter[372736]: ERROR   20:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:15:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:15:32.330 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:15:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:15:32.331 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:15:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:15:32.332 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:15:33 compute-0 ceph-mon[191910]: pgmap v2124: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:15:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:34 compute-0 sudo[469761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:34 compute-0 sudo[469761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:34 compute-0 sudo[469761]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:34 compute-0 sudo[469786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:15:34 compute-0 sudo[469786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:34 compute-0 sudo[469786]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:34 compute-0 sudo[469811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:34 compute-0 sudo[469811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:34 compute-0 sudo[469811]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:34 compute-0 sudo[469836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:15:34 compute-0 sudo[469836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:34 compute-0 nova_compute[355794]: 2025-10-02 20:15:34.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:34 compute-0 nova_compute[355794]: 2025-10-02 20:15:34.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:35 compute-0 ceph-mon[191910]: pgmap v2125: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:35 compute-0 sudo[469836]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:35 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3b76d4bf-a725-4269-8b04-6314873556d7 does not exist
Oct 02 20:15:35 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e5bfa7c0-a469-40bb-ae19-a4d32550de1c does not exist
Oct 02 20:15:35 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 39912355-6b9b-4395-8f67-fa935d8dc024 does not exist
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:15:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:15:35 compute-0 sudo[469893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:35 compute-0 sudo[469893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:35 compute-0 sudo[469893]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:35 compute-0 podman[469917]: 2025-10-02 20:15:35.635049177 +0000 UTC m=+0.109698205 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:15:35 compute-0 sudo[469933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:15:35 compute-0 sudo[469933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:35 compute-0 sudo[469933]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:35 compute-0 podman[469918]: 2025-10-02 20:15:35.675017962 +0000 UTC m=+0.137933341 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm)
Oct 02 20:15:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:35 compute-0 sudo[469982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:35 compute-0 sudo[469982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:35 compute-0 sudo[469982]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:35 compute-0 sudo[470007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:15:35 compute-0 sudo[470007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:15:36 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.471710435 +0000 UTC m=+0.077663911 container create a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.44575688 +0000 UTC m=+0.051710376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:36 compute-0 systemd[1]: Started libpod-conmon-a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6.scope.
Oct 02 20:15:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.639213165 +0000 UTC m=+0.245166661 container init a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.659774397 +0000 UTC m=+0.265727903 container start a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:15:36 compute-0 angry_khayyam[470086]: 167 167
Oct 02 20:15:36 compute-0 systemd[1]: libpod-a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6.scope: Deactivated successfully.
Oct 02 20:15:36 compute-0 conmon[470086]: conmon a8c35de87a26d661ec6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6.scope/container/memory.events
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.70991098 +0000 UTC m=+0.315864496 container attach a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:15:36 compute-0 podman[470069]: 2025-10-02 20:15:36.711023799 +0000 UTC m=+0.316977315 container died a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 20:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ab59e347312aac6dbf5de2196c82e29d3d4401e2b8985b5e3a5896e1f00633-merged.mount: Deactivated successfully.
Oct 02 20:15:37 compute-0 ceph-mon[191910]: pgmap v2126: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:37 compute-0 podman[470069]: 2025-10-02 20:15:37.310925728 +0000 UTC m=+0.916879244 container remove a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:15:37 compute-0 systemd[1]: libpod-conmon-a8c35de87a26d661ec6a209e5b1666445c82f2cfcd561a7bd70e7606ad21beb6.scope: Deactivated successfully.
Oct 02 20:15:37 compute-0 podman[470109]: 2025-10-02 20:15:37.610033291 +0000 UTC m=+0.077229269 container create f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:15:37 compute-0 podman[470109]: 2025-10-02 20:15:37.580166133 +0000 UTC m=+0.047362081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:37 compute-0 systemd[1]: Started libpod-conmon-f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79.scope.
Oct 02 20:15:37 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:37 compute-0 podman[470109]: 2025-10-02 20:15:37.929923842 +0000 UTC m=+0.397119800 container init f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:15:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:37 compute-0 podman[470109]: 2025-10-02 20:15:37.952525898 +0000 UTC m=+0.419721876 container start f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:15:37 compute-0 podman[470109]: 2025-10-02 20:15:37.96546901 +0000 UTC m=+0.432665028 container attach f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 20:15:38 compute-0 ceph-mon[191910]: pgmap v2127: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:15:39 compute-0 wizardly_taussig[470124]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:15:39 compute-0 wizardly_taussig[470124]: --> relative data size: 1.0
Oct 02 20:15:39 compute-0 wizardly_taussig[470124]: --> All data devices are unavailable
Oct 02 20:15:39 compute-0 systemd[1]: libpod-f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79.scope: Deactivated successfully.
Oct 02 20:15:39 compute-0 systemd[1]: libpod-f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79.scope: Consumed 1.344s CPU time.
Oct 02 20:15:39 compute-0 podman[470154]: 2025-10-02 20:15:39.444638061 +0000 UTC m=+0.040365136 container died f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 20:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8a953f540accd62a74e6e408ba4e4b80e0917cba7d8783ab4d8342bb59423e-merged.mount: Deactivated successfully.
Oct 02 20:15:39 compute-0 podman[470154]: 2025-10-02 20:15:39.679301613 +0000 UTC m=+0.275028658 container remove f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:15:39 compute-0 systemd[1]: libpod-conmon-f2e01f61549942fe296be596644022119644a131dd48514008e24671c49e8d79.scope: Deactivated successfully.
Oct 02 20:15:39 compute-0 sudo[470007]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:39 compute-0 nova_compute[355794]: 2025-10-02 20:15:39.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:39 compute-0 sudo[470169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:39 compute-0 sudo[470169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:39 compute-0 sudo[470169]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:39 compute-0 nova_compute[355794]: 2025-10-02 20:15:39.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 20:15:39 compute-0 sudo[470194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:15:39 compute-0 sudo[470194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:39 compute-0 sudo[470194]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:40 compute-0 sudo[470219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:40 compute-0 sudo[470219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:40 compute-0 sudo[470219]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:40 compute-0 sudo[470244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:15:40 compute-0 sudo[470244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.690298979 +0000 UTC m=+0.072777241 container create 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:15:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.66227618 +0000 UTC m=+0.044754452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:40 compute-0 systemd[1]: Started libpod-conmon-42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea.scope.
Oct 02 20:15:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.838246353 +0000 UTC m=+0.220724605 container init 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.850343372 +0000 UTC m=+0.232821624 container start 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.857414069 +0000 UTC m=+0.239892341 container attach 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 20:15:40 compute-0 stoic_wu[470322]: 167 167
Oct 02 20:15:40 compute-0 systemd[1]: libpod-42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea.scope: Deactivated successfully.
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.864090575 +0000 UTC m=+0.246568837 container died 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42bca514ab0da9b71fd78c403d4ea6ab1794d9bec6d823ebe542c67dada569c-merged.mount: Deactivated successfully.
Oct 02 20:15:40 compute-0 podman[470306]: 2025-10-02 20:15:40.955669101 +0000 UTC m=+0.338147323 container remove 42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:15:40 compute-0 systemd[1]: libpod-conmon-42fc2eaf3ab4e0131d7c15ae73d37a8e43c7583cf8f208643927dae61f021aea.scope: Deactivated successfully.
Oct 02 20:15:41 compute-0 ceph-mon[191910]: pgmap v2128: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 20:15:41 compute-0 podman[470345]: 2025-10-02 20:15:41.283670216 +0000 UTC m=+0.091435273 container create 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 20:15:41 compute-0 podman[470345]: 2025-10-02 20:15:41.256031617 +0000 UTC m=+0.063796754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:41 compute-0 systemd[1]: Started libpod-conmon-84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca.scope.
Oct 02 20:15:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fe8838775651ad05b410fef898577fb62a5d08104651944dd5e3cf95526030/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fe8838775651ad05b410fef898577fb62a5d08104651944dd5e3cf95526030/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fe8838775651ad05b410fef898577fb62a5d08104651944dd5e3cf95526030/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17fe8838775651ad05b410fef898577fb62a5d08104651944dd5e3cf95526030/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:41 compute-0 podman[470345]: 2025-10-02 20:15:41.4262953 +0000 UTC m=+0.234060367 container init 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 20:15:41 compute-0 podman[470345]: 2025-10-02 20:15:41.452000648 +0000 UTC m=+0.259765695 container start 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:15:41 compute-0 podman[470345]: 2025-10-02 20:15:41.457447192 +0000 UTC m=+0.265212239 container attach 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:15:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:42 compute-0 frosty_bartik[470361]: {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     "0": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "devices": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "/dev/loop3"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             ],
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_name": "ceph_lv0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_size": "21470642176",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "name": "ceph_lv0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "tags": {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.crush_device_class": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.encrypted": "0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_id": "0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.vdo": "0"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             },
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "vg_name": "ceph_vg0"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         }
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     ],
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     "1": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "devices": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "/dev/loop4"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             ],
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_name": "ceph_lv1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_size": "21470642176",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "name": "ceph_lv1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "tags": {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.crush_device_class": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.encrypted": "0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_id": "1",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.vdo": "0"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             },
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "vg_name": "ceph_vg1"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         }
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     ],
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     "2": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "devices": [
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "/dev/loop5"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             ],
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_name": "ceph_lv2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_size": "21470642176",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "name": "ceph_lv2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "tags": {
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.crush_device_class": "",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.encrypted": "0",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osd_id": "2",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:                 "ceph.vdo": "0"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             },
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "type": "block",
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:             "vg_name": "ceph_vg2"
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:         }
Oct 02 20:15:42 compute-0 frosty_bartik[470361]:     ]
Oct 02 20:15:42 compute-0 frosty_bartik[470361]: }
Oct 02 20:15:42 compute-0 systemd[1]: libpod-84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca.scope: Deactivated successfully.
Oct 02 20:15:42 compute-0 podman[470345]: 2025-10-02 20:15:42.276532425 +0000 UTC m=+1.084297512 container died 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-17fe8838775651ad05b410fef898577fb62a5d08104651944dd5e3cf95526030-merged.mount: Deactivated successfully.
Oct 02 20:15:42 compute-0 podman[470345]: 2025-10-02 20:15:42.499456108 +0000 UTC m=+1.307221165 container remove 84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bartik, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:15:42 compute-0 systemd[1]: libpod-conmon-84adb1b4337d3746490fb74e7a018f251fe9d2e50a2f676dd938dc0409bf3dca.scope: Deactivated successfully.
Oct 02 20:15:42 compute-0 sudo[470244]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:42 compute-0 sudo[470382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:42 compute-0 sudo[470382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:42 compute-0 sudo[470382]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:42 compute-0 sudo[470407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:15:42 compute-0 sudo[470407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:42 compute-0 sudo[470407]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:42 compute-0 sudo[470444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:42 compute-0 sudo[470444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:42 compute-0 sudo[470444]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:42 compute-0 podman[470431]: 2025-10-02 20:15:42.990231588 +0000 UTC m=+0.128743598 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 20:15:43 compute-0 podman[470432]: 2025-10-02 20:15:43.00017548 +0000 UTC m=+0.141162416 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64, container_name=kepler, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 20:15:43 compute-0 ceph-mon[191910]: pgmap v2129: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:43 compute-0 sudo[470491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:15:43 compute-0 sudo[470491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.669752098 +0000 UTC m=+0.093777745 container create 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.629559388 +0000 UTC m=+0.053585075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:43 compute-0 systemd[1]: Started libpod-conmon-031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7.scope.
Oct 02 20:15:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.863005678 +0000 UTC m=+0.287031325 container init 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.885732027 +0000 UTC m=+0.309757654 container start 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.893512063 +0000 UTC m=+0.317537720 container attach 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:15:43 compute-0 elated_volhard[470570]: 167 167
Oct 02 20:15:43 compute-0 podman[470555]: 2025-10-02 20:15:43.905584861 +0000 UTC m=+0.329610518 container died 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:15:43 compute-0 systemd[1]: libpod-031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7.scope: Deactivated successfully.
Oct 02 20:15:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc32aafbff82369fc4aec641021ba75d2d4a98c8dee98dd1e530681ad56a6321-merged.mount: Deactivated successfully.
Oct 02 20:15:44 compute-0 podman[470555]: 2025-10-02 20:15:44.043822879 +0000 UTC m=+0.467848526 container remove 031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 02 20:15:44 compute-0 systemd[1]: libpod-conmon-031265d6a9c3739d56cb6821f7870e8f58078b038f0fd1d4d254c29e5be04ce7.scope: Deactivated successfully.
Oct 02 20:15:44 compute-0 podman[470593]: 2025-10-02 20:15:44.343044444 +0000 UTC m=+0.096996761 container create a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:15:44 compute-0 podman[470593]: 2025-10-02 20:15:44.29061956 +0000 UTC m=+0.044571947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:15:44 compute-0 systemd[1]: Started libpod-conmon-a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3.scope.
Oct 02 20:15:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848da066afb4b4f58ce38e307e09d949104c67a6640cf446b41ccc3d9389b93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848da066afb4b4f58ce38e307e09d949104c67a6640cf446b41ccc3d9389b93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848da066afb4b4f58ce38e307e09d949104c67a6640cf446b41ccc3d9389b93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848da066afb4b4f58ce38e307e09d949104c67a6640cf446b41ccc3d9389b93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:15:44 compute-0 podman[470593]: 2025-10-02 20:15:44.498495246 +0000 UTC m=+0.252447633 container init a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:15:44 compute-0 podman[470593]: 2025-10-02 20:15:44.51306527 +0000 UTC m=+0.267017607 container start a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 20:15:44 compute-0 podman[470593]: 2025-10-02 20:15:44.521129973 +0000 UTC m=+0.275082280 container attach a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 20:15:44 compute-0 nova_compute[355794]: 2025-10-02 20:15:44.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:44 compute-0 nova_compute[355794]: 2025-10-02 20:15:44.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:45 compute-0 ceph-mon[191910]: pgmap v2130: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:45 compute-0 elated_hertz[470608]: {
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_id": 1,
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "type": "bluestore"
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     },
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_id": 2,
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "type": "bluestore"
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     },
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_id": 0,
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:15:45 compute-0 elated_hertz[470608]:         "type": "bluestore"
Oct 02 20:15:45 compute-0 elated_hertz[470608]:     }
Oct 02 20:15:45 compute-0 elated_hertz[470608]: }
Oct 02 20:15:45 compute-0 systemd[1]: libpod-a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3.scope: Deactivated successfully.
Oct 02 20:15:45 compute-0 podman[470593]: 2025-10-02 20:15:45.87318785 +0000 UTC m=+1.627140217 container died a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:15:45 compute-0 systemd[1]: libpod-a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3.scope: Consumed 1.085s CPU time.
Oct 02 20:15:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:46 compute-0 podman[470637]: 2025-10-02 20:15:46.123032202 +0000 UTC m=+0.531390132 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 20:15:46 compute-0 podman[470635]: 2025-10-02 20:15:46.13621238 +0000 UTC m=+0.552635853 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 20:15:46 compute-0 podman[470636]: 2025-10-02 20:15:46.146117702 +0000 UTC m=+0.561214960 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3848da066afb4b4f58ce38e307e09d949104c67a6640cf446b41ccc3d9389b93-merged.mount: Deactivated successfully.
Oct 02 20:15:46 compute-0 ceph-mon[191910]: pgmap v2131: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:46 compute-0 podman[470593]: 2025-10-02 20:15:46.391893887 +0000 UTC m=+2.145846234 container remove a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:15:46 compute-0 podman[470698]: 2025-10-02 20:15:46.434557483 +0000 UTC m=+0.513234034 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:15:46 compute-0 sudo[470491]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:15:46 compute-0 podman[470697]: 2025-10-02 20:15:46.477783783 +0000 UTC m=+0.572680872 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Oct 02 20:15:46 compute-0 systemd[1]: libpod-conmon-a76f821cfb03412c3b1a7baab7155880d05773c120e55564f8f64f5112f374e3.scope: Deactivated successfully.
Oct 02 20:15:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:15:46 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:46 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e6b6542a-8a61-4339-9854-53047b6f64c7 does not exist
Oct 02 20:15:46 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b2563c7b-fdd9-40d5-bd59-776d9a9715a9 does not exist
Oct 02 20:15:46 compute-0 sudo[470756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:15:46 compute-0 sudo[470756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:46 compute-0 sudo[470756]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:46 compute-0 sudo[470781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:15:46 compute-0 sudo[470781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:15:46 compute-0 sudo[470781]: pam_unix(sudo:session): session closed for user root
Oct 02 20:15:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:47 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:15:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:48 compute-0 ceph-mon[191910]: pgmap v2132: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:49 compute-0 nova_compute[355794]: 2025-10-02 20:15:49.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:49 compute-0 nova_compute[355794]: 2025-10-02 20:15:49.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:51 compute-0 sshd-session[470807]: Connection closed by 91.203.81.17 port 60648
Oct 02 20:15:51 compute-0 ceph-mon[191910]: pgmap v2133: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:15:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:52 compute-0 ceph-mon[191910]: pgmap v2134: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:54 compute-0 ceph-mon[191910]: pgmap v2135: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:54 compute-0 nova_compute[355794]: 2025-10-02 20:15:54.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:54 compute-0 nova_compute[355794]: 2025-10-02 20:15:54.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:15:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:57 compute-0 ceph-mon[191910]: pgmap v2136: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:57 compute-0 podman[470809]: 2025-10-02 20:15:57.737113713 +0000 UTC m=+0.141514495 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:15:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:58 compute-0 ceph-mon[191910]: pgmap v2137: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.417913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158417986, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1134, "num_deletes": 251, "total_data_size": 1684296, "memory_usage": 1706544, "flush_reason": "Manual Compaction"}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158459959, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1645962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42879, "largest_seqno": 44012, "table_properties": {"data_size": 1640465, "index_size": 2892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11681, "raw_average_key_size": 19, "raw_value_size": 1629537, "raw_average_value_size": 2757, "num_data_blocks": 130, "num_entries": 591, "num_filter_entries": 591, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436051, "oldest_key_time": 1759436051, "file_creation_time": 1759436158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 42121 microseconds, and 10133 cpu microseconds.
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.460035) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1645962 bytes OK
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.460065) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.654798) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.654853) EVENT_LOG_v1 {"time_micros": 1759436158654840, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.654884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1679059, prev total WAL file size 1679059, number of live WAL files 2.
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.657862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1607KB)], [101(9392KB)]
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158657917, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11263444, "oldest_snapshot_seqno": -1}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5933 keys, 9562099 bytes, temperature: kUnknown
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158756024, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9562099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9521886, "index_size": 24295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154454, "raw_average_key_size": 26, "raw_value_size": 9414009, "raw_average_value_size": 1586, "num_data_blocks": 969, "num_entries": 5933, "num_filter_entries": 5933, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.756867) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9562099 bytes
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.760636) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.1 rd, 96.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.7) write-amplify(5.8) OK, records in: 6447, records dropped: 514 output_compression: NoCompression
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.760669) EVENT_LOG_v1 {"time_micros": 1759436158760653, "job": 60, "event": "compaction_finished", "compaction_time_micros": 98719, "compaction_time_cpu_micros": 45966, "output_level": 6, "num_output_files": 1, "total_output_size": 9562099, "num_input_records": 6447, "num_output_records": 5933, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158763179, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436158767546, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.657332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.768662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.768671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.768675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.768678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:58 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:15:58.768681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:15:59 compute-0 podman[157186]: time="2025-10-02T20:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:15:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:15:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9564 "" "Go-http-client/1.1"
Oct 02 20:15:59 compute-0 nova_compute[355794]: 2025-10-02 20:15:59.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:59 compute-0 nova_compute[355794]: 2025-10-02 20:15:59.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:15:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:01 compute-0 ceph-mon[191910]: pgmap v2138: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:01 compute-0 openstack_network_exporter[372736]: ERROR   20:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:16:01 compute-0 openstack_network_exporter[372736]: ERROR   20:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:16:01 compute-0 openstack_network_exporter[372736]: ERROR   20:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:16:01 compute-0 openstack_network_exporter[372736]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:16:01 compute-0 openstack_network_exporter[372736]: ERROR   20:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:16:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:03 compute-0 ceph-mon[191910]: pgmap v2139: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:16:03
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'volumes']
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:04 compute-0 ceph-mon[191910]: pgmap v2140: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:16:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:16:04 compute-0 nova_compute[355794]: 2025-10-02 20:16:04.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:04 compute-0 nova_compute[355794]: 2025-10-02 20:16:04.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:05 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:06 compute-0 podman[470831]: 2025-10-02 20:16:06.69597915 +0000 UTC m=+0.125370969 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:16:06 compute-0 podman[470832]: 2025-10-02 20:16:06.710315468 +0000 UTC m=+0.121713692 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 20:16:07 compute-0 ceph-mon[191910]: pgmap v2141: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:07 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:08 compute-0 ceph-mon[191910]: pgmap v2142: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:09 compute-0 nova_compute[355794]: 2025-10-02 20:16:09.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:09 compute-0 nova_compute[355794]: 2025-10-02 20:16:09.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:09 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:10 compute-0 ceph-mon[191910]: pgmap v2143: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:11 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:13 compute-0 ceph-mon[191910]: pgmap v2144: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020698775395585076 of space, bias 1.0, pg target 0.6209632618675522 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:16:13 compute-0 nova_compute[355794]: 2025-10-02 20:16:13.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:13 compute-0 nova_compute[355794]: 2025-10-02 20:16:13.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:16:13 compute-0 podman[470875]: 2025-10-02 20:16:13.721237405 +0000 UTC m=+0.138431994 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct 02 20:16:13 compute-0 podman[470876]: 2025-10-02 20:16:13.725641241 +0000 UTC m=+0.132898298 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, version=9.4, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public)
Oct 02 20:16:13 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.894 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.895 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.896 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.897 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:16:14 compute-0 nova_compute[355794]: 2025-10-02 20:16:14.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:15 compute-0 ceph-mon[191910]: pgmap v2145: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:15 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:16 compute-0 podman[470912]: 2025-10-02 20:16:16.707515913 +0000 UTC m=+0.112382136 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 20:16:16 compute-0 podman[470913]: 2025-10-02 20:16:16.72292314 +0000 UTC m=+0.120236614 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:16:16 compute-0 podman[470916]: 2025-10-02 20:16:16.729855233 +0000 UTC m=+0.108309819 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:16:16 compute-0 podman[470914]: 2025-10-02 20:16:16.750411615 +0000 UTC m=+0.140375605 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Oct 02 20:16:16 compute-0 podman[470915]: 2025-10-02 20:16:16.801281607 +0000 UTC m=+0.187922679 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:16:17 compute-0 ceph-mon[191910]: pgmap v2146: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.526 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.544 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.545 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.547 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.548 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.548 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.550 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.587 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.587 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.588 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.589 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:16:17 compute-0 nova_compute[355794]: 2025-10-02 20:16:17.589 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:16:17 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:16:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751914236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.158 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:16:18 compute-0 ceph-mon[191910]: pgmap v2147: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3751914236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.292 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.293 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.294 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.300 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.301 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.307 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.307 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.846 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.848 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.864112854003906GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.848 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.849 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.945 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.946 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.946 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.946 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.947 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.964 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.988 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:16:18 compute-0 nova_compute[355794]: 2025-10-02 20:16:18.989 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.004 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.028 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.091 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:16:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:16:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198781101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.583 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.595 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.614 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.616 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.616 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:16:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1198781101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:19 compute-0 nova_compute[355794]: 2025-10-02 20:16:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:19 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:16:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330977672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:16:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:16:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330977672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:16:20 compute-0 ceph-mon[191910]: pgmap v2148: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/330977672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:16:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/330977672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:16:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.735213) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180735250, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 419, "num_deletes": 250, "total_data_size": 323864, "memory_usage": 332528, "flush_reason": "Manual Compaction"}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180741491, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 254855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44013, "largest_seqno": 44431, "table_properties": {"data_size": 252487, "index_size": 468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6368, "raw_average_key_size": 20, "raw_value_size": 247804, "raw_average_value_size": 789, "num_data_blocks": 21, "num_entries": 314, "num_filter_entries": 314, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436160, "oldest_key_time": 1759436160, "file_creation_time": 1759436180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 6302 microseconds, and 1443 cpu microseconds.
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.741516) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 254855 bytes OK
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.741530) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.744347) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.744359) EVENT_LOG_v1 {"time_micros": 1759436180744355, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.744398) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 321257, prev total WAL file size 321257, number of live WAL files 2.
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.745842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(248KB)], [104(9337KB)]
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180745864, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9816954, "oldest_snapshot_seqno": -1}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5745 keys, 6593444 bytes, temperature: kUnknown
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180788270, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6593444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6559106, "index_size": 18868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 150737, "raw_average_key_size": 26, "raw_value_size": 6459085, "raw_average_value_size": 1124, "num_data_blocks": 743, "num_entries": 5745, "num_filter_entries": 5745, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.788895) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6593444 bytes
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.790911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.9 rd, 153.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(64.4) write-amplify(25.9) OK, records in: 6247, records dropped: 502 output_compression: NoCompression
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.790927) EVENT_LOG_v1 {"time_micros": 1759436180790920, "job": 62, "event": "compaction_finished", "compaction_time_micros": 42895, "compaction_time_cpu_micros": 18275, "output_level": 6, "num_output_files": 1, "total_output_size": 6593444, "num_input_records": 6247, "num_output_records": 5745, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180792698, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436180795064, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.745607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.795703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.795712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.795715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.795718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:20 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:16:20.795722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:16:21 compute-0 nova_compute[355794]: 2025-10-02 20:16:21.644 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:21 compute-0 nova_compute[355794]: 2025-10-02 20:16:21.645 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:21 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:22 compute-0 nova_compute[355794]: 2025-10-02 20:16:22.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:23 compute-0 ceph-mon[191910]: pgmap v2149: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:23 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:24 compute-0 nova_compute[355794]: 2025-10-02 20:16:24.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:24 compute-0 nova_compute[355794]: 2025-10-02 20:16:24.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:25 compute-0 ceph-mon[191910]: pgmap v2150: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:25 compute-0 nova_compute[355794]: 2025-10-02 20:16:25.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:16:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:25 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:27 compute-0 ceph-mon[191910]: pgmap v2151: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:27 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:28 compute-0 podman[471057]: 2025-10-02 20:16:28.762287462 +0000 UTC m=+0.171423455 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:16:29 compute-0 ceph-mon[191910]: pgmap v2152: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:29 compute-0 podman[157186]: time="2025-10-02T20:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:16:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:16:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9553 "" "Go-http-client/1.1"
Oct 02 20:16:29 compute-0 nova_compute[355794]: 2025-10-02 20:16:29.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:29 compute-0 nova_compute[355794]: 2025-10-02 20:16:29.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:29 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:31 compute-0 ceph-mon[191910]: pgmap v2153: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:31 compute-0 openstack_network_exporter[372736]: ERROR   20:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:16:31 compute-0 openstack_network_exporter[372736]: ERROR   20:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:16:31 compute-0 openstack_network_exporter[372736]: ERROR   20:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:16:31 compute-0 openstack_network_exporter[372736]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:16:31 compute-0 openstack_network_exporter[372736]: ERROR   20:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:16:31 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:32 compute-0 ceph-mon[191910]: pgmap v2154: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:16:32.331 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:16:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:16:32.332 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:16:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:16:32.333 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:16:33 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:34 compute-0 nova_compute[355794]: 2025-10-02 20:16:34.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:34 compute-0 nova_compute[355794]: 2025-10-02 20:16:34.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:35 compute-0 ceph-mon[191910]: pgmap v2155: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:35 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:37 compute-0 ceph-mon[191910]: pgmap v2156: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:37 compute-0 podman[471077]: 2025-10-02 20:16:37.700128824 +0000 UTC m=+0.119531806 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:16:37 compute-0 podman[471078]: 2025-10-02 20:16:37.736988336 +0000 UTC m=+0.144325389 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 02 20:16:37 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:38 compute-0 ceph-mon[191910]: pgmap v2157: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:39 compute-0 nova_compute[355794]: 2025-10-02 20:16:39.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:39 compute-0 nova_compute[355794]: 2025-10-02 20:16:39.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:39 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:41 compute-0 ceph-mon[191910]: pgmap v2158: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:41 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:43 compute-0 ceph-mon[191910]: pgmap v2159: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:43 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:44 compute-0 podman[471118]: 2025-10-02 20:16:44.69436253 +0000 UTC m=+0.127355792 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:16:44 compute-0 podman[471119]: 2025-10-02 20:16:44.696568008 +0000 UTC m=+0.120354507 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=)
Oct 02 20:16:44 compute-0 nova_compute[355794]: 2025-10-02 20:16:44.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:44 compute-0 nova_compute[355794]: 2025-10-02 20:16:44.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:45 compute-0 ceph-mon[191910]: pgmap v2160: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:45 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:47 compute-0 ceph-mon[191910]: pgmap v2161: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:47 compute-0 sudo[471152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:47 compute-0 sudo[471152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:47 compute-0 sudo[471152]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:47 compute-0 sudo[471206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:16:47 compute-0 sudo[471206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:47 compute-0 sudo[471206]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:47 compute-0 podman[471177]: 2025-10-02 20:16:47.296520742 +0000 UTC m=+0.124243879 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:16:47 compute-0 podman[471176]: 2025-10-02 20:16:47.303346862 +0000 UTC m=+0.150096091 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:16:47 compute-0 podman[471178]: 2025-10-02 20:16:47.303323412 +0000 UTC m=+0.126433157 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 20:16:47 compute-0 podman[471185]: 2025-10-02 20:16:47.317572118 +0000 UTC m=+0.119340650 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:16:47 compute-0 podman[471184]: 2025-10-02 20:16:47.346286515 +0000 UTC m=+0.161384989 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:16:47 compute-0 sudo[471301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:47 compute-0 sudo[471301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:47 compute-0 sudo[471301]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:47 compute-0 sudo[471330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:16:47 compute-0 sudo[471330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:47 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:48 compute-0 sudo[471330]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:48 compute-0 ceph-mon[191910]: pgmap v2162: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2a3d74a2-f968-4108-b805-c662d3fdf1bd does not exist
Oct 02 20:16:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 6d160069-ef91-4fba-a7be-9a40b1344e13 does not exist
Oct 02 20:16:48 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d3d6d49e-2414-4542-ab3f-7b67a1772041 does not exist
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:16:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:16:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:16:48 compute-0 sudo[471385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:48 compute-0 sudo[471385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:48 compute-0 sudo[471385]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:48 compute-0 sudo[471410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:16:48 compute-0 sudo[471410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:48 compute-0 sudo[471410]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:48 compute-0 sudo[471435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:48 compute-0 sudo[471435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:48 compute-0 sudo[471435]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:48 compute-0 sudo[471460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:16:48 compute-0 sudo[471460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:16:49 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.388194575 +0000 UTC m=+0.084521242 container create 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.345058256 +0000 UTC m=+0.041384943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:49 compute-0 systemd[1]: Started libpod-conmon-392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d.scope.
Oct 02 20:16:49 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.531988099 +0000 UTC m=+0.228314796 container init 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.554209575 +0000 UTC m=+0.250536242 container start 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.561313093 +0000 UTC m=+0.257639780 container attach 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 20:16:49 compute-0 exciting_panini[471539]: 167 167
Oct 02 20:16:49 compute-0 systemd[1]: libpod-392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d.scope: Deactivated successfully.
Oct 02 20:16:49 compute-0 conmon[471539]: conmon 392bb2c908a61a906be6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d.scope/container/memory.events
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.57107371 +0000 UTC m=+0.267400377 container died 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b7b448c246e83f3b681625cb0fbc9d107b8e14af76f9c59d9c75bbccdabbfd9-merged.mount: Deactivated successfully.
Oct 02 20:16:49 compute-0 podman[471523]: 2025-10-02 20:16:49.655673963 +0000 UTC m=+0.352000630 container remove 392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 20:16:49 compute-0 systemd[1]: libpod-conmon-392bb2c908a61a906be663bee65f10ff05db3118f26317f127209a9d215f5e3d.scope: Deactivated successfully.
Oct 02 20:16:49 compute-0 nova_compute[355794]: 2025-10-02 20:16:49.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:49 compute-0 nova_compute[355794]: 2025-10-02 20:16:49.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:49 compute-0 podman[471564]: 2025-10-02 20:16:49.952775152 +0000 UTC m=+0.085251950 container create 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:16:49 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:50 compute-0 podman[471564]: 2025-10-02 20:16:49.920771988 +0000 UTC m=+0.053248796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:50 compute-0 systemd[1]: Started libpod-conmon-6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9.scope.
Oct 02 20:16:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:50 compute-0 podman[471564]: 2025-10-02 20:16:50.22061544 +0000 UTC m=+0.353092238 container init 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:16:50 compute-0 podman[471564]: 2025-10-02 20:16:50.249137882 +0000 UTC m=+0.381614690 container start 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 20:16:50 compute-0 podman[471564]: 2025-10-02 20:16:50.257204125 +0000 UTC m=+0.389680903 container attach 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 20:16:50 compute-0 ceph-mon[191910]: pgmap v2163: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:51 compute-0 elated_bardeen[471580]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:16:51 compute-0 elated_bardeen[471580]: --> relative data size: 1.0
Oct 02 20:16:51 compute-0 elated_bardeen[471580]: --> All data devices are unavailable
Oct 02 20:16:51 compute-0 systemd[1]: libpod-6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9.scope: Deactivated successfully.
Oct 02 20:16:51 compute-0 systemd[1]: libpod-6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9.scope: Consumed 1.240s CPU time.
Oct 02 20:16:51 compute-0 podman[471564]: 2025-10-02 20:16:51.559058297 +0000 UTC m=+1.691535095 container died 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:16:51 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c80183c6ad80ebf7ed3e0e84bb3ab414f790a7b68630caed7bf4b4b4beda4bef-merged.mount: Deactivated successfully.
Oct 02 20:16:52 compute-0 podman[471564]: 2025-10-02 20:16:52.299188176 +0000 UTC m=+2.431664944 container remove 6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bardeen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:16:52 compute-0 sudo[471460]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:52 compute-0 systemd[1]: libpod-conmon-6c918073d03251fde615b84b332ff1500d0ddc1e1081e549341b15852865aad9.scope: Deactivated successfully.
Oct 02 20:16:52 compute-0 sudo[471619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:52 compute-0 sudo[471619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:52 compute-0 sudo[471619]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:52 compute-0 sudo[471645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:16:52 compute-0 sudo[471645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:52 compute-0 sudo[471645]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:52 compute-0 sudo[471670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:52 compute-0 sudo[471670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:52 compute-0 sudo[471670]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:52 compute-0 sudo[471695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:16:52 compute-0 sudo[471695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:53 compute-0 ceph-mon[191910]: pgmap v2164: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.512653216 +0000 UTC m=+0.089496132 container create 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.482236034 +0000 UTC m=+0.059078960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:53 compute-0 systemd[1]: Started libpod-conmon-5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53.scope.
Oct 02 20:16:53 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.65413093 +0000 UTC m=+0.230973906 container init 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.670931773 +0000 UTC m=+0.247774669 container start 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.676474269 +0000 UTC m=+0.253317195 container attach 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:16:53 compute-0 affectionate_matsumoto[471773]: 167 167
Oct 02 20:16:53 compute-0 systemd[1]: libpod-5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53.scope: Deactivated successfully.
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.682365685 +0000 UTC m=+0.259208601 container died 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc25d8a1bb60da6dcefda852d33f98d22ffe41649d642bc88704b8f4ee2feae9-merged.mount: Deactivated successfully.
Oct 02 20:16:53 compute-0 podman[471758]: 2025-10-02 20:16:53.766288609 +0000 UTC m=+0.343131505 container remove 5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:16:53 compute-0 systemd[1]: libpod-conmon-5020212324a21f7a9368f2a524268c89dcce3742f56a0ebb11522917f7514a53.scope: Deactivated successfully.
Oct 02 20:16:53 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:54 compute-0 podman[471798]: 2025-10-02 20:16:54.043106813 +0000 UTC m=+0.091169966 container create 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:16:54 compute-0 systemd[1]: Started libpod-conmon-48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e.scope.
Oct 02 20:16:54 compute-0 podman[471798]: 2025-10-02 20:16:54.016868191 +0000 UTC m=+0.064931374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f838b8fe72dc72dfc50f8b8116ed988a29e4785404e86d6ed6aaf8901fcf4c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f838b8fe72dc72dfc50f8b8116ed988a29e4785404e86d6ed6aaf8901fcf4c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f838b8fe72dc72dfc50f8b8116ed988a29e4785404e86d6ed6aaf8901fcf4c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f838b8fe72dc72dfc50f8b8116ed988a29e4785404e86d6ed6aaf8901fcf4c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:54 compute-0 podman[471798]: 2025-10-02 20:16:54.167141916 +0000 UTC m=+0.215205109 container init 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:16:54 compute-0 podman[471798]: 2025-10-02 20:16:54.185068729 +0000 UTC m=+0.233131872 container start 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 02 20:16:54 compute-0 podman[471798]: 2025-10-02 20:16:54.191431397 +0000 UTC m=+0.239494540 container attach 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 20:16:54 compute-0 nova_compute[355794]: 2025-10-02 20:16:54.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:54 compute-0 nova_compute[355794]: 2025-10-02 20:16:54.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:55 compute-0 musing_bohr[471815]: {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     "0": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "devices": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "/dev/loop3"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             ],
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_name": "ceph_lv0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_size": "21470642176",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "name": "ceph_lv0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "tags": {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_name": "ceph",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.crush_device_class": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.encrypted": "0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_id": "0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.vdo": "0"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             },
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "vg_name": "ceph_vg0"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         }
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     ],
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     "1": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "devices": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "/dev/loop4"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             ],
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_name": "ceph_lv1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_size": "21470642176",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "name": "ceph_lv1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "tags": {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_name": "ceph",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.crush_device_class": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.encrypted": "0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_id": "1",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.vdo": "0"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             },
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "vg_name": "ceph_vg1"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         }
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     ],
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     "2": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "devices": [
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "/dev/loop5"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             ],
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_name": "ceph_lv2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_size": "21470642176",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "name": "ceph_lv2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "tags": {
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.cluster_name": "ceph",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.crush_device_class": "",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.encrypted": "0",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osd_id": "2",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:                 "ceph.vdo": "0"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             },
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "type": "block",
Oct 02 20:16:55 compute-0 musing_bohr[471815]:             "vg_name": "ceph_vg2"
Oct 02 20:16:55 compute-0 musing_bohr[471815]:         }
Oct 02 20:16:55 compute-0 musing_bohr[471815]:     ]
Oct 02 20:16:55 compute-0 musing_bohr[471815]: }
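
The JSON block the musing_bohr container printed above is evidently a `ceph-volume lvm list --format json` inventory, keyed by OSD id, with one LV entry per OSD. A minimal sketch of folding such a payload into an OSD-to-device map, assuming it has been saved to a local file (the filename lvm_list.json is hypothetical):

    # Minimal sketch: fold a `ceph-volume lvm list --format json` payload
    # (shaped like the block above: {"<osd_id>": [<lv entry>, ...], ...})
    # into an OSD-id -> block-device map. The filename is hypothetical.
    import json

    with open("lvm_list.json") as f:          # assumed local copy of the payload
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            if lv.get("type") == "block":     # lv["tags"] also carries ceph.osd_fsid etc.
                print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # expected for the data above: 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3, and so on
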
Oct 02 20:16:55 compute-0 systemd[1]: libpod-48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e.scope: Deactivated successfully.
Oct 02 20:16:55 compute-0 podman[471798]: 2025-10-02 20:16:55.113363535 +0000 UTC m=+1.161426698 container died 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:16:55 compute-0 ceph-mon[191910]: pgmap v2165: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f838b8fe72dc72dfc50f8b8116ed988a29e4785404e86d6ed6aaf8901fcf4c1-merged.mount: Deactivated successfully.
Oct 02 20:16:55 compute-0 podman[471798]: 2025-10-02 20:16:55.217217945 +0000 UTC m=+1.265281078 container remove 48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:16:55 compute-0 systemd[1]: libpod-conmon-48694ab97a1ddb323e56df576c17cb7202e270f8314061f86421d43eeaae192e.scope: Deactivated successfully.
Oct 02 20:16:55 compute-0 sudo[471695]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:55 compute-0 sudo[471835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:55 compute-0 sudo[471835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:55 compute-0 sudo[471835]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:55 compute-0 sudo[471860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:16:55 compute-0 sudo[471860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:55 compute-0 sudo[471860]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:55 compute-0 sudo[471885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:55 compute-0 sudo[471885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:55 compute-0 sudo[471885]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:55 compute-0 sudo[471910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:16:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:16:55 compute-0 sudo[471910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:55 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.315782212 +0000 UTC m=+0.104815917 container create fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.273153657 +0000 UTC m=+0.062187422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:56 compute-0 systemd[1]: Started libpod-conmon-fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc.scope.
Oct 02 20:16:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.452566321 +0000 UTC m=+0.241600026 container init fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.475047465 +0000 UTC m=+0.264081170 container start fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.484298789 +0000 UTC m=+0.273332494 container attach fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:16:56 compute-0 romantic_merkle[471991]: 167 167
Oct 02 20:16:56 compute-0 systemd[1]: libpod-fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc.scope: Deactivated successfully.
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.491228722 +0000 UTC m=+0.280262467 container died fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 20:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-140a467ce2587e193b62c8780b960e0fb33e3bfc48bac2af4cf272443c1a0438-merged.mount: Deactivated successfully.
Oct 02 20:16:56 compute-0 podman[471975]: 2025-10-02 20:16:56.562761459 +0000 UTC m=+0.351795134 container remove fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:16:56 compute-0 systemd[1]: libpod-conmon-fa5f2ae80523efafe39ade750d01f050d98c17c4ca8f356b980a17612547e5bc.scope: Deactivated successfully.
Oct 02 20:16:56 compute-0 podman[472015]: 2025-10-02 20:16:56.839542923 +0000 UTC m=+0.077267080 container create bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 20:16:56 compute-0 podman[472015]: 2025-10-02 20:16:56.813125156 +0000 UTC m=+0.050849313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:16:56 compute-0 systemd[1]: Started libpod-conmon-bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab.scope.
Oct 02 20:16:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42658b3dd0c30d7b6911ba6c8cd7a7bc399d9b4af99ff01a1ad7c8413d59c30d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42658b3dd0c30d7b6911ba6c8cd7a7bc399d9b4af99ff01a1ad7c8413d59c30d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42658b3dd0c30d7b6911ba6c8cd7a7bc399d9b4af99ff01a1ad7c8413d59c30d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42658b3dd0c30d7b6911ba6c8cd7a7bc399d9b4af99ff01a1ad7c8413d59c30d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:16:56 compute-0 podman[472015]: 2025-10-02 20:16:56.988995576 +0000 UTC m=+0.226719733 container init bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 20:16:57 compute-0 podman[472015]: 2025-10-02 20:16:57.013907034 +0000 UTC m=+0.251631161 container start bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 20:16:57 compute-0 podman[472015]: 2025-10-02 20:16:57.019184913 +0000 UTC m=+0.256909040 container attach bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 20:16:57 compute-0 ceph-mon[191910]: pgmap v2166: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:57 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]: {
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_id": 1,
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "type": "bluestore"
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     },
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_id": 2,
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "type": "bluestore"
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     },
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_id": 0,
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:         "type": "bluestore"
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]:     }
Oct 02 20:16:58 compute-0 intelligent_liskov[472031]: }
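
This second JSON block is the response to the `ceph-volume ... raw list --format json` invocation visible in the sudo COMMAND line earlier; unlike the lvm listing it is keyed by osd_uuid rather than OSD id. A small sketch, under the same hypothetical-filename assumption, of joining the two listings through the ceph.osd_fsid LV tag to confirm they describe the same three bluestore OSDs:

    # Sketch: join `raw list` (keyed by osd_uuid) with `lvm list` (keyed by
    # OSD id) through the ceph.osd_fsid tag. Filenames are hypothetical.
    import json

    raw = json.load(open("raw_list.json"))    # {"<osd_uuid>": {"osd_id": ..., "device": ...}}
    lvm = json.load(open("lvm_list.json"))    # {"<osd_id>": [<lv entry>, ...]}

    for osd_uuid, dev in raw.items():
        match = [lv for lvs in lvm.values() for lv in lvs
                 if lv["tags"].get("ceph.osd_fsid") == osd_uuid]
        # lvm tags store the OSD id as a string; raw list reports it as an int
        assert match and int(match[0]["tags"]["ceph.osd_id"]) == dev["osd_id"]
        print(dev["osd_id"], osd_uuid, dev["device"])
    # e.g. 1 82844b2c-... /dev/mapper/ceph_vg1-ceph_lv1
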
Oct 02 20:16:58 compute-0 systemd[1]: libpod-bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab.scope: Deactivated successfully.
Oct 02 20:16:58 compute-0 systemd[1]: libpod-bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab.scope: Consumed 1.243s CPU time.
Oct 02 20:16:58 compute-0 podman[472015]: 2025-10-02 20:16:58.266555887 +0000 UTC m=+1.504280054 container died bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:16:58 compute-0 ceph-mon[191910]: pgmap v2167: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-42658b3dd0c30d7b6911ba6c8cd7a7bc399d9b4af99ff01a1ad7c8413d59c30d-merged.mount: Deactivated successfully.
Oct 02 20:16:58 compute-0 podman[472015]: 2025-10-02 20:16:58.684257319 +0000 UTC m=+1.921981466 container remove bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:16:58 compute-0 systemd[1]: libpod-conmon-bb9193492853d8240778131bba67d5c795b0c8d6d383b5cbe770ed7e76359fab.scope: Deactivated successfully.
Oct 02 20:16:58 compute-0 sudo[471910]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:16:58 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:16:58 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:58 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev dc5ab9df-4e7e-4273-b3c1-2f0be8e0492f does not exist
Oct 02 20:16:58 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fb934851-1f00-4c8f-8b3e-7f65df537cec does not exist
Oct 02 20:16:59 compute-0 sudo[472074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:16:59 compute-0 sudo[472074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:59 compute-0 sudo[472074]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:59 compute-0 sudo[472100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:16:59 compute-0 sudo[472100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:16:59 compute-0 sudo[472100]: pam_unix(sudo:session): session closed for user root
Oct 02 20:16:59 compute-0 podman[472098]: 2025-10-02 20:16:59.259124728 +0000 UTC m=+0.140399735 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
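
The multipathd health_status event above embeds the container's config_data as a Python-style dict literal (single-quoted strings, bare True), not JSON, so json.loads would reject it. A minimal parsing sketch using ast.literal_eval on an abridged copy of that field:

    # Sketch: config_data in the health_status event is a Python dict
    # literal, so ast.literal_eval (not json.loads) parses it. The string
    # below is abridged from the log line above.
    import ast

    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'image': 'quay.io/podified-antelope-centos9/"
                   "openstack-multipathd:current-podified', "
                   "'net': 'host', 'privileged': True, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)
    print(cfg["image"], cfg["net"], cfg["privileged"])   # -> ...multipathd... host True
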
Oct 02 20:16:59 compute-0 podman[157186]: time="2025-10-02T20:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:16:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:16:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9555 "" "Go-http-client/1.1"
Oct 02 20:16:59 compute-0 nova_compute[355794]: 2025-10-02 20:16:59.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:59 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:16:59 compute-0 nova_compute[355794]: 2025-10-02 20:16:59.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:16:59 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:00 compute-0 ceph-mon[191910]: pgmap v2168: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: ERROR   20:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: ERROR   20:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: ERROR   20:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: ERROR   20:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:17:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:17:01 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:03 compute-0 ceph-mon[191910]: pgmap v2169: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:17:03
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images']
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:17:03 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.306 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.307 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.319 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.328 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'name': 'te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.334 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:17:04.335506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.391 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.392 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.393 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.431 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 1052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.432 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.471 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.472 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
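Each "_stats_to_sample" line in the cycle above carries one per-device sample in the form "<instance-uuid>/<meter> volume: <value>". A hedged sketch for aggregating those values per instance and meter from a captured journal (names and regex are illustrative):

import re
from collections import defaultdict

# "<uuid>/<meter> volume: <number>" as emitted by _stats_to_sample above.
SAMPLE_RE = re.compile(
    r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<value>[\d.]+)"
)

def sum_samples(lines):
    """Sum sample volumes per (instance, meter) pair."""
    totals = defaultdict(float)
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            totals[(m["instance"], m["meter"])] += float(m["value"])
    return totals

For disk.device.read.requests above, this reports 840 + 173 + 109 = 1122 read requests across the three devices of instance d4e04444-....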
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.474 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.475 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:17:04.475500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:17:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
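The interleaved ceph-mgr lines come from the rbd_support manager module: its TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler each reload schedules for the vms, volumes, backups and images pools, which is why every "load_schedules: <pool>, start_after=" line appears twice. A quick way to confirm that pairing from the captured journal (illustrative sketch, not a Ceph API):

from collections import Counter

def count_schedule_loads(lines):
    """Count load_schedules lines per pool; two per pool is expected here,
    one from each rbd_support schedule handler."""
    pools = Counter()
    for line in lines:
        if "load_schedules:" in line:
            # e.g. "... load_schedules: vms, start_after="
            pool = line.split("load_schedules:")[1].split(",")[0].strip()
            pools[pool] += 1
    return pools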
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.501 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.521 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.522 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.542 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.543 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:17:04.547366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.549 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.549 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.550 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.550 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.551 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.553 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.555 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.556 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.556 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.557 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 9662481186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.558 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.559 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61686940867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.559 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.561 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:17:04.554679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:17:04.562483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.593 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.614 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.644 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
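The power.state volume of 1 reported for all three instances corresponds to the standard libvirt domain state VIR_DOMAIN_RUNNING, consistent with the 'running' vm_state seen during discovery. A small lookup using the standard libvirt virDomainState numbering:

# Standard libvirt virDomainState values; volume 1 above means "running".
LIBVIRT_POWER_STATES = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

def power_state_name(volume: int) -> str:
    return LIBVIRT_POWER_STATES.get(int(volume), "unknown")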
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.646 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.646 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.646 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.647 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.647 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.647 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.648 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:17:04.645919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:17:04.650181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.654 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.659 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.664 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
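The "Updated heartbeat for <pollster> (<timestamp>)" lines from worker 12 echo the heartbeat that worker 14 recorded a few milliseconds earlier; for example, the 20:17:04.652 record confirms the 20:17:04.650181 heartbeat for network.incoming.bytes.delta. That delay can be extracted directly from the log; an illustrative helper (regex and name are assumptions, not ceilometer code):

import re
from datetime import datetime

HB_RE = re.compile(
    r"(?P<logged>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ DEBUG .*"
    r"Updated heartbeat for (?P<meter>[\w.]+) \((?P<hb>[\dT:.-]+)\)"
)

def heartbeat_lag_ms(line):
    """Milliseconds between a heartbeat write and its confirmation line."""
    m = HB_RE.search(line)
    if not m:
        return None
    logged = datetime.strptime(m["logged"], "%Y-%m-%d %H:%M:%S.%f")
    beat = datetime.fromisoformat(m["hb"])
    return (logged - beat).total_seconds() * 1000.0

For the network.incoming.bytes.delta line above this comes out to roughly 1.8 ms.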
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.665 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.668 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:17:04.666167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
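Note the interleaving just above: worker 12's 20:17:04.667 heartbeat confirmation is journaled after worker 14's 20:17:04.668 record, because the two agent workers write concurrently. When post-processing such a capture it can help to re-sort records on the embedded ceilometer timestamp instead of trusting journal order; a minimal sketch:

import re
from datetime import datetime

# Embedded ceilometer-side timestamp, e.g. "2025-10-02 20:17:04.668".
TS_RE = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")

def sort_by_embedded_ts(lines):
    """Stable-sort journal lines on the ceilometer-side timestamp; lines
    without one (e.g. the ceph-mgr records) sort first."""
    def key(line):
        m = TS_RE.search(line)
        if not m:
            return datetime.min
        return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
    return sorted(lines, key=key)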
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.669 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:17:04.668182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.670 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:17:04.669711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.670 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.672 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.672 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.673 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.674 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.674 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.675 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:17:04.672057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:17:04.674317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.679 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.679 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:17:04.676990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.679 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.680 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 29584384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:17:04.679053) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.680 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.680 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 30829056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.681 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.682 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.683 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.683 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.684 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.685 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.685 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:17:04.682932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.686 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:17:04.685812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.686 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.687 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.687 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.687 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.688 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.688 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.690 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.690 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/memory.usage volume: 43.35546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.690 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 42.4609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.691 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.692 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.692 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.692 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.694 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:17:04.689940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.694 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:17:04.691852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.694 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.695 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:17:04.694137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.696 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.697 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.697 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.697 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.698 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.698 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:17:04.696332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.699 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.700 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.700 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.700 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.701 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.701 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.702 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.702 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.702 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.703 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:17:04.700100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.704 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 64390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:17:04.702195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.705 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/cpu volume: 315460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:17:04.704549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.705 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 335870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
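The `cpu` samples above are Ceilometer's cumulative CPU-time meter, reported in nanoseconds, so the 64390000000 logged for instance d4e04444-... corresponds to roughly 64.4 s of guest CPU time accumulated so far. A minimal sketch of the conversion, and of the utilisation fraction a downstream consumer could derive from two successive polls (the second sample and the 30 s interval are hypothetical, not taken from this log):

```python
# Convert ceilometer's cumulative "cpu" meter (nanoseconds) to seconds
# and derive a utilisation fraction from two successive polls.
NS_PER_S = 1_000_000_000

sample_t0 = 64_390_000_000   # from the log: d4e04444-.../cpu volume
sample_t1 = 64_690_000_000   # hypothetical value at the next poll
interval_s = 30.0            # hypothetical polling interval
vcpus = 1                    # VCPU count from the placement allocation

print(f"cumulative CPU time: {sample_t0 / NS_PER_S:.2f} s")

# delta over the interval, normalised per vCPU -> fraction of one vCPU busy
util = (sample_t1 - sample_t0) / NS_PER_S / (interval_s * vcpus)
print(f"approx. utilisation over the interval: {util:.1%}")
```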
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:17:04.706643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.707 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.707 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.708 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 1970969152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.708 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 129342858 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.708 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3130282352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.709 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 223577318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:17:04.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:17:04 compute-0 nova_compute[355794]: 2025-10-02 20:17:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:04 compute-0 nova_compute[355794]: 2025-10-02 20:17:04.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:05 compute-0 ceph-mon[191910]: pgmap v2170: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:07 compute-0 ceph-mon[191910]: pgmap v2171: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:08 compute-0 podman[472143]: 2025-10-02 20:17:08.697763434 +0000 UTC m=+0.113528817 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:17:08 compute-0 podman[472144]: 2025-10-02 20:17:08.755011935 +0000 UTC m=+0.163800773 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
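The two podman entries above are periodic health-check events: `health_status=healthy` with `health_failing_streak=0` means the configured test command (the `/openstack/healthcheck ...` entry in `config_data`) exited 0. If a failing streak starts climbing, the same check can be re-run on demand; a small sketch using the container name from the log (`podman healthcheck run` is the standard podman subcommand, assumed to be available on this host):

```python
# Re-run a container's configured health check by hand and report the result.
import subprocess

name = "ceilometer_agent_compute"  # container_name from the log above
res = subprocess.run(["podman", "healthcheck", "run", name])
status = "healthy" if res.returncode == 0 else "unhealthy"
print(f"{name}: {status} (rc={res.returncode})")
```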
Oct 02 20:17:09 compute-0 ceph-mon[191910]: pgmap v2172: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:09 compute-0 nova_compute[355794]: 2025-10-02 20:17:09.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:09 compute-0 nova_compute[355794]: 2025-10-02 20:17:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:10 compute-0 ceph-mon[191910]: pgmap v2173: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:13 compute-0 ceph-mon[191910]: pgmap v2174: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020698775395585076 of space, bias 1.0, pg target 0.6209632618675522 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:17:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
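Every pg_autoscaler line above follows the same arithmetic: the raw "pg target" is the pool's share of cluster space × its bias × the cluster-wide PG budget, which for this cluster works out to 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; that constant is inferred from the logged numbers, not read from this cluster's config). The target is then quantized to a power of two subject to per-pool minimums, which is why near-empty pools still sit at 32 or 16. Re-deriving the logged targets:

```python
# Re-derive the pg_autoscaler "pg target" values logged above.
# Assumption: 3 OSDs x mon_target_pg_per_osd=100 -> a budget of 300 PGs;
# every logged target equals usage_fraction * bias * 300.
TARGET_PGS = 3 * 100

pools = {
    # name: (fraction of space used, bias) -- copied from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0020698775395585076, 1.0),
    "images":             (0.00125203744627857,   1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (used, bias) in pools.items():
    print(f"{name}: pg target = {used * bias * TARGET_PGS}")
```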
Oct 02 20:17:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:14 compute-0 nova_compute[355794]: 2025-10-02 20:17:14.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:14 compute-0 nova_compute[355794]: 2025-10-02 20:17:14.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:17:14 compute-0 podman[472186]: 2025-10-02 20:17:14.88017215 +0000 UTC m=+0.107504088 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, architecture=x86_64, name=ubi9, release-0.7.12=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0)
Oct 02 20:17:14 compute-0 podman[472185]: 2025-10-02 20:17:14.887148144 +0000 UTC m=+0.133262348 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:17:14 compute-0 nova_compute[355794]: 2025-10-02 20:17:14.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:14 compute-0 nova_compute[355794]: 2025-10-02 20:17:14.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:15 compute-0 ceph-mon[191910]: pgmap v2175: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.579 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.621 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.622 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.623 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.623 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:17:15 compute-0 nova_compute[355794]: 2025-10-02 20:17:15.625 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:17:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:17:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488483254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.141 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.274 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.275 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.276 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.291 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.292 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.300 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.300 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.872 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.873 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3176MB free_disk=59.864112854003906GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.874 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:17:16 compute-0 nova_compute[355794]: 2025-10-02 20:17:16.874 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.059 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.062 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.073 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.075 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.078 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
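Note: this final view is consistent with the three placement allocations logged just above it. A quick cross-check, using only values copied from the surrounding log lines:

```python
# Allocations of the three "actively managed" instances reported above.
instances = [
    {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # d4e04444-...
    {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # f50e6a55-...
    {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # 03794a5e-...
]
reserved_host_memory_mb = 512  # "reserved" MEMORY_MB in the inventory line below

used_ram = sum(i["MEMORY_MB"] for i in instances) + reserved_host_memory_mb
used_disk = sum(i["DISK_GB"] for i in instances)
used_vcpus = sum(i["VCPU"] for i in instances)
assert (used_ram, used_disk, used_vcpus) == (1280, 4, 3)  # matches used_ram=1280MB used_disk=4GB used_vcpus=3
```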
Oct 02 20:17:17 compute-0 ceph-mon[191910]: pgmap v2176: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:17 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3488483254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.269 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:17:17 compute-0 podman[472268]: 2025-10-02 20:17:17.713608275 +0000 UTC m=+0.106619794 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:17:17 compute-0 podman[472265]: 2025-10-02 20:17:17.722922171 +0000 UTC m=+0.115460898 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 20:17:17 compute-0 podman[472264]: 2025-10-02 20:17:17.727225005 +0000 UTC m=+0.139075431 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:17:17 compute-0 podman[472266]: 2025-10-02 20:17:17.732091523 +0000 UTC m=+0.133526864 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Oct 02 20:17:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:17:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729229810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.773 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
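Note: the resource tracker refreshes RBD-backed disk capacity by shelling out to exactly the command logged here. A minimal sketch of reproducing that call and reading the cluster totals (the command line is verbatim from the log; the JSON field names are the standard `ceph df` layout and can vary slightly between Ceph releases):

```python
import json
import subprocess

# Same invocation as the oslo_concurrency.processutils line above;
# requires a reachable cluster and a valid /etc/ceph/ceph.conf.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)

# Cluster-wide totals; per-pool figures live under df["pools"].
stats = df["stats"]
print("total GiB:", stats["total_bytes"] / 2**30)
print("avail GiB:", stats["total_avail_bytes"] / 2**30)
```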
Oct 02 20:17:17 compute-0 podman[472267]: 2025-10-02 20:17:17.776832754 +0000 UTC m=+0.182374414 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.783 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.801 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
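Note: Placement turns such an inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the unchanged record above amounts to 32 VCPUs, 7167 MB of RAM, and ~52 GB of disk. Worked out with the numbers from the log line:

```python
# Inventory exactly as reported for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
```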
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.803 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.804 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
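Note: the acquire/release pair bracketing this 0.929 s update is oslo.concurrency's standard lock logging. A minimal sketch of the same pattern (the decorator is the public lockutils API; nova reaches it through its own synchronized wrapper, so this is illustrative, not nova's literal code):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def _update_available_resource():
    # Runs with the "compute_resources" semaphore held, which is what
    # produces the "acquired"/"released" DEBUG lines seen in this log.
    ...
```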
Oct 02 20:17:17 compute-0 nova_compute[355794]: 2025-10-02 20:17:17.804 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2729229810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:17:18 compute-0 nova_compute[355794]: 2025-10-02 20:17:18.815 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:18 compute-0 nova_compute[355794]: 2025-10-02 20:17:18.816 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:18 compute-0 nova_compute[355794]: 2025-10-02 20:17:18.817 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:17:19 compute-0 ceph-mon[191910]: pgmap v2177: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:19 compute-0 nova_compute[355794]: 2025-10-02 20:17:19.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:19 compute-0 nova_compute[355794]: 2025-10-02 20:17:19.940 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:17:19 compute-0 nova_compute[355794]: 2025-10-02 20:17:19.942 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:17:19 compute-0 nova_compute[355794]: 2025-10-02 20:17:19.943 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:17:19 compute-0 nova_compute[355794]: 2025-10-02 20:17:19.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:17:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1938145061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:17:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:17:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1938145061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:17:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:21 compute-0 ceph-mon[191910]: pgmap v2178: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1938145061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:17:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1938145061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:17:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 op/s
Oct 02 20:17:22 compute-0 nova_compute[355794]: 2025-10-02 20:17:22.163 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
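Note: the cache payload above is nova's serialized NetworkInfo model. A sketch of pulling the fixed IP out of a payload shaped like that entry (the literal below is trimmed to the fields actually read; the full object is in the log line):

```python
import json

# Trimmed copy of the network_info entry logged above (hypothetical capture).
payload = '''[{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5",
  "network": {"subnets": [{"ips": [{"address": "10.100.1.149"}]}]}}]'''

for vif in json.loads(payload):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"])  # -> f069cce3-... 10.100.1.149
```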
Oct 02 20:17:22 compute-0 nova_compute[355794]: 2025-10-02 20:17:22.176 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:17:22 compute-0 nova_compute[355794]: 2025-10-02 20:17:22.178 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:17:22 compute-0 nova_compute[355794]: 2025-10-02 20:17:22.179 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:22 compute-0 ceph-mon[191910]: pgmap v2179: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 op/s
Oct 02 20:17:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:17:24 compute-0 nova_compute[355794]: 2025-10-02 20:17:24.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:24 compute-0 nova_compute[355794]: 2025-10-02 20:17:24.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:25 compute-0 ceph-mon[191910]: pgmap v2180: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Oct 02 20:17:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.5 KiB/s wr, 5 op/s
Oct 02 20:17:26 compute-0 ceph-mon[191910]: pgmap v2181: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.5 KiB/s wr, 5 op/s
Oct 02 20:17:26 compute-0 nova_compute[355794]: 2025-10-02 20:17:26.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:26 compute-0 nova_compute[355794]: 2025-10-02 20:17:26.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:26 compute-0 nova_compute[355794]: 2025-10-02 20:17:26.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:17:26 compute-0 nova_compute[355794]: 2025-10-02 20:17:26.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:17:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:28 compute-0 ceph-mon[191910]: pgmap v2182: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:29 compute-0 podman[472365]: 2025-10-02 20:17:29.741394867 +0000 UTC m=+0.153381298 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 20:17:29 compute-0 podman[157186]: time="2025-10-02T20:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:17:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:17:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9561 "" "Go-http-client/1.1"
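Note: these two GETs are podman_exporter polling the libpod REST API over the socket mounted into it (unix:///run/podman/podman.sock per its config_data elsewhere in this log). A minimal way to issue the same container-list query from Python; the UnixHTTPConnection helper is ad-hoc, not a podman client library:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over a Unix domain socket (ad-hoc helper)."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
for c in json.loads(conn.getresponse().read()):
    print(c.get("Names"), c.get("State"))
```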
Oct 02 20:17:29 compute-0 nova_compute[355794]: 2025-10-02 20:17:29.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:29 compute-0 nova_compute[355794]: 2025-10-02 20:17:29.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:31 compute-0 ceph-mon[191910]: pgmap v2183: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:31 compute-0 openstack_network_exporter[372736]: ERROR   20:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:17:31 compute-0 openstack_network_exporter[372736]: ERROR   20:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:17:31 compute-0 openstack_network_exporter[372736]: ERROR   20:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:17:31 compute-0 openstack_network_exporter[372736]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:17:31 compute-0 openstack_network_exporter[372736]: ERROR   20:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
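Note: the exporter errors above are expected on a compute node. appctl-style calls locate a daemon by reading its pidfile and opening <name>.<pid>.ctl in the run directory, and ovn-northd only runs on controller nodes, so no socket exists here. A sketch of that discovery convention (paths are the stock OVN defaults, not taken from this log):

```python
import os

def control_socket(rundir, daemon):
    """Resolve <rundir>/<daemon>.<pid>.ctl the way ovs-appctl does."""
    with open(os.path.join(rundir, daemon + ".pid")) as f:
        pid = int(f.read().strip())
    path = os.path.join(rundir, f"{daemon}.{pid}.ctl")
    if not os.path.exists(path):
        raise FileNotFoundError(f"no control socket files found for {daemon}")
    return path

# control_socket("/var/run/ovn", "ovn-northd")  # fails here, as logged above
```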
Oct 02 20:17:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:17:32.334 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:17:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:17:32.335 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:17:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:17:32.336 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:17:33 compute-0 ceph-mon[191910]: pgmap v2184: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:17:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:17:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 8.6 KiB/s wr, 4 op/s
Oct 02 20:17:34 compute-0 nova_compute[355794]: 2025-10-02 20:17:34.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:34 compute-0 nova_compute[355794]: 2025-10-02 20:17:34.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:35 compute-0 ceph-mon[191910]: pgmap v2185: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 8.6 KiB/s wr, 4 op/s
Oct 02 20:17:35 compute-0 nova_compute[355794]: 2025-10-02 20:17:35.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:17:35 compute-0 nova_compute[355794]: 2025-10-02 20:17:35.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:17:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:17:36 compute-0 ceph-mon[191910]: pgmap v2186: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct 02 20:17:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:17:39 compute-0 ceph-mon[191910]: pgmap v2187: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct 02 20:17:39 compute-0 podman[472384]: 2025-10-02 20:17:39.722701105 +0000 UTC m=+0.125563455 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:17:39 compute-0 podman[472385]: 2025-10-02 20:17:39.747346375 +0000 UTC m=+0.137261983 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Oct 02 20:17:39 compute-0 nova_compute[355794]: 2025-10-02 20:17:39.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:39 compute-0 nova_compute[355794]: 2025-10-02 20:17:39.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:41 compute-0 ceph-mon[191910]: pgmap v2188: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 02 20:17:43 compute-0 ceph-mon[191910]: pgmap v2189: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 02 20:17:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 02 20:17:44 compute-0 nova_compute[355794]: 2025-10-02 20:17:44.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:44 compute-0 nova_compute[355794]: 2025-10-02 20:17:44.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:45 compute-0 ceph-mon[191910]: pgmap v2190: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 02 20:17:45 compute-0 podman[472426]: 2025-10-02 20:17:45.722951013 +0000 UTC m=+0.134451709 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:17:45 compute-0 podman[472427]: 2025-10-02 20:17:45.732450394 +0000 UTC m=+0.135862466 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible)
Oct 02 20:17:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:47 compute-0 ceph-mon[191910]: pgmap v2191: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:48 compute-0 podman[472462]: 2025-10-02 20:17:48.721227268 +0000 UTC m=+0.134089959 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:17:48 compute-0 podman[472463]: 2025-10-02 20:17:48.732112596 +0000 UTC m=+0.136501423 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:17:48 compute-0 podman[472472]: 2025-10-02 20:17:48.73909933 +0000 UTC m=+0.121355253 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:17:48 compute-0 podman[472464]: 2025-10-02 20:17:48.743726382 +0000 UTC m=+0.142386928 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct 02 20:17:48 compute-0 podman[472465]: 2025-10-02 20:17:48.784468587 +0000 UTC m=+0.158024591 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 20:17:49 compute-0 ceph-mon[191910]: pgmap v2192: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:49 compute-0 nova_compute[355794]: 2025-10-02 20:17:49.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:49 compute-0 nova_compute[355794]: 2025-10-02 20:17:49.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:51 compute-0 ceph-mon[191910]: pgmap v2193: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:53 compute-0 ceph-mon[191910]: pgmap v2194: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:53 compute-0 sshd[113124]: Timeout before authentication for connection from 91.203.81.17 to 38.102.83.148, pid = 470808
Oct 02 20:17:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:54 compute-0 ceph-mon[191910]: pgmap v2195: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:54 compute-0 nova_compute[355794]: 2025-10-02 20:17:54.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:54 compute-0 nova_compute[355794]: 2025-10-02 20:17:54.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:17:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:57 compute-0 ceph-mon[191910]: pgmap v2196: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct 02 20:17:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:59 compute-0 ceph-mon[191910]: pgmap v2197: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:17:59 compute-0 sudo[472565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:17:59 compute-0 sudo[472565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:17:59 compute-0 sudo[472565]: pam_unix(sudo:session): session closed for user root
Oct 02 20:17:59 compute-0 sudo[472590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:17:59 compute-0 sudo[472590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:17:59 compute-0 sudo[472590]: pam_unix(sudo:session): session closed for user root
Oct 02 20:17:59 compute-0 sudo[472615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:17:59 compute-0 sudo[472615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:17:59 compute-0 sudo[472615]: pam_unix(sudo:session): session closed for user root
Oct 02 20:17:59 compute-0 podman[157186]: time="2025-10-02T20:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:17:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:17:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9568 "" "Go-http-client/1.1"
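[Editor's note] The GET lines above are requests against podman's REST API on its unix socket; the service logs each request in access-log format, and the preceding info line notes that a 'last' query parameter overrides 'limit'. A sketch of issuing the same container-list query with only the Python standard library; the socket path is the usual rootful default and is an assumption here:

    import json
    import socket

    SOCK = "/run/podman/podman.sock"  # assumed rootful default path

    def libpod_get(path):
        # Speak plain HTTP/1.0 over the unix socket so the server closes
        # the connection and we can simply read until EOF.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(SOCK)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(65536):
            data += chunk
        s.close()
        _headers, _, body = data.partition(b"\r\n\r\n")
        return json.loads(body)

    for c in libpod_get("/v4.9.3/libpod/containers/json?all=true"):
        print(c["Id"][:12], c["Names"][0], c["State"])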
Oct 02 20:17:59 compute-0 sudo[472640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 20:17:59 compute-0 sudo[472640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:17:59 compute-0 nova_compute[355794]: 2025-10-02 20:17:59.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:59 compute-0 nova_compute[355794]: 2025-10-02 20:17:59.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:17:59 compute-0 podman[472664]: 2025-10-02 20:17:59.990228897 +0000 UTC m=+0.126895499 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:18:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:00 compute-0 podman[472751]: 2025-10-02 20:18:00.76613256 +0000 UTC m=+0.110040035 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:00 compute-0 podman[472751]: 2025-10-02 20:18:00.897784904 +0000 UTC m=+0.241692369 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:01 compute-0 ceph-mon[191910]: pgmap v2198: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:01 compute-0 openstack_network_exporter[372736]: ERROR   20:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:18:01 compute-0 openstack_network_exporter[372736]: ERROR   20:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:18:01 compute-0 openstack_network_exporter[372736]: ERROR   20:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:18:01 compute-0 openstack_network_exporter[372736]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:18:01 compute-0 openstack_network_exporter[372736]: ERROR   20:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
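[Editor's note] The exporter errors above are expected on a compute node: ovn-northd runs only on the control plane, the OVS database server's control socket is not where the exporter looks, and the dpif-netdev/* appctl commands apply only to a userspace (DPDK) datapath, which this host does not run. A hedged sketch of the probe the exporter is effectively performing: OVS/OVN daemons expose control sockets named <daemon>.<pid>.ctl under their run directories, and the collector fails when no such file exists (the paths below are conventional defaults, assumed for this host):

    import glob
    import os

    # <daemon>.<pid>.ctl is the ovs-appctl control-socket naming convention.
    for rundir, daemon in (("/run/openvswitch", "ovsdb-server"),
                           ("/run/ovn", "ovn-northd")):
        hits = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        print(daemon, "->", hits if hits else "no control socket found")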
Oct 02 20:18:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Oct 02 20:18:02 compute-0 sudo[472640]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:18:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:18:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:02 compute-0 sudo[472904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:02 compute-0 sudo[472904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:02 compute-0 sudo[472904]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:02 compute-0 sudo[472929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:18:02 compute-0 sudo[472929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:02 compute-0 sudo[472929]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:02 compute-0 sudo[472954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:02 compute-0 sudo[472954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:02 compute-0 sudo[472954]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:02 compute-0 sudo[472979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:18:02 compute-0 sudo[472979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:03 compute-0 ceph-mon[191910]: pgmap v2199: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Oct 02 20:18:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:03 compute-0 sudo[472979]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2c538469-3972-4d4c-9c11-613e3f1c4031 does not exist
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 90e75cf7-8b33-4409-b429-c952f68352e9 does not exist
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 39c6678a-818f-4199-8e7d-40a72b9cbf49 does not exist
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:18:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:18:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:18:03
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'volumes', 'images']
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
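[Editor's note] The balancer entries above are one periodic pass of the mgr's upmap optimizer: it opens a plan (auto_<timestamp>), caps misplaced objects at 5%, walks the listed pools, and "prepared 0/10 changes" means no pg-upmap adjustments were needed, i.e. placement is already balanced within threshold. A minimal sketch of checking the same state from the CLI, assuming 'ceph balancer status' returns JSON as recent releases do:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(["ceph", "balancer", "status"]))
    print("active:", status["active"])
    print("mode:", status["mode"])  # expect "upmap", matching the log
    print("last result:", status.get("optimize_result", ""))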
Oct 02 20:18:03 compute-0 sudo[473034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:03 compute-0 sudo[473034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:03 compute-0 sudo[473034]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:03 compute-0 sudo[473059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:18:03 compute-0 sudo[473059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:03 compute-0 sudo[473059]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:04 compute-0 sudo[473084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:04 compute-0 sudo[473084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:04 compute-0 sudo[473084]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:18:04 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:18:04 compute-0 sudo[473109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:18:04 compute-0 sudo[473109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:18:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.80754498 +0000 UTC m=+0.107061236 container create 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.756848963 +0000 UTC m=+0.056365259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:04 compute-0 systemd[1]: Started libpod-conmon-5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad.scope.
Oct 02 20:18:04 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:04 compute-0 nova_compute[355794]: 2025-10-02 20:18:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.961272617 +0000 UTC m=+0.260788873 container init 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 20:18:04 compute-0 nova_compute[355794]: 2025-10-02 20:18:04.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.979158229 +0000 UTC m=+0.278674445 container start 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.984829068 +0000 UTC m=+0.284345314 container attach 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:18:04 compute-0 brave_lewin[473188]: 167 167
Oct 02 20:18:04 compute-0 systemd[1]: libpod-5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad.scope: Deactivated successfully.
Oct 02 20:18:04 compute-0 podman[473172]: 2025-10-02 20:18:04.992136611 +0000 UTC m=+0.291652857 container died 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c2f81884b5c0126e0645dec8cecb58cbfc471cfd4563abef18457c679c53593-merged.mount: Deactivated successfully.
Oct 02 20:18:05 compute-0 podman[473172]: 2025-10-02 20:18:05.059422487 +0000 UTC m=+0.358938683 container remove 5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:18:05 compute-0 systemd[1]: libpod-conmon-5b8de32b3fc31f00cf87075e5f18f22da9b0a6c5343608f27899f79d989780ad.scope: Deactivated successfully.
Oct 02 20:18:05 compute-0 ceph-mon[191910]: pgmap v2200: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:05 compute-0 podman[473210]: 2025-10-02 20:18:05.368730328 +0000 UTC m=+0.092860181 container create d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:18:05 compute-0 podman[473210]: 2025-10-02 20:18:05.334826614 +0000 UTC m=+0.058956527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:05 compute-0 systemd[1]: Started libpod-conmon-d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694.scope.
Oct 02 20:18:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:05 compute-0 podman[473210]: 2025-10-02 20:18:05.574328544 +0000 UTC m=+0.298458447 container init d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:05 compute-0 podman[473210]: 2025-10-02 20:18:05.604729276 +0000 UTC m=+0.328859109 container start d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 20:18:05 compute-0 podman[473210]: 2025-10-02 20:18:05.614140724 +0000 UTC m=+0.338270587 container attach d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 02 20:18:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:06 compute-0 musing_turing[473226]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:18:06 compute-0 musing_turing[473226]: --> relative data size: 1.0
Oct 02 20:18:06 compute-0 musing_turing[473226]: --> All data devices are unavailable
Oct 02 20:18:06 compute-0 systemd[1]: libpod-d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694.scope: Deactivated successfully.
Oct 02 20:18:06 compute-0 systemd[1]: libpod-d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694.scope: Consumed 1.175s CPU time.
Oct 02 20:18:06 compute-0 podman[473210]: 2025-10-02 20:18:06.844005427 +0000 UTC m=+1.568135330 container died d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a27931896e12faf4d7d78f419868af4c1d4c6481006c0c289cfa655c92080be-merged.mount: Deactivated successfully.
Oct 02 20:18:06 compute-0 podman[473210]: 2025-10-02 20:18:06.934478974 +0000 UTC m=+1.658608807 container remove d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:18:06 compute-0 systemd[1]: libpod-conmon-d5556eb3279ddee776a2665bc0dee674d1848941d6f7afc0055b11cfc2697694.scope: Deactivated successfully.
Oct 02 20:18:06 compute-0 sudo[473109]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:07 compute-0 sudo[473266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:07 compute-0 sudo[473266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:07 compute-0 sudo[473266]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:07 compute-0 ceph-mon[191910]: pgmap v2201: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:07 compute-0 sudo[473291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:18:07 compute-0 sudo[473291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:07 compute-0 sudo[473291]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:07 compute-0 sudo[473316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:07 compute-0 sudo[473316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:07 compute-0 sudo[473316]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:07 compute-0 sudo[473341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:18:07 compute-0 sudo[473341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:08 compute-0 ceph-mon[191910]: pgmap v2202: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.221756711 +0000 UTC m=+0.097980257 container create 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.184763014 +0000 UTC m=+0.060986660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:08 compute-0 systemd[1]: Started libpod-conmon-01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b.scope.
Oct 02 20:18:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.377517631 +0000 UTC m=+0.253741187 container init 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.389038735 +0000 UTC m=+0.265262311 container start 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.396288376 +0000 UTC m=+0.272511912 container attach 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 20:18:08 compute-0 keen_feistel[473422]: 167 167
Oct 02 20:18:08 compute-0 systemd[1]: libpod-01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b.scope: Deactivated successfully.
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.398680209 +0000 UTC m=+0.274903755 container died 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b777c6513b56f551025367f0e43c784848b2be52fae9606bc8c1eb57c8680e-merged.mount: Deactivated successfully.
Oct 02 20:18:08 compute-0 podman[473406]: 2025-10-02 20:18:08.472436205 +0000 UTC m=+0.348659771 container remove 01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 20:18:08 compute-0 systemd[1]: libpod-conmon-01d65c76d817912716c5bde22aa24030a6e664599c6e90cfdde08d7d4384663b.scope: Deactivated successfully.
Oct 02 20:18:08 compute-0 podman[473445]: 2025-10-02 20:18:08.787803507 +0000 UTC m=+0.087998763 container create 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:18:08 compute-0 podman[473445]: 2025-10-02 20:18:08.752015173 +0000 UTC m=+0.052210499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:08 compute-0 systemd[1]: Started libpod-conmon-7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1.scope.
Oct 02 20:18:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52da750d0f81c1ff841685af2d8d9f8b2c3668b7750cae571cae51f3fc159724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52da750d0f81c1ff841685af2d8d9f8b2c3668b7750cae571cae51f3fc159724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52da750d0f81c1ff841685af2d8d9f8b2c3668b7750cae571cae51f3fc159724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52da750d0f81c1ff841685af2d8d9f8b2c3668b7750cae571cae51f3fc159724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:08 compute-0 podman[473445]: 2025-10-02 20:18:08.997071409 +0000 UTC m=+0.297266685 container init 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:09 compute-0 podman[473445]: 2025-10-02 20:18:09.026587848 +0000 UTC m=+0.326783114 container start 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:18:09 compute-0 podman[473445]: 2025-10-02 20:18:09.034164828 +0000 UTC m=+0.334360144 container attach 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]: {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     "0": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "devices": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "/dev/loop3"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             ],
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_name": "ceph_lv0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_size": "21470642176",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "name": "ceph_lv0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "tags": {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_name": "ceph",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.crush_device_class": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.encrypted": "0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_id": "0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.vdo": "0"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             },
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "vg_name": "ceph_vg0"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         }
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     ],
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     "1": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "devices": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "/dev/loop4"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             ],
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_name": "ceph_lv1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_size": "21470642176",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "name": "ceph_lv1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "tags": {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_name": "ceph",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.crush_device_class": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.encrypted": "0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_id": "1",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.vdo": "0"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             },
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "vg_name": "ceph_vg1"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         }
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     ],
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     "2": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "devices": [
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "/dev/loop5"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             ],
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_name": "ceph_lv2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_size": "21470642176",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "name": "ceph_lv2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "tags": {
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.cluster_name": "ceph",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.crush_device_class": "",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.encrypted": "0",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osd_id": "2",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:                 "ceph.vdo": "0"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             },
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "type": "block",
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:             "vg_name": "ceph_vg2"
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:         }
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]:     ]
Oct 02 20:18:09 compute-0 beautiful_chaplygin[473461]: }
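The JSON block closed above is the tail of a `ceph-volume lvm list --format json` report, keyed by OSD id: each entry ties an OSD to its logical volume, backing device, and the ceph.* LV tags (cluster fsid, osd fsid, osdspec affinity) that cephadm uses to rediscover OSDs on the host. A minimal sketch of extracting an osd_id -> device map from such a report, assuming it has been captured to a file (the path is hypothetical; here the JSON is only printed by a short-lived helper container):

    import json

    # Hypothetical capture of the `ceph-volume lvm list --format json` output.
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Map each OSD id to its logical volume and backing device(s).
    for osd_id, entries in sorted(report.items(), key=lambda kv: int(kv[0])):
        for entry in entries:
            if entry.get("type") == "block":
                print(osd_id, entry["lv_path"], ",".join(entry["devices"]))

For the two OSDs shown above this prints `1 /dev/ceph_vg1/ceph_lv1 /dev/loop4` and `2 /dev/ceph_vg2/ceph_lv2 /dev/loop5`.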
Oct 02 20:18:09 compute-0 systemd[1]: libpod-7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1.scope: Deactivated successfully.
Oct 02 20:18:09 compute-0 podman[473445]: 2025-10-02 20:18:09.882672207 +0000 UTC m=+1.182867503 container died 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:18:09 compute-0 nova_compute[355794]: 2025-10-02 20:18:09.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-52da750d0f81c1ff841685af2d8d9f8b2c3668b7750cae571cae51f3fc159724-merged.mount: Deactivated successfully.
Oct 02 20:18:09 compute-0 nova_compute[355794]: 2025-10-02 20:18:09.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:09 compute-0 podman[473445]: 2025-10-02 20:18:09.989815205 +0000 UTC m=+1.290010431 container remove 7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:18:10 compute-0 systemd[1]: libpod-conmon-7b32c424d06af1706bce173fc394291e282cd8f2376f7a323b7c2204c1fd16c1.scope: Deactivated successfully.
Oct 02 20:18:10 compute-0 sudo[473341]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:10 compute-0 podman[473478]: 2025-10-02 20:18:10.058835996 +0000 UTC m=+0.125973435 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:18:10 compute-0 podman[473471]: 2025-10-02 20:18:10.06694756 +0000 UTC m=+0.136613056 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
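The two health_status=healthy events above are podman's periodic health checks for ceilometer_agent_compute and podman_exporter (the configured test commands are visible in config_data). To spot-check the same state by hand, one option is to read it back from `podman inspect`; a sketch, with the caveat that podman has spelled the state key both "Health" and "Healthcheck" across versions:

    import json
    import subprocess

    # Ask podman for the container's full state and pull out the health record.
    out = subprocess.run(
        ["podman", "inspect", "ceilometer_agent_compute"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))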
Oct 02 20:18:10 compute-0 sudo[473522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:10 compute-0 sudo[473522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:10 compute-0 sudo[473522]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:10 compute-0 sudo[473550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:18:10 compute-0 sudo[473550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:10 compute-0 sudo[473550]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:10 compute-0 sudo[473575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:10 compute-0 sudo[473575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:10 compute-0 sudo[473575]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:10 compute-0 sudo[473600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:18:10 compute-0 sudo[473600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:11 compute-0 ceph-mon[191910]: pgmap v2203: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.153241943 +0000 UTC m=+0.094199796 container create 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.119496633 +0000 UTC m=+0.060454536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:11 compute-0 systemd[1]: Started libpod-conmon-4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe.scope.
Oct 02 20:18:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.312954988 +0000 UTC m=+0.253912881 container init 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.331695872 +0000 UTC m=+0.272653725 container start 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.340897965 +0000 UTC m=+0.281855858 container attach 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:18:11 compute-0 serene_curran[473679]: 167 167
Oct 02 20:18:11 compute-0 systemd[1]: libpod-4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe.scope: Deactivated successfully.
Oct 02 20:18:11 compute-0 conmon[473679]: conmon 4db6c9f8018a077461ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe.scope/container/memory.events
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.349713347 +0000 UTC m=+0.290671190 container died 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 20:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cb543992d68869748fdd88386407385dd9b5328f96504e7ac3f6bd5a5f00ff2-merged.mount: Deactivated successfully.
Oct 02 20:18:11 compute-0 podman[473663]: 2025-10-02 20:18:11.434285659 +0000 UTC m=+0.375243512 container remove 4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_curran, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:18:11 compute-0 systemd[1]: libpod-conmon-4db6c9f8018a077461ac8b0ed9a8f5122abf46715bed1df70cb2457ff667a3fe.scope: Deactivated successfully.
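That completes one full podman lifecycle for the serene_curran helper: image pull, create, init, start, attach, died, remove, all inside roughly 0.3 s. Its only output, `167 167`, is evidently cephadm probing the ceph uid/gid inside the image (167 is the ceph user in these container images). The same event sequence can be watched live from the host; a sketch assuming podman's JSON event stream is available (it is on podman 4.x):

    import json
    import subprocess

    # Stream podman lifecycle events; one JSON object per line until killed.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time", ""), ev.get("Status"), ev.get("Name"))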
Oct 02 20:18:11 compute-0 podman[473702]: 2025-10-02 20:18:11.750307538 +0000 UTC m=+0.100359859 container create 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:18:11 compute-0 podman[473702]: 2025-10-02 20:18:11.711295969 +0000 UTC m=+0.061348340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:18:11 compute-0 systemd[1]: Started libpod-conmon-040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677.scope.
Oct 02 20:18:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36147a1ebc8bb8c873158f2640baa50d05b48e2b47e1a0160d04469be391879/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36147a1ebc8bb8c873158f2640baa50d05b48e2b47e1a0160d04469be391879/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36147a1ebc8bb8c873158f2640baa50d05b48e2b47e1a0160d04469be391879/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a36147a1ebc8bb8c873158f2640baa50d05b48e2b47e1a0160d04469be391879/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:18:11 compute-0 podman[473702]: 2025-10-02 20:18:11.951152888 +0000 UTC m=+0.301205249 container init 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:18:11 compute-0 podman[473702]: 2025-10-02 20:18:11.982644959 +0000 UTC m=+0.332697270 container start 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 20:18:11 compute-0 podman[473702]: 2025-10-02 20:18:11.990217989 +0000 UTC m=+0.340270310 container attach 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:18:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Oct 02 20:18:13 compute-0 ceph-mon[191910]: pgmap v2204: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Oct 02 20:18:13 compute-0 beautiful_wing[473718]: {
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_id": 1,
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "type": "bluestore"
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     },
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_id": 2,
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "type": "bluestore"
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     },
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_id": 0,
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:         "type": "bluestore"
Oct 02 20:18:13 compute-0 beautiful_wing[473718]:     }
Oct 02 20:18:13 compute-0 beautiful_wing[473718]: }
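This second report is from the `ceph-volume ... raw list --format json` call issued at 20:18:10 (sudo[473600]). Unlike `lvm list` it is keyed by OSD fsid rather than OSD id, and it also surfaces osd.0 on ceph_vg0, whose lvm entry fell before the excerpt above. A sketch of cross-checking each raw-listed OSD against the cluster fsid (the capture file is hypothetical):

    import json

    CLUSTER_FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"

    with open("raw_list.json") as f:  # hypothetical capture of the JSON above
        raw = json.load(f)

    for osd_uuid, info in raw.items():
        ok = info["ceph_fsid"] == CLUSTER_FSID and info["type"] == "bluestore"
        print(f"osd.{info['osd_id']} on {info['device']}:",
              "ok" if ok else "fsid mismatch")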
Oct 02 20:18:13 compute-0 systemd[1]: libpod-040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677.scope: Deactivated successfully.
Oct 02 20:18:13 compute-0 systemd[1]: libpod-040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677.scope: Consumed 1.232s CPU time.
Oct 02 20:18:13 compute-0 podman[473702]: 2025-10-02 20:18:13.223489031 +0000 UTC m=+1.573541372 container died 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a36147a1ebc8bb8c873158f2640baa50d05b48e2b47e1a0160d04469be391879-merged.mount: Deactivated successfully.
Oct 02 20:18:13 compute-0 podman[473702]: 2025-10-02 20:18:13.32084682 +0000 UTC m=+1.670899101 container remove 040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:18:13 compute-0 systemd[1]: libpod-conmon-040b97ee84b29174e19c9e60bef79e667b62b93a01f83d0a7a6b28229f681677.scope: Deactivated successfully.
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020729298935338934 of space, bias 1.0, pg target 0.6218789680601681 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
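Every pg_autoscaler line above follows the same arithmetic: raw pg target = capacity ratio × bias × (target PGs per OSD × OSD count), then quantization to a power of two with a per-pool floor. The 64411926528 in the effective_target_ratio lines is the total capacity (3 × the 21470642176-byte LVs listed earlier), so with 3 OSDs and the default mon_target_pg_per_osd of 100 the multiplier is 300. A simplified reconstruction of that arithmetic (the real module also damps changes, acting only when pg_num is off by a large factor, which is why most pools sit at 32):

    import math

    def pg_target(capacity_ratio, bias, osd_count,
                  target_pg_per_osd=100, pg_num_min=32):
        # Raw target: exactly the "pg target" number printed in the log.
        raw = capacity_ratio * bias * target_pg_per_osd * osd_count
        # Quantize up to a power of two, never below the pool's floor.
        quantized = 1 << max(0, math.ceil(math.log2(raw))) if raw > 0 else 1
        return raw, max(quantized, pg_num_min)

    # 'vms': ratio 0.00207..., bias 1.0 -> (0.6218..., 32), matching the log.
    print(pg_target(0.0020729298935338934, 1.0, 3))
    # 'cephfs.cephfs.meta': bias 4.0, floor 16 -> (0.00061..., 16).
    print(pg_target(5.087256625643029e-07, 4.0, 3, pg_num_min=16))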
Oct 02 20:18:13 compute-0 sudo[473600]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:18:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:18:13 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8b1a7ce0-0169-48d9-aead-e92d41a815bf does not exist
Oct 02 20:18:13 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1599fe3a-1e4f-47ec-b4fa-bfddecacbf17 does not exist
Oct 02 20:18:13 compute-0 sudo[473761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:18:13 compute-0 sudo[473761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:13 compute-0 sudo[473761]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:13 compute-0 sudo[473786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:18:13 compute-0 sudo[473786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:18:13 compute-0 sudo[473786]: pam_unix(sudo:session): session closed for user root
Oct 02 20:18:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:18:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:18:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Cumulative writes: 9959 writes, 45K keys, 9959 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                            Cumulative WAL: 9959 writes, 9959 syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1327 writes, 6265 keys, 1327 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s
                                            Interval WAL: 1327 writes, 1327 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     86.2      0.64              0.27        31    0.021       0      0       0.0       0.0
                                              L6      1/0    6.29 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    130.5    107.4      2.11              1.02        30    0.070    159K    16K       0.0       0.0
                                             Sum      1/0    6.29 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    100.2    102.5      2.74              1.28        61    0.045    159K    16K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.5     81.0     78.4      0.68              0.22        12    0.056     37K   3085       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    130.5    107.4      2.11              1.02        30    0.070    159K    16K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     86.7      0.63              0.27        30    0.021       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.054, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.27 GB write, 0.07 MB/s write, 0.27 GB read, 0.07 MB/s read, 2.7 seconds
                                            Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 304.00 MB usage: 32.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000318 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2087,31.64 MB,10.4094%) FilterBlock(62,451.80 KB,0.145134%) IndexBlock(62,745.61 KB,0.239518%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
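The indented block above is the mon's RocksDB stats dump, emitted as one multi-line journal record every 600 s (hence the 600.0 s interval figures against a 4200 s uptime). The headline numbers are easy to pull out of a captured line; a small parsing sketch:

    import re

    line = ("Cumulative writes: 9959 writes, 45K keys, 9959 commit groups, "
            "1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s")

    m = re.search(r"Cumulative writes: (\d+) writes.*ingest: ([\d.]+) GB", line)
    writes, ingest_gb = int(m.group(1)), float(m.group(2))
    # 9959 writes over the 4200 s uptime is roughly 2.4 mon DB writes/s.
    print(round(writes / 4200.0, 2), ingest_gb)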
Oct 02 20:18:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:14 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:18:14 compute-0 ceph-mon[191910]: pgmap v2205: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 20:18:14 compute-0 nova_compute[355794]: 2025-10-02 20:18:14.597 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:14 compute-0 nova_compute[355794]: 2025-10-02 20:18:14.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:18:14 compute-0 nova_compute[355794]: 2025-10-02 20:18:14.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:14 compute-0 nova_compute[355794]: 2025-10-02 20:18:14.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.626 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.627 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.628 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.628 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:18:16 compute-0 nova_compute[355794]: 2025-10-02 20:18:16.629 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:18:16 compute-0 podman[473812]: 2025-10-02 20:18:16.767720733 +0000 UTC m=+0.166160395 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=)
Oct 02 20:18:16 compute-0 podman[473811]: 2025-10-02 20:18:16.814324193 +0000 UTC m=+0.210186557 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:18:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:18:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253443488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:18:17 compute-0 ceph-mon[191910]: pgmap v2206: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.176 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
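The 0.546 s `ceph df` round trip above is nova's resource tracker sizing the RBD-backed disk pool; the client.openstack audit entries on the mon are the server side of the same call. A sketch of consuming that output directly, assuming the usual `ceph df --format=json` schema with a top-level "stats" object (worth verifying against your Ceph release):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024 ** 3
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"{avail_gib:.1f} GiB free of {total_gib:.1f} GiB")  # ~59.6 of 60 here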
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.335 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.336 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.336 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.344 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.345 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.352 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.352 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.951 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.953 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3179MB free_disk=59.863929748535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.953 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:18:17 compute-0 nova_compute[355794]: 2025-10-02 20:18:17.954 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:18:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.107 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.108 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.109 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.111 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.111 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.218 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:18:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2253443488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:18:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:18:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806251430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.819 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
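For RBD-backed storage the tracker learns disk capacity by shelling out to the ceph CLI (via oslo.concurrency's processutils wrapper, per the paths logged above). A hedged standard-library equivalent of the exact command from the log, assuming the same CLI and config file are present:

    # Re-run the command from the log and pull out the capacity fields.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])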
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.834 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.859 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
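The inventory line above is what the placement service actually schedules against: for each resource class the usable capacity is (total - reserved) * allocation_ratio, with max_unit capping any single instance. Worked out with the logged numbers:

    # Effective schedulable capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2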
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.866 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:18:18 compute-0 nova_compute[355794]: 2025-10-02 20:18:18.867 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:18:19 compute-0 ceph-mon[191910]: pgmap v2207: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/806251430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:18:19 compute-0 podman[473893]: 2025-10-02 20:18:19.724174655 +0000 UTC m=+0.133214447 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:18:19 compute-0 podman[473897]: 2025-10-02 20:18:19.727700258 +0000 UTC m=+0.103574194 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:18:19 compute-0 podman[473895]: 2025-10-02 20:18:19.745644611 +0000 UTC m=+0.143299232 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 20:18:19 compute-0 podman[473894]: 2025-10-02 20:18:19.753867818 +0000 UTC m=+0.155959766 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 20:18:19 compute-0 podman[473896]: 2025-10-02 20:18:19.769040268 +0000 UTC m=+0.157910767 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:18:19 compute-0 nova_compute[355794]: 2025-10-02 20:18:19.864 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:19 compute-0 nova_compute[355794]: 2025-10-02 20:18:19.864 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:19 compute-0 nova_compute[355794]: 2025-10-02 20:18:19.864 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:18:19 compute-0 nova_compute[355794]: 2025-10-02 20:18:19.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:19 compute-0 nova_compute[355794]: 2025-10-02 20:18:19.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
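The recurring "[POLLIN] on fd 24 __log_wakeup" lines are the OVSDB IDL client noticing readable data on its connection to ovsdb-server; ovs's poller is a thin wrapper over poll(2). A minimal standard-library illustration of the same wakeup pattern (illustrative only, not the ovs/poller.py implementation):

    # Minimal poll()-based wakeup loop in the spirit of ovs/poller.py.
    import os
    import select

    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)

    os.write(w, b"x")                      # make fd r readable
    for fd, events in poller.poll(1000):   # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # same shape as the log line
            os.read(fd, 1)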
Oct 02 20:18:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:18:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/128232357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:18:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:18:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/128232357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:18:20 compute-0 ceph-mon[191910]: pgmap v2208: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/128232357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:18:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/128232357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:18:20 compute-0 nova_compute[355794]: 2025-10-02 20:18:20.394 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:18:20 compute-0 nova_compute[355794]: 2025-10-02 20:18:20.395 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:18:20 compute-0 nova_compute[355794]: 2025-10-02 20:18:20.395 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:18:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:23 compute-0 ceph-mon[191910]: pgmap v2209: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct 02 20:18:23 compute-0 nova_compute[355794]: 2025-10-02 20:18:23.709 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:18:23 compute-0 nova_compute[355794]: 2025-10-02 20:18:23.733 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:18:23 compute-0 nova_compute[355794]: 2025-10-02 20:18:23.734 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
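The heal cycle above acquires the per-instance refresh_cache lock, forcibly re-fetches the port data from Neutron, and writes back the network_info blob logged at 20:18:23. That JSON carries everything needed to recover the port's MAC, fixed IP, and OVS binding; a sketch parsing the structure, trimmed to the fields actually shown:

    # Pull MAC/IP/bridge out of a network_info entry like the one logged
    # at 20:18:23 for instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84.
    network_info = [{
        "id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426",
        "address": "fa:16:3e:a4:22:b0",
        "network": {
            "bridge": "br-int",
            "subnets": [{
                "cidr": "10.100.0.0/16",
                "ips": [{"address": "10.100.3.13", "type": "fixed"}],
            }],
        },
        "devname": "tap8a7a2e73-ae",
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["address"], ips, vif["network"]["bridge"], vif["devname"])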
Oct 02 20:18:23 compute-0 nova_compute[355794]: 2025-10-02 20:18:23.734 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:23 compute-0 nova_compute[355794]: 2025-10-02 20:18:23.735 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:24 compute-0 nova_compute[355794]: 2025-10-02 20:18:24.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:24 compute-0 nova_compute[355794]: 2025-10-02 20:18:24.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:25 compute-0 ceph-mon[191910]: pgmap v2210: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:27 compute-0 ceph-mon[191910]: pgmap v2211: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:28 compute-0 nova_compute[355794]: 2025-10-02 20:18:28.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:28 compute-0 nova_compute[355794]: 2025-10-02 20:18:28.602 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:18:29 compute-0 ceph-mon[191910]: pgmap v2212: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:29 compute-0 podman[157186]: time="2025-10-02T20:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:18:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:18:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9564 "" "Go-http-client/1.1"
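The two "GET /v4.9.3/libpod/..." lines are a metrics client hitting podman's REST API over its unix socket (the podman_exporter config later in this log points CONTAINER_HOST at /run/podman/podman.sock). A rough sketch of the same list-containers call; the socket path is taken from that config, HTTP/1.0 is used to sidestep chunked responses, and error handling is omitted:

    # Same "list containers" call the log shows, over podman's unix socket.
    import json
    import socket

    PATH = "/run/podman/podman.sock"
    REQUEST = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
               b"Host: d\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(PATH)
        s.sendall(REQUEST)
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    body = raw.split(b"\r\n\r\n", 1)[1]
    for container in json.loads(body):
        print(container["Names"], container["State"])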
Oct 02 20:18:29 compute-0 nova_compute[355794]: 2025-10-02 20:18:29.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:29 compute-0 nova_compute[355794]: 2025-10-02 20:18:29.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:30 compute-0 podman[473994]: 2025-10-02 20:18:30.730067562 +0000 UTC m=+0.142449580 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:18:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:31 compute-0 ceph-mon[191910]: pgmap v2213: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:31 compute-0 openstack_network_exporter[372736]: ERROR   20:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:18:31 compute-0 openstack_network_exporter[372736]: ERROR   20:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:18:31 compute-0 openstack_network_exporter[372736]: ERROR   20:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:18:31 compute-0 openstack_network_exporter[372736]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:18:31 compute-0 openstack_network_exporter[372736]: ERROR   20:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
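The exporter's appctl errors above mean it found no *.ctl control sockets for ovsdb-server or ovn-northd; northd does not run on a compute node, so this is expected noise. ovs-appctl-style tools discover daemons through those sockets, conventionally under /var/run/openvswitch and /var/run/ovn (the directories here are the conventional defaults, not taken from the log). A quick check:

    # Look for the control sockets the exporter failed to find.
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")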
Oct 02 20:18:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:18:32.335 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:18:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:18:32.337 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:18:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:18:32.339 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
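Neutron's ProcessMonitor wakes periodically, takes the _check_child_processes lock (the three lines above), and verifies that its spawned helpers, such as the haproxy metadata proxies, are still alive. An illustrative liveness probe in the same spirit, not neutron's implementation:

    # Illustrative child-process liveness check, in the spirit of
    # neutron.agent.linux.external_process.ProcessMonitor.
    import os

    def pid_alive(pid: int) -> bool:
        try:
            os.kill(pid, 0)   # signal 0: existence/permission check only
            return True
        except ProcessLookupError:
            return False
        except PermissionError:
            return True       # process exists but belongs to another user

    print(pid_alive(os.getpid()))  # True for the current process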
Oct 02 20:18:33 compute-0 ceph-mon[191910]: pgmap v2214: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:18:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:18:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:34 compute-0 nova_compute[355794]: 2025-10-02 20:18:34.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:35 compute-0 nova_compute[355794]: 2025-10-02 20:18:35.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:35 compute-0 ceph-mon[191910]: pgmap v2215: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:36 compute-0 ceph-mon[191910]: pgmap v2216: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:39 compute-0 ceph-mon[191910]: pgmap v2217: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:39 compute-0 nova_compute[355794]: 2025-10-02 20:18:39.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:40 compute-0 nova_compute[355794]: 2025-10-02 20:18:40.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:40 compute-0 podman[474014]: 2025-10-02 20:18:40.674639441 +0000 UTC m=+0.101980162 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:18:40 compute-0 podman[474015]: 2025-10-02 20:18:40.725090442 +0000 UTC m=+0.136433531 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 20:18:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:41 compute-0 ceph-mon[191910]: pgmap v2218: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:43 compute-0 ceph-mon[191910]: pgmap v2219: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:44 compute-0 nova_compute[355794]: 2025-10-02 20:18:44.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:45 compute-0 nova_compute[355794]: 2025-10-02 20:18:45.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:45 compute-0 ceph-mon[191910]: pgmap v2220: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:47 compute-0 ceph-mon[191910]: pgmap v2221: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:47 compute-0 podman[474055]: 2025-10-02 20:18:47.740674924 +0000 UTC m=+0.152100164 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:18:47 compute-0 podman[474056]: 2025-10-02 20:18:47.741109055 +0000 UTC m=+0.144965586 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, container_name=kepler, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-type=git)
Oct 02 20:18:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:49 compute-0 ceph-mon[191910]: pgmap v2222: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:49 compute-0 nova_compute[355794]: 2025-10-02 20:18:49.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:50 compute-0 nova_compute[355794]: 2025-10-02 20:18:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:50 compute-0 podman[474095]: 2025-10-02 20:18:50.692426071 +0000 UTC m=+0.111334888 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 02 20:18:50 compute-0 podman[474096]: 2025-10-02 20:18:50.710264122 +0000 UTC m=+0.123459149 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct 02 20:18:50 compute-0 podman[474105]: 2025-10-02 20:18:50.722780942 +0000 UTC m=+0.108497914 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:18:50 compute-0 podman[474098]: 2025-10-02 20:18:50.744329791 +0000 UTC m=+0.148761486 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:18:50 compute-0 podman[474097]: 2025-10-02 20:18:50.75034436 +0000 UTC m=+0.150693238 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible)
Oct 02 20:18:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:51 compute-0 ceph-mon[191910]: pgmap v2223: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:52 compute-0 ceph-mon[191910]: pgmap v2224: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:54 compute-0 nova_compute[355794]: 2025-10-02 20:18:54.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:55 compute-0 nova_compute[355794]: 2025-10-02 20:18:55.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:18:55 compute-0 ceph-mon[191910]: pgmap v2225: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:18:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:57 compute-0 ceph-mon[191910]: pgmap v2226: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:59 compute-0 ceph-mon[191910]: pgmap v2227: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:18:59 compute-0 podman[157186]: time="2025-10-02T20:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:18:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:18:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9568 "" "Go-http-client/1.1"
Oct 02 20:18:59 compute-0 nova_compute[355794]: 2025-10-02 20:18:59.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:00 compute-0 nova_compute[355794]: 2025-10-02 20:19:00.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:01 compute-0 ceph-mon[191910]: pgmap v2228: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:01 compute-0 openstack_network_exporter[372736]: ERROR   20:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:19:01 compute-0 openstack_network_exporter[372736]: ERROR   20:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:19:01 compute-0 openstack_network_exporter[372736]: ERROR   20:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:19:01 compute-0 openstack_network_exporter[372736]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:19:01 compute-0 openstack_network_exporter[372736]: ERROR   20:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:19:01 compute-0 podman[474195]: 2025-10-02 20:19:01.707527586 +0000 UTC m=+0.118915059 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:19:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:03 compute-0 ceph-mon[191910]: pgmap v2229: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:19:03
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root']
Oct 02 20:19:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
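Those five balancer lines are one optimization pass: the mgr builds plan auto_2025-10-02_20:19:03 in upmap mode with a 5% misplaced ceiling, walks the listed pools, and prepares 0 of a possible 10 changes because the PGs are already balanced. A sketch for inspecting the same state from a client, again assuming the ceph CLI and a keyring:

    import json
    import subprocess

    # 'ceph balancer status' reports the active mode and last optimization;
    # '-f json' asks the CLI for machine-readable output.
    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2))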
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.308 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.309 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
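The two manager lines above describe a deliberate trade-off: every pollster in the [pollsters] source is submitted to a ThreadPoolExecutor sized at one worker, so tasks queue and the cycle stretches out rather than running concurrently. A self-contained illustration of that queuing effect (names and timings are illustrative, not ceilometer's code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.05)          # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(30)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:   # [1] thread, as in the log
        list(executor.map(poll, pollsters))               # 30 tasks serialize behind it
    print(f"cycle took {time.monotonic() - start:.2f}s")  # ~30 x 0.05s, not ~0.05s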
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.321 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.327 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'name': 'te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.333 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
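discover_libvirt_polling builds those instance-data dicts by enumerating the guests libvirt reports on this hypervisor and merging in Nova metadata. A minimal sketch of the libvirt side, assuming the libvirt-python bindings and the qemu:///system URI (ceilometer's exact merging logic is not reproduced):

    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        # dom.name() is the 'OS-EXT-SRV-ATTR:instance_name' seen above,
        # e.g. instance-00000001; Nova metadata is merged in separately.
        print(dom.UUIDString(), dom.name(), dom.isActive())
    conn.close()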
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:19:04.335205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.411 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.412 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.412 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.457 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.459 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.509 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.511 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
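The rbd_support lines above are the mgr module reloading its trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); an empty start_after means the scan starts from the beginning. A sketch listing the same schedules via the rbd CLI, assuming it is installed and authorized:

    import subprocess

    # '-R' recurses through pools the way the handlers iterate them above.
    for args in (["rbd", "trash", "purge", "schedule", "ls", "-R"],
                 ["rbd", "mirror", "snapshot", "schedule", "ls", "-R"]):
        subprocess.run(args, check=False)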
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
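Each disk.device.read.requests sample above is a cumulative per-device counter read from libvirt; the repeated lines per instance correspond to its block devices. A sketch of the underlying call, assuming libvirt-python, with the device name 'vda' an illustrative assumption:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # name from the discovery output above
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("disk.device.read.requests:", rd_req)   # cumulative, like the 840 above
    conn.close()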
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.513 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:19:04.514427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.547 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.573 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.574 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.600 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.601 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
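The disk.device.usage values are per-device byte figures: 1073741824 is exactly 1 GiB, matching the 1 GB disks of these flavors, while the smaller figures (485376, 509952) belong to small auxiliary devices. Per the discovery line above this is the PerDevicePhysicalPollster; a hedged sketch of the libvirt query that exposes such figures (which field backs the sample is not asserted here, and 'vda' is again an assumption):

    import libvirt

    conn = libvirt.open("qemu:///system")
    # blockInfo returns (capacity, allocation, physical) in bytes for one device;
    # samples like those above are derived from figures of this kind.
    capacity, allocation, physical = conn.lookupByName("instance-00000001").blockInfo("vda")
    print(capacity, allocation, physical)
    conn.close()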
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.603 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.604 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:19:04.604340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.607 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.608 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.608 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.609 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.610 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.610 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.610 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.611 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.612 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.612 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:19:04.611560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.612 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.613 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.613 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 10218551533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.614 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.614 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61686940867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.615 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.616 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:19:04.617360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.655 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.693 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.733 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
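power.state volume 1 for all three instances is libvirt's running state, consistent with the 'OS-EXT-STS:vm_state': 'running' seen in the discovery dicts. A sketch of the check, URI and guest name assumed as before:

    import libvirt

    conn = libvirt.open("qemu:///system")
    state, reason = conn.lookupByName("instance-00000001").state()
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1, True for a running guest
    conn.close()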
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.734 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.735 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:19:04.735263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.736 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.736 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.737 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.737 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.738 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.738 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:19:04.740826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.747 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.754 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.761 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
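The *.delta network meters are differences between successive cumulative interface counters, which is why the idle instances report 0 while f50e6a55 shows 630 bytes received this cycle. A sketch of the counter read that feeds the subtraction, with the tap device name a placeholder assumption:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap-example0")
    # network.incoming.bytes.delta ~= rx_bytes(now) - rx_bytes(previous cycle)
    print(rx_bytes, rx_packets)
    conn.close()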
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.762 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.763 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:19:04.763077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.765 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.766 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.766 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.767 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.768 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.769 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:19:04.766056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:19:04.767557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.770 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.770 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.770 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.771 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.771 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.771 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:19:04.770461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.772 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.772 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.773 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.773 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
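network.outgoing.bytes.delta reports 630 for instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 while the cumulative network.outgoing.bytes counter for the same instance reads 2250 a few lines below: a *.delta meter is the difference between the current cumulative counter and the value seen at the previous poll. A minimal sketch of that bookkeeping (the cache shape and the reset handling are assumptions, not ceilometer's exact code):

    _previous: dict = {}

    def delta(key, current: int) -> int:
        # key identifies (instance, interface); current is the cumulative counter
        last = _previous.get(key)
        _previous[key] = current
        if last is None or current < last:   # first poll, or counter reset
            return 0
        return current - last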
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.774 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.775 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.775 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:19:04.772753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:19:04.774780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.777 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.777 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.777 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.777 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.778 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.778 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.779 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.779 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 30829056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.779 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
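disk.device.read.bytes yields one sample per attached block device, which is why instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 produces three volumes here while the other two instances produce two each. The per-device counters come from libvirt; a self-contained sketch with the libvirt Python bindings (the connection URI and read-only access via the mounted /run/libvirt socket are assumptions based on the container volumes logged at 20:19:11 below):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    for target in ET.fromstring(dom.XMLDesc(0)).findall('devices/disk/target'):
        dev = target.get('dev')                       # e.g. vda, vdb, ...
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, rd_bytes)                          # one sample per device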
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.780 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.781 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.781 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:19:04.777228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:19:04.780935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.782 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.783 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.784 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.784 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.784 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.785 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.785 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:19:04.783468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.787 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.787 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:19:04.787580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.788 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/memory.usage volume: 42.20703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.788 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 42.4609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
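The memory.usage volumes (48.8515625, 42.20703125, 42.4609375) all sit on exact 1/1024 boundaries, consistent with a KiB counter from the hypervisor divided down to MB: 48.8515625 x 1024 = 50024 KiB. Which libvirt memoryStats() key the meter is derived from is an assumption, so only the unit conversion is shown:

    print(50024 / 1024)   # 48.8515625 MB, matching the first sample above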
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.789 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.790 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.790 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.790 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:19:04.789831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.792 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.792 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:19:04.792108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.793 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.793 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.794 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.795 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.795 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.795 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.796 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.796 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:19:04.794513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.799 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.800 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.801 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.801 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.801 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.802 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.802 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 66520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.803 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/cpu volume: 333240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.804 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 337940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:19:04.799359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:19:04.801671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:19:04.803512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
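The cpu meter is cumulative guest CPU time in nanoseconds, not a percentage: 66520000000 ns is roughly 66.5 s of CPU consumed by instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 since boot. A utilisation figure needs two successive samples; a sketch (the 300 s polling interval and the 1-vCPU guest are assumed values for the example):

    def cpu_util_percent(ns_prev: int, ns_curr: int,
                         wall_seconds: float, vcpus: int) -> float:
        # Fraction of available CPU time consumed between two polls.
        busy_seconds = (ns_curr - ns_prev) / 1e9
        return 100.0 * busy_seconds / (wall_seconds * vcpus)

    # If the next poll, 300 s later, read 66 820 000 000 ns on a 1-vCPU guest:
    # cpu_util_percent(66_520_000_000, 66_820_000_000, 300, 1) -> 0.1 (percent)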
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.806 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.807 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.807 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 2082740870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.807 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 153685830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.808 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3130282352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.808 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 223577318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:19:04.806495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:19:04.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:19:04 compute-0 nova_compute[355794]: 2025-10-02 20:19:04.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:05 compute-0 nova_compute[355794]: 2025-10-02 20:19:05.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:05 compute-0 ceph-mon[191910]: pgmap v2230: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:07 compute-0 ceph-mon[191910]: pgmap v2231: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:08 compute-0 ceph-mon[191910]: pgmap v2232: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:09 compute-0 nova_compute[355794]: 2025-10-02 20:19:09.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:10 compute-0 nova_compute[355794]: 2025-10-02 20:19:10.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:11 compute-0 ceph-mon[191910]: pgmap v2233: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:11 compute-0 podman[474216]: 2025-10-02 20:19:11.716898719 +0000 UTC m=+0.134532750 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:19:11 compute-0 podman[474217]: 2025-10-02 20:19:11.755699153 +0000 UTC m=+0.167108390 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
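The two health_status events above embed each container's managed configuration in the config_data field as a Python literal (single-quoted strings, bare True/False), not JSON. Assuming that format holds, ast.literal_eval is enough to pull out individual keys such as the healthcheck test command; the excerpt below is truncated to the relevant keys of the podman_exporter entry:

    # Minimal sketch: parse a truncated config_data literal from the
    # podman_exporter health_status event above. Assumes the field is a
    # well-formed Python literal, which the single quotes and True suggest.
    import ast

    config_data = ast.literal_eval(
        "{'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
        "'recreate': True, "
        "'healthcheck': {'test': '/openstack/healthcheck podman_exporter', "
        "'mount': '/var/lib/openstack/healthchecks/podman_exporter'}}")
    print(config_data['healthcheck']['test'])
    # -> /openstack/healthcheck podman_exporter

health_failing_streak=0 in the same events is podman's count of consecutive failed healthchecks; both containers report healthy here.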
Oct 02 20:19:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:13 compute-0 ceph-mon[191910]: pgmap v2234: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020729298935338934 of space, bias 1.0, pg target 0.6218789680601681 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:19:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
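Across the pg_autoscaler block above, each pool's pg target is its fraction of cluster capacity times a fixed PG budget, times the pool's bias, then quantized to a power of two. Every line here is consistent with a budget of 300 (e.g. the default target of 100 PGs per OSD across 3 OSDs, matching the 60 GiB cluster). A minimal sketch of that arithmetic, with the budget and the quantization rule inferred from these log values rather than taken from the autoscaler source:

    # pg_target = usage_ratio * pg_budget * bias, per the log lines above.
    # ASSUMPTION: pg_budget = 300 (100 PGs per OSD * 3 OSDs), inferred from
    # the logged ratios; "quantized" rounds up to a power of two.
    def pg_target(usage_ratio: float, bias: float = 1.0, pg_budget: int = 300) -> float:
        return usage_ratio * pg_budget * bias

    def quantize(target: float) -> int:
        pgs = 1
        while pgs < target:
            pgs *= 2
        return pgs

    # Reproduces the '.mgr' line (bias 1.0):
    assert abs(pg_target(7.185749983720779e-06) - 0.0021557249951162337) < 1e-12
    # Reproduces the 'cephfs.cephfs.meta' line (bias 4.0):
    assert abs(pg_target(5.087256625643029e-07, bias=4.0) - 0.0006104707950771635) < 1e-12

Note that cephfs.cephfs.meta is the only pool whose quantized target (16) differs from its current pg_num (32); the gap stays under the autoscaler's change threshold, which is why no pg_num adjustment follows in the log.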
Oct 02 20:19:13 compute-0 sudo[474257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:13 compute-0 sudo[474257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:13 compute-0 sudo[474257]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:14 compute-0 sudo[474282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:19:14 compute-0 sudo[474282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:14 compute-0 sudo[474282]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:14 compute-0 sudo[474307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:14 compute-0 sudo[474307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:14 compute-0 sudo[474307]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:14 compute-0 sudo[474332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:19:14 compute-0 sudo[474332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:14 compute-0 nova_compute[355794]: 2025-10-02 20:19:14.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:14 compute-0 sudo[474332]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:15 compute-0 nova_compute[355794]: 2025-10-02 20:19:15.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fb058307-1c5a-4d9d-9364-062a92c96e88 does not exist
Oct 02 20:19:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a868c89f-adb8-427f-9266-c515ae3ccf00 does not exist
Oct 02 20:19:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e5edfd53-369b-4c74-b723-5a8900580d4a does not exist
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:19:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: pgmap v2235: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:19:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:19:15 compute-0 sudo[474388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:15 compute-0 sudo[474388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:15 compute-0 sudo[474388]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:15 compute-0 sudo[474413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:19:15 compute-0 sudo[474413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:15 compute-0 sudo[474413]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:15 compute-0 sudo[474438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:15 compute-0 sudo[474438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:15 compute-0 sudo[474438]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:15 compute-0 sudo[474463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:19:15 compute-0 sudo[474463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.152835496 +0000 UTC m=+0.069005382 container create 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.118289004 +0000 UTC m=+0.034458890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:16 compute-0 systemd[1]: Started libpod-conmon-326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f.scope.
Oct 02 20:19:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.295766807 +0000 UTC m=+0.211936713 container init 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.314367558 +0000 UTC m=+0.230537454 container start 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.321955368 +0000 UTC m=+0.238125254 container attach 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:19:16 compute-0 nostalgic_hugle[474542]: 167 167
Oct 02 20:19:16 compute-0 systemd[1]: libpod-326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f.scope: Deactivated successfully.
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.330558605 +0000 UTC m=+0.246728461 container died 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 20:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf018c61ec7e3e568c1b0f718950f9d961649fab5e0da90553084a1091a6b980-merged.mount: Deactivated successfully.
Oct 02 20:19:16 compute-0 podman[474526]: 2025-10-02 20:19:16.412899668 +0000 UTC m=+0.329069524 container remove 326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:19:16 compute-0 systemd[1]: libpod-conmon-326f3717230e17d07bdf84b3a5370c1c556c230876f02bde291d2e9ef0b8098f.scope: Deactivated successfully.
Oct 02 20:19:16 compute-0 nova_compute[355794]: 2025-10-02 20:19:16.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:16 compute-0 nova_compute[355794]: 2025-10-02 20:19:16.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:16 compute-0 nova_compute[355794]: 2025-10-02 20:19:16.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:16 compute-0 nova_compute[355794]: 2025-10-02 20:19:16.578 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
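The Running periodic task lines are oslo.service's PeriodicTasks loop walking decorated methods on the compute manager, and _reclaim_queued_deletes shows the usual pattern of a task no-oping on configuration. A self-contained sketch of that pattern (the manager class and option registration here are illustrative, not Nova's actual code):

    # Minimal oslo.service periodic-task pattern, mirroring the
    # run_periodic_tasks lines above. DemoManager is illustrative.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # Same guard as the "CONF.reclaim_instance_interval <= 0,
                # skipping..." DEBUG line above.
                return

    DemoManager().run_periodic_tasks(context=None)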
Oct 02 20:19:16 compute-0 podman[474565]: 2025-10-02 20:19:16.737814761 +0000 UTC m=+0.092538382 container create 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 20:19:16 compute-0 podman[474565]: 2025-10-02 20:19:16.694139469 +0000 UTC m=+0.048863140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:16 compute-0 systemd[1]: Started libpod-conmon-14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a.scope.
Oct 02 20:19:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
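The repeated xfs messages are the kernel noting, for each bind-mount wired into the container, that the filesystem's 32-bit inode timestamps only reach 0x7fffffff seconds past the Unix epoch. Converting that constant shows the familiar limit:

    # The 0x7fffffff limit from the xfs messages above, as a date.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00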
Oct 02 20:19:16 compute-0 podman[474565]: 2025-10-02 20:19:16.913659592 +0000 UTC m=+0.268383193 container init 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:19:16 compute-0 podman[474565]: 2025-10-02 20:19:16.934424849 +0000 UTC m=+0.289148450 container start 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 20:19:16 compute-0 podman[474565]: 2025-10-02 20:19:16.940955772 +0000 UTC m=+0.295679363 container attach 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 02 20:19:17 compute-0 ceph-mon[191910]: pgmap v2236: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:17 compute-0 nova_compute[355794]: 2025-10-02 20:19:17.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:17 compute-0 nova_compute[355794]: 2025-10-02 20:19:17.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:19:17 compute-0 nova_compute[355794]: 2025-10-02 20:19:17.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:19:18 compute-0 nova_compute[355794]: 2025-10-02 20:19:18.050 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:19:18 compute-0 nova_compute[355794]: 2025-10-02 20:19:18.051 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:19:18 compute-0 nova_compute[355794]: 2025-10-02 20:19:18.051 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:19:18 compute-0 nova_compute[355794]: 2025-10-02 20:19:18.052 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:19:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:18 compute-0 pensive_keller[474582]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:19:18 compute-0 pensive_keller[474582]: --> relative data size: 1.0
Oct 02 20:19:18 compute-0 pensive_keller[474582]: --> All data devices are unavailable
Oct 02 20:19:18 compute-0 systemd[1]: libpod-14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a.scope: Deactivated successfully.
Oct 02 20:19:18 compute-0 systemd[1]: libpod-14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a.scope: Consumed 1.267s CPU time.
Oct 02 20:19:18 compute-0 podman[474565]: 2025-10-02 20:19:18.2919257 +0000 UTC m=+1.646649311 container died 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c401be465b5f0480662ca46f8adec845a14aa05e1b16ec0636cd8f779ee4256e-merged.mount: Deactivated successfully.
Oct 02 20:19:18 compute-0 podman[474565]: 2025-10-02 20:19:18.425991838 +0000 UTC m=+1.780715419 container remove 14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keller, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 20:19:18 compute-0 systemd[1]: libpod-conmon-14bf00ce4bc9152cf1d26a9240d33d4cf236bf15315c2099e9fe3c88cbac5b1a.scope: Deactivated successfully.
Oct 02 20:19:18 compute-0 podman[474612]: 2025-10-02 20:19:18.468756516 +0000 UTC m=+0.132192699 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:19:18 compute-0 sudo[474463]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:18 compute-0 podman[474618]: 2025-10-02 20:19:18.493083308 +0000 UTC m=+0.156520291 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64)
Oct 02 20:19:18 compute-0 sudo[474663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:18 compute-0 sudo[474663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:18 compute-0 sudo[474663]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:18 compute-0 sudo[474688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:19:18 compute-0 sudo[474688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:18 compute-0 sudo[474688]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:18 compute-0 sudo[474713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:18 compute-0 sudo[474713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:18 compute-0 sudo[474713]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:18 compute-0 sudo[474738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:19:18 compute-0 sudo[474738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.233 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:19:19 compute-0 ceph-mon[191910]: pgmap v2237: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.257 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.258 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
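The heal cycle above follows the standard oslo.concurrency shape: take the per-instance refresh_cache lock, rebuild network_info from Neutron, write it back, release. As a context-manager idiom that is roughly the following (instance UUID copied from the log; the body is a stand-in, not Nova's code):

    # The Acquiring/Acquired/Releasing lock lines above, as the equivalent
    # oslo.concurrency idiom. The refresh body is a placeholder.
    from oslo_concurrency import lockutils

    instance_uuid = "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # Forcefully refresh and persist the instance's network info cache,
        # as _heal_instance_info_cache does between acquire and release.
        pass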
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.258 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.286 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.286 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.287 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.287 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.287 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.570792835 +0000 UTC m=+0.083818573 container create 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.541717488 +0000 UTC m=+0.054743266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:19 compute-0 systemd[1]: Started libpod-conmon-89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8.scope.
Oct 02 20:19:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:19:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156482300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.785561012 +0000 UTC m=+0.298586840 container init 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.783 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
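The paired "Running cmd (subprocess)" and "returned: 0 in 0.496s" lines come from oslo.concurrency's processutils wrapper, which the resource tracker uses to shell out to the Ceph CLI. A minimal reproduction of the same call (needs a reachable cluster and the client.openstack keyring; the total_bytes key reflects the usual ceph df JSON layout, stated here as an assumption):

    # Re-run the exact command from the log via processutils, which returns
    # a (stdout, stderr) tuple on success.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # cf. the 60 GiB cluster above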
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.796029698 +0000 UTC m=+0.309055466 container start 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:19:19 compute-0 vibrant_pike[474838]: 167 167
Oct 02 20:19:19 compute-0 systemd[1]: libpod-89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8.scope: Deactivated successfully.
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.826487402 +0000 UTC m=+0.339513190 container attach 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:19:19 compute-0 podman[474812]: 2025-10-02 20:19:19.827345925 +0000 UTC m=+0.340371713 container died 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.930 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.932 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.932 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.941 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.942 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.950 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.951 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:19:19 compute-0 nova_compute[355794]: 2025-10-02 20:19:19.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a81c36731994dd99fb81f8bc13329fe6e084d0f1a40187de7328eb80611a01f-merged.mount: Deactivated successfully.
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:20 compute-0 podman[474812]: 2025-10-02 20:19:20.135740703 +0000 UTC m=+0.648766461 container remove 89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 20:19:20 compute-0 systemd[1]: libpod-conmon-89bc415b16501671ff4ec06f3ade5dfd075f71d2245374435cf7e03652ac4de8.scope: Deactivated successfully.
Oct 02 20:19:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:19:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/505352730' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:19:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:19:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/505352730' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:19:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/156482300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:19:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/505352730' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:19:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/505352730' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:19:20 compute-0 podman[474864]: 2025-10-02 20:19:20.400970021 +0000 UTC m=+0.071771865 container create c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 20:19:20 compute-0 podman[474864]: 2025-10-02 20:19:20.365324741 +0000 UTC m=+0.036126615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:20 compute-0 systemd[1]: Started libpod-conmon-c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed.scope.
Oct 02 20:19:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2bd58d8fd3afef581cd3860a1832e57421305ec239d00b461bfcf54b261fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2bd58d8fd3afef581cd3860a1832e57421305ec239d00b461bfcf54b261fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2bd58d8fd3afef581cd3860a1832e57421305ec239d00b461bfcf54b261fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2bd58d8fd3afef581cd3860a1832e57421305ec239d00b461bfcf54b261fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:20 compute-0 podman[474864]: 2025-10-02 20:19:20.570998158 +0000 UTC m=+0.241799992 container init c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 20:19:20 compute-0 podman[474864]: 2025-10-02 20:19:20.590493482 +0000 UTC m=+0.261295296 container start c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.605 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.606 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3150MB free_disk=59.863929748535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.606 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.606 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:19:20 compute-0 podman[474864]: 2025-10-02 20:19:20.622118697 +0000 UTC m=+0.292920571 container attach c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.711 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.712 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.712 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.712 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.713 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:19:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:20 compute-0 nova_compute[355794]: 2025-10-02 20:19:20.799 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:19:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:19:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283976513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:19:21 compute-0 ceph-mon[191910]: pgmap v2238: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4283976513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.301 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
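The pair of processutils lines above bracket nova's storage-capacity probe: the RBD image backend shells out to `ceph df` as the cephx user `openstack` and parses the JSON (0.501s round trip here). A minimal sketch of the same call using plain subprocess, which oslo_concurrency.processutils.execute wraps; the JSON key layout below matches reef-era `ceph df --format=json` but is an assumption, not something shown in this log:

    import json
    import subprocess

    # Same command as the logged CMD, including the --id/--conf pair.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)
    # "stats"/"total_avail_bytes" is the cluster-wide free-space figure
    # (assumed key names for this Ceph release).
    print(stats["stats"]["total_avail_bytes"])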
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.312 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.336 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
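Reading the inventory dict above against placement's capacity rule, effective schedulable capacity per resource class is (total - reserved) * allocation_ratio: (8 - 0) * 4.0 = 32 VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. A one-screen sketch of that arithmetic, with the values copied from the log line and the rule stated as I understand placement's behavior:

    inventory = {
        # resource class: (total, reserved, allocation_ratio)
        "VCPU":      (8,    0,   4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB":   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        # placement treats (total - reserved) * allocation_ratio as the
        # ceiling that summed allocations may not exceed
        print(f"{rc}: {(total - reserved) * ratio:g}")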
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.339 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.340 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
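The Acquiring/acquired/released triple (waited 0.000s, held 0.733s) is oslo.concurrency's named-lock instrumentation around the resource-tracker update. A minimal sketch of the primitive, with the lock name taken from the log and the function body illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # Runs with the in-process "compute_resources" lock held;
        # lockutils emits the waited/held DEBUG lines seen above.
        ...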
Oct 02 20:19:21 compute-0 loving_feynman[474880]: {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     "0": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "devices": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "/dev/loop3"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             ],
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_name": "ceph_lv0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_size": "21470642176",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "name": "ceph_lv0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "tags": {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_name": "ceph",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.crush_device_class": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.encrypted": "0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_id": "0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.vdo": "0"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             },
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "vg_name": "ceph_vg0"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         }
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     ],
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     "1": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "devices": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "/dev/loop4"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             ],
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_name": "ceph_lv1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_size": "21470642176",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "name": "ceph_lv1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "tags": {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_name": "ceph",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.crush_device_class": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.encrypted": "0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_id": "1",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.vdo": "0"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             },
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "vg_name": "ceph_vg1"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         }
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     ],
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     "2": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "devices": [
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "/dev/loop5"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             ],
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_name": "ceph_lv2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_size": "21470642176",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "name": "ceph_lv2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "tags": {
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.cluster_name": "ceph",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.crush_device_class": "",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.encrypted": "0",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osd_id": "2",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:                 "ceph.vdo": "0"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             },
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "type": "block",
Oct 02 20:19:21 compute-0 loving_feynman[474880]:             "vg_name": "ceph_vg2"
Oct 02 20:19:21 compute-0 loving_feynman[474880]:         }
Oct 02 20:19:21 compute-0 loving_feynman[474880]:     ]
Oct 02 20:19:21 compute-0 loving_feynman[474880]: }
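The JSON printed by the loving_feynman container is `ceph-volume lvm list --format json` output: a map of OSD id to the logical volumes backing it, with the OSD binding recorded twice, once as the raw comma-separated lv_tags string and once pre-split under "tags". A hedged sketch that flattens it to osd_id plus device and re-parses lv_tags, assuming the blob has been saved to a file (the filename is illustrative):

    import json

    with open("lvm_list.json") as f:      # illustrative path, not from the log
        report = json.load(f)

    for osd_id, lvs in report.items():
        for lv in lvs:
            # lv_tags is "key=value,key=value,..."; the values in this
            # output contain no commas, so a plain split is safe here
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            assert tags == lv["tags"]     # the two encodings should agree
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"])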
Oct 02 20:19:21 compute-0 systemd[1]: libpod-c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed.scope: Deactivated successfully.
Oct 02 20:19:21 compute-0 podman[474864]: 2025-10-02 20:19:21.477255791 +0000 UTC m=+1.148057635 container died c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 20:19:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f2bd58d8fd3afef581cd3860a1832e57421305ec239d00b461bfcf54b261fcf-merged.mount: Deactivated successfully.
Oct 02 20:19:21 compute-0 podman[474864]: 2025-10-02 20:19:21.61022458 +0000 UTC m=+1.281026404 container remove c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_feynman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:19:21 compute-0 systemd[1]: libpod-conmon-c87c25d8b2e497433c2fed8f1a4008e8f5b7f6074d90baba9ac8c4962f8ee2ed.scope: Deactivated successfully.
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.657 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:21 compute-0 nova_compute[355794]: 2025-10-02 20:19:21.659 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:21 compute-0 sudo[474738]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:21 compute-0 podman[474915]: 2025-10-02 20:19:21.683589946 +0000 UTC m=+0.144657678 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Oct 02 20:19:21 compute-0 podman[474911]: 2025-10-02 20:19:21.699836355 +0000 UTC m=+0.162245273 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:19:21 compute-0 podman[474922]: 2025-10-02 20:19:21.701785136 +0000 UTC m=+0.148233883 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:19:21 compute-0 podman[474914]: 2025-10-02 20:19:21.707750693 +0000 UTC m=+0.159061058 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:19:21 compute-0 podman[474921]: 2025-10-02 20:19:21.742735697 +0000 UTC m=+0.164846361 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 20:19:21 compute-0 sudo[475001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:21 compute-0 sudo[475001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:21 compute-0 sudo[475001]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:21 compute-0 sudo[475044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:19:21 compute-0 sudo[475044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:21 compute-0 sudo[475044]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:21 compute-0 sudo[475069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:21 compute-0 sudo[475069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:21 compute-0 sudo[475069]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:22 compute-0 sudo[475094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:19:22 compute-0 sudo[475094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:22 compute-0 ceph-mon[191910]: pgmap v2239: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.722369025 +0000 UTC m=+0.110899497 container create 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.679820393 +0000 UTC m=+0.068350915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:22 compute-0 systemd[1]: Started libpod-conmon-44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963.scope.
Oct 02 20:19:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.90218832 +0000 UTC m=+0.290718832 container init 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.920465673 +0000 UTC m=+0.308996155 container start 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.926996955 +0000 UTC m=+0.315527487 container attach 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 20:19:22 compute-0 intelligent_austin[475172]: 167 167
Oct 02 20:19:22 compute-0 systemd[1]: libpod-44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963.scope: Deactivated successfully.
Oct 02 20:19:22 compute-0 podman[475157]: 2025-10-02 20:19:22.936969058 +0000 UTC m=+0.325499530 container died 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 20:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed837673f189233bc7690f4203f4b1bcdccd5e6bbd73874859c94ef7de1a458e-merged.mount: Deactivated successfully.
Oct 02 20:19:23 compute-0 podman[475157]: 2025-10-02 20:19:23.03368502 +0000 UTC m=+0.422215502 container remove 44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:19:23 compute-0 systemd[1]: libpod-conmon-44b55eb25e1f09df441ba87f58c6ac0a31af65b528c393dea1265bcc5cd89963.scope: Deactivated successfully.
Oct 02 20:19:23 compute-0 podman[475198]: 2025-10-02 20:19:23.400342735 +0000 UTC m=+0.123138550 container create d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 20:19:23 compute-0 podman[475198]: 2025-10-02 20:19:23.368358711 +0000 UTC m=+0.091154566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:19:23 compute-0 systemd[1]: Started libpod-conmon-d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420.scope.
Oct 02 20:19:23 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708ec3d07d9e9223cbf940c3f8a431a52e33ff122e7d34a043602389e9abb4d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708ec3d07d9e9223cbf940c3f8a431a52e33ff122e7d34a043602389e9abb4d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708ec3d07d9e9223cbf940c3f8a431a52e33ff122e7d34a043602389e9abb4d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708ec3d07d9e9223cbf940c3f8a431a52e33ff122e7d34a043602389e9abb4d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:19:23 compute-0 podman[475198]: 2025-10-02 20:19:23.553352313 +0000 UTC m=+0.276148178 container init d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:19:23 compute-0 podman[475198]: 2025-10-02 20:19:23.567611779 +0000 UTC m=+0.290407574 container start d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:19:23 compute-0 podman[475198]: 2025-10-02 20:19:23.576026921 +0000 UTC m=+0.298822726 container attach d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:19:23 compute-0 nova_compute[355794]: 2025-10-02 20:19:23.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]: {
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_id": 1,
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "type": "bluestore"
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     },
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_id": 2,
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "type": "bluestore"
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     },
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_id": 0,
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:         "type": "bluestore"
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]:     }
Oct 02 20:19:24 compute-0 zealous_ishizaka[475213]: }
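This second blob comes from the cephadm `ceph-volume ... raw list --format json` run logged at 20:19:22: the same three bluestore OSDs, but keyed by osd_uuid and read from the device labels on the /dev/mapper paths rather than from LVM tags. The two reports can be cross-checked on osd_fsid/osd_uuid; a small sketch under that assumption (function and variable names are illustrative):

    def cross_check(lvm_report, raw_report):
        """Pair each raw-listed OSD with its LVM view via the shared fsid."""
        lvm_by_fsid = {
            lv["tags"]["ceph.osd_fsid"]: lv
            for lvs in lvm_report.values()
            for lv in lvs
        }
        for osd_uuid, osd in raw_report.items():
            lv = lvm_by_fsid[osd_uuid]    # KeyError would mean the views disagree
            print(osd["osd_id"], osd["device"], "<->", lv["lv_path"])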
Oct 02 20:19:24 compute-0 systemd[1]: libpod-d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420.scope: Deactivated successfully.
Oct 02 20:19:24 compute-0 podman[475198]: 2025-10-02 20:19:24.953075077 +0000 UTC m=+1.675870912 container died d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:19:24 compute-0 systemd[1]: libpod-d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420.scope: Consumed 1.350s CPU time.
Oct 02 20:19:24 compute-0 nova_compute[355794]: 2025-10-02 20:19:24.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-708ec3d07d9e9223cbf940c3f8a431a52e33ff122e7d34a043602389e9abb4d4-merged.mount: Deactivated successfully.
Oct 02 20:19:25 compute-0 nova_compute[355794]: 2025-10-02 20:19:25.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:25 compute-0 podman[475198]: 2025-10-02 20:19:25.07028036 +0000 UTC m=+1.793076165 container remove d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 20:19:25 compute-0 systemd[1]: libpod-conmon-d233ff86a686274e452514512cf34e2ed42ebba390c4da62e233e63cf4790420.scope: Deactivated successfully.
Oct 02 20:19:25 compute-0 sudo[475094]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:19:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:19:25 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:25 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 77c8ebf5-3250-4bc5-abee-dc3e992a2a35 does not exist
Oct 02 20:19:25 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 02fd51ed-bdcf-428c-a258-4386b0663054 does not exist
Oct 02 20:19:25 compute-0 ceph-mon[191910]: pgmap v2240: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:25 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:25 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:19:25 compute-0 sudo[475258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:19:25 compute-0 sudo[475258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:25 compute-0 sudo[475258]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:25 compute-0 sudo[475283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:19:25 compute-0 sudo[475283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:19:25 compute-0 sudo[475283]: pam_unix(sudo:session): session closed for user root
Oct 02 20:19:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:27 compute-0 ceph-mon[191910]: pgmap v2241: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:29 compute-0 ceph-mon[191910]: pgmap v2242: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:29 compute-0 nova_compute[355794]: 2025-10-02 20:19:29.578 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:19:29 compute-0 podman[157186]: time="2025-10-02T20:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:19:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:19:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9562 "" "Go-http-client/1.1"
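The two GETs show the podman system service answering its libpod REST API on the local socket, which is how monitoring agents enumerate containers and pull stats without forking the CLI. A standard-library sketch of the first request; the socket path /run/podman/podman.sock is an assumption, since the log does not show where the service is listening:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")   # host is only used for the Host header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")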
Oct 02 20:19:29 compute-0 nova_compute[355794]: 2025-10-02 20:19:29.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:30 compute-0 nova_compute[355794]: 2025-10-02 20:19:30.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:31 compute-0 ceph-mon[191910]: pgmap v2243: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:31 compute-0 openstack_network_exporter[372736]: ERROR   20:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:19:31 compute-0 openstack_network_exporter[372736]: ERROR   20:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:19:31 compute-0 openstack_network_exporter[372736]: ERROR   20:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:19:31 compute-0 openstack_network_exporter[372736]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:19:31 compute-0 openstack_network_exporter[372736]: ERROR   20:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:19:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:19:32.338 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:19:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:19:32.341 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:19:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:19:32.343 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:19:32 compute-0 podman[475308]: 2025-10-02 20:19:32.744441958 +0000 UTC m=+0.156906981 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 20:19:33 compute-0 ceph-mon[191910]: pgmap v2244: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:19:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:19:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:34 compute-0 nova_compute[355794]: 2025-10-02 20:19:34.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:35 compute-0 nova_compute[355794]: 2025-10-02 20:19:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:35 compute-0 ceph-mon[191910]: pgmap v2245: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:36 compute-0 ceph-mon[191910]: pgmap v2246: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:39 compute-0 ceph-mon[191910]: pgmap v2247: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:40 compute-0 nova_compute[355794]: 2025-10-02 20:19:40.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:40 compute-0 nova_compute[355794]: 2025-10-02 20:19:40.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:41 compute-0 ceph-mon[191910]: pgmap v2248: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:42 compute-0 podman[475329]: 2025-10-02 20:19:42.711153649 +0000 UTC m=+0.127481845 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:19:42 compute-0 podman[475330]: 2025-10-02 20:19:42.735135232 +0000 UTC m=+0.144355000 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 20:19:43 compute-0 ceph-mon[191910]: pgmap v2249: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:45 compute-0 nova_compute[355794]: 2025-10-02 20:19:45.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:45 compute-0 nova_compute[355794]: 2025-10-02 20:19:45.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:45 compute-0 ceph-mon[191910]: pgmap v2250: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:47 compute-0 ceph-mon[191910]: pgmap v2251: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:48 compute-0 podman[475370]: 2025-10-02 20:19:48.699039689 +0000 UTC m=+0.116220168 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:19:48 compute-0 podman[475371]: 2025-10-02 20:19:48.714331382 +0000 UTC m=+0.132090676 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Oct 02 20:19:49 compute-0 ceph-mon[191910]: pgmap v2252: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:50 compute-0 nova_compute[355794]: 2025-10-02 20:19:50.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:50 compute-0 nova_compute[355794]: 2025-10-02 20:19:50.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.295884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390296034, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1877, "num_deletes": 251, "total_data_size": 3156033, "memory_usage": 3207376, "flush_reason": "Manual Compaction"}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 02 20:19:50 compute-0 ceph-mon[191910]: pgmap v2253: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390330084, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3092795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44432, "largest_seqno": 46308, "table_properties": {"data_size": 3084107, "index_size": 5438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17251, "raw_average_key_size": 20, "raw_value_size": 3066937, "raw_average_value_size": 3557, "num_data_blocks": 242, "num_entries": 862, "num_filter_entries": 862, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436182, "oldest_key_time": 1759436182, "file_creation_time": 1759436390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 34303 microseconds, and 16627 cpu microseconds.
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.330196) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3092795 bytes OK
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.330236) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.337946) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.337984) EVENT_LOG_v1 {"time_micros": 1759436390337974, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.338020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3148091, prev total WAL file size 3148091, number of live WAL files 2.
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.340273) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3020KB)], [107(6438KB)]
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390340495, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9686239, "oldest_snapshot_seqno": -1}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6093 keys, 7936587 bytes, temperature: kUnknown
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390455942, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7936587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7898542, "index_size": 21679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 158561, "raw_average_key_size": 26, "raw_value_size": 7790957, "raw_average_value_size": 1278, "num_data_blocks": 859, "num_entries": 6093, "num_filter_entries": 6093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.456349) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7936587 bytes
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.477000) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.8 rd, 68.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 6.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6607, records dropped: 514 output_compression: NoCompression
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.477084) EVENT_LOG_v1 {"time_micros": 1759436390477052, "job": 64, "event": "compaction_finished", "compaction_time_micros": 115548, "compaction_time_cpu_micros": 37237, "output_level": 6, "num_output_files": 1, "total_output_size": 7936587, "num_input_records": 6607, "num_output_records": 6093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390478751, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436390481195, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.339936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.481468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.481477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.481480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.481483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:19:50.481489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:19:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:52 compute-0 podman[475414]: 2025-10-02 20:19:52.698322784 +0000 UTC m=+0.099269480 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 20:19:52 compute-0 podman[475413]: 2025-10-02 20:19:52.709619642 +0000 UTC m=+0.108680998 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:19:52 compute-0 podman[475412]: 2025-10-02 20:19:52.719325759 +0000 UTC m=+0.124551488 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:19:52 compute-0 podman[475420]: 2025-10-02 20:19:52.7447753 +0000 UTC m=+0.124659640 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:19:52 compute-0 podman[475415]: 2025-10-02 20:19:52.785960967 +0000 UTC m=+0.174405723 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 20:19:53 compute-0 ceph-mon[191910]: pgmap v2254: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:55 compute-0 nova_compute[355794]: 2025-10-02 20:19:55.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:55 compute-0 nova_compute[355794]: 2025-10-02 20:19:55.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:19:55 compute-0 ceph-mon[191910]: pgmap v2255: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:19:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:57 compute-0 ceph-mon[191910]: pgmap v2256: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:59 compute-0 ceph-mon[191910]: pgmap v2257: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:19:59 compute-0 podman[157186]: time="2025-10-02T20:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:19:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:19:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9571 "" "Go-http-client/1.1"
Oct 02 20:20:00 compute-0 nova_compute[355794]: 2025-10-02 20:20:00.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:00 compute-0 nova_compute[355794]: 2025-10-02 20:20:00.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:01 compute-0 ceph-mon[191910]: pgmap v2258: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:01 compute-0 openstack_network_exporter[372736]: ERROR   20:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:20:01 compute-0 openstack_network_exporter[372736]: ERROR   20:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:20:01 compute-0 openstack_network_exporter[372736]: ERROR   20:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:20:01 compute-0 openstack_network_exporter[372736]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:20:01 compute-0 openstack_network_exporter[372736]: ERROR   20:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:20:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:03 compute-0 ceph-mon[191910]: pgmap v2259: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:03 compute-0 podman[475511]: 2025-10-02 20:20:03.704625521 +0000 UTC m=+0.125632816 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:20:03
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'vms', 'default.rgw.log']
Oct 02 20:20:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:04 compute-0 ceph-mon[191910]: pgmap v2260: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:20:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:20:05 compute-0 nova_compute[355794]: 2025-10-02 20:20:05.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:05 compute-0 nova_compute[355794]: 2025-10-02 20:20:05.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:07 compute-0 ceph-mon[191910]: pgmap v2261: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:09 compute-0 ceph-mon[191910]: pgmap v2262: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:10 compute-0 nova_compute[355794]: 2025-10-02 20:20:10.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:10 compute-0 nova_compute[355794]: 2025-10-02 20:20:10.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:11 compute-0 ceph-mon[191910]: pgmap v2263: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:13 compute-0 ceph-mon[191910]: pgmap v2264: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020729298935338934 of space, bias 1.0, pg target 0.6218789680601681 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:20:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:20:13 compute-0 podman[475533]: 2025-10-02 20:20:13.715631991 +0000 UTC m=+0.126680374 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct 02 20:20:13 compute-0 podman[475532]: 2025-10-02 20:20:13.71901393 +0000 UTC m=+0.134422408 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:20:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:15 compute-0 nova_compute[355794]: 2025-10-02 20:20:15.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:15 compute-0 nova_compute[355794]: 2025-10-02 20:20:15.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:15 compute-0 ceph-mon[191910]: pgmap v2265: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:16 compute-0 ceph-mon[191910]: pgmap v2266: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:16 compute-0 nova_compute[355794]: 2025-10-02 20:20:16.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:20:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2997 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 347 writes, 721 keys, 347 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                            Interval WAL: 347 writes, 168 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:20:17 compute-0 nova_compute[355794]: 2025-10-02 20:20:17.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:17 compute-0 nova_compute[355794]: 2025-10-02 20:20:17.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:20:18 compute-0 nova_compute[355794]: 2025-10-02 20:20:18.060 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:20:18 compute-0 nova_compute[355794]: 2025-10-02 20:20:18.061 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:20:18 compute-0 nova_compute[355794]: 2025-10-02 20:20:18.061 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:20:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:19 compute-0 ceph-mon[191910]: pgmap v2267: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.419 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [{"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.444 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-f50e6a55-f3b5-402b-91b2-12d34386f656" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.445 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.446 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.447 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.447 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.448 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.482 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.483 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.484 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.485 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:20:19 compute-0 nova_compute[355794]: 2025-10-02 20:20:19.486 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:20:19 compute-0 podman[475579]: 2025-10-02 20:20:19.72997557 +0000 UTC m=+0.124707360 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:20:19 compute-0 podman[475578]: 2025-10-02 20:20:19.748726935 +0000 UTC m=+0.153381137 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 20:20:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:20:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971064728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.025 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.159 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.159 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.160 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.165 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.166 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.172 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.172 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:20:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:20:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2640291892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:20:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:20:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2640291892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:20:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1971064728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:20:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2640291892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:20:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2640291892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.778 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.779 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3231MB free_disk=59.863929748535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.780 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.780 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:20:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.879 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.880 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.880 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.880 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.881 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:20:20 compute-0 nova_compute[355794]: 2025-10-02 20:20:20.977 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:20:21 compute-0 ceph-mon[191910]: pgmap v2268: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:20:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790234896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:20:21 compute-0 nova_compute[355794]: 2025-10-02 20:20:21.508 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:20:21 compute-0 nova_compute[355794]: 2025-10-02 20:20:21.521 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:20:21 compute-0 nova_compute[355794]: 2025-10-02 20:20:21.539 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:20:21 compute-0 nova_compute[355794]: 2025-10-02 20:20:21.542 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:20:21 compute-0 nova_compute[355794]: 2025-10-02 20:20:21.542 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:20:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3790234896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:20:22 compute-0 nova_compute[355794]: 2025-10-02 20:20:22.671 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:22 compute-0 nova_compute[355794]: 2025-10-02 20:20:22.672 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:23 compute-0 ceph-mon[191910]: pgmap v2269: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:23 compute-0 podman[475659]: 2025-10-02 20:20:23.683046343 +0000 UTC m=+0.104095798 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 20:20:23 compute-0 podman[475661]: 2025-10-02 20:20:23.689007991 +0000 UTC m=+0.109242164 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:20:23 compute-0 podman[475663]: 2025-10-02 20:20:23.70150906 +0000 UTC m=+0.104453247 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:20:23 compute-0 podman[475660]: 2025-10-02 20:20:23.724737563 +0000 UTC m=+0.139819140 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:20:23 compute-0 podman[475662]: 2025-10-02 20:20:23.744621778 +0000 UTC m=+0.148888860 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 20:20:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:20:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2993 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 565 writes, 1917 keys, 565 commit groups, 1.0 writes per commit group, ingest: 2.62 MB, 0.00 MB/s
                                            Interval WAL: 565 writes, 214 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:20:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:25 compute-0 nova_compute[355794]: 2025-10-02 20:20:25.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:25 compute-0 nova_compute[355794]: 2025-10-02 20:20:25.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:25 compute-0 ceph-mon[191910]: pgmap v2270: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:25 compute-0 nova_compute[355794]: 2025-10-02 20:20:25.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:25 compute-0 sudo[475756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:25 compute-0 sudo[475756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:25 compute-0 sudo[475756]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:25 compute-0 sudo[475781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:20:25 compute-0 sudo[475781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:25 compute-0 sudo[475781]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:25 compute-0 sudo[475806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:25 compute-0 sudo[475806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:25 compute-0 sudo[475806]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:26 compute-0 sudo[475831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:20:26 compute-0 sudo[475831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:26 compute-0 ceph-mon[191910]: pgmap v2271: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:26 compute-0 sudo[475831]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c2ec3aa4-1f3b-4b57-b933-7a9711b968f3 does not exist
Oct 02 20:20:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 660f4bd5-eef2-4e85-bdf6-297bc0643101 does not exist
Oct 02 20:20:26 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a6da6f5f-8556-47db-83d4-790c5584343c does not exist
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:20:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:20:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:20:27 compute-0 sudo[475886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:27 compute-0 sudo[475886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:27 compute-0 sudo[475886]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:27 compute-0 sudo[475911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:20:27 compute-0 sudo[475911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:27 compute-0 sudo[475911]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:20:27 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:20:27 compute-0 sudo[475936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:27 compute-0 sudo[475936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:27 compute-0 sudo[475936]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:27 compute-0 sudo[475961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:20:27 compute-0 sudo[475961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.17366666 +0000 UTC m=+0.075480833 container create 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.144218763 +0000 UTC m=+0.046032956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:28 compute-0 systemd[1]: Started libpod-conmon-5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d.scope.
Oct 02 20:20:28 compute-0 ceph-mon[191910]: pgmap v2272: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.365033529 +0000 UTC m=+0.266847762 container init 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.385554051 +0000 UTC m=+0.287368254 container start 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.393507561 +0000 UTC m=+0.295321814 container attach 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:20:28 compute-0 nostalgic_tharp[476040]: 167 167
Oct 02 20:20:28 compute-0 systemd[1]: libpod-5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d.scope: Deactivated successfully.
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.402620381 +0000 UTC m=+0.304434584 container died 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66a1eb90f2cf4a7c4f682f1b59f1f5567243e90dcebaa8b8da1290cacd72686-merged.mount: Deactivated successfully.
Oct 02 20:20:28 compute-0 podman[476025]: 2025-10-02 20:20:28.488881067 +0000 UTC m=+0.390695260 container remove 5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:20:28 compute-0 systemd[1]: libpod-conmon-5cbff3a38813cfed68cb9ff1be38d89de8be4e5b71b69226204ed759f4b5a57d.scope: Deactivated successfully.
Oct 02 20:20:28 compute-0 nova_compute[355794]: 2025-10-02 20:20:28.571 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:28 compute-0 podman[476065]: 2025-10-02 20:20:28.815004083 +0000 UTC m=+0.096083157 container create cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:20:28 compute-0 podman[476065]: 2025-10-02 20:20:28.77964249 +0000 UTC m=+0.060721654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:28 compute-0 systemd[1]: Started libpod-conmon-cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34.scope.
Oct 02 20:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
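
Each of these warnings fires as podman bind-mounts host paths into the fresh container rootfs: the backing XFS filesystem was created without the bigtime feature, so inode timestamps cap at 2038-01-19 (0x7fffffff). A minimal check for a given mount, assuming xfsprogs' xfs_info is on PATH and prints the bigtime flag (recent releases do):

    # check_bigtime.py - does this XFS mount support post-2038 timestamps?
    import subprocess

    def xfs_bigtime(mountpoint: str) -> bool:
        # "bigtime=0" in xfs_info output corresponds to the kernel's
        # "supports timestamps until 2038" warning above
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in out

    print(xfs_bigtime("/var/lib/containers"))
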
Oct 02 20:20:28 compute-0 podman[476065]: 2025-10-02 20:20:28.986698493 +0000 UTC m=+0.267777657 container init cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:20:29 compute-0 podman[476065]: 2025-10-02 20:20:29.013542792 +0000 UTC m=+0.294621906 container start cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 02 20:20:29 compute-0 podman[476065]: 2025-10-02 20:20:29.020078204 +0000 UTC m=+0.301157278 container attach cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:20:29 compute-0 nova_compute[355794]: 2025-10-02 20:20:29.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:20:29 compute-0 podman[157186]: time="2025-10-02T20:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:20:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49207 "" "Go-http-client/1.1"
Oct 02 20:20:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9975 "" "Go-http-client/1.1"
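
The two GETs above are a Go client polling podman[157186], which is serving the libpod REST API on a local socket: one call lists all containers, the other samples per-container stats. Roughly the same listing via the CLI, a sketch assuming podman 4.x on PATH:

    # list_containers.py - CLI analogue of GET /v4.9.3/libpod/containers/json?all=true
    import json, subprocess

    out = subprocess.run(["podman", "ps", "--all", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for c in json.loads(out):
        print(c["Id"][:12], c["Image"], c["State"])
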
Oct 02 20:20:30 compute-0 nova_compute[355794]: 2025-10-02 20:20:30.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:30 compute-0 nova_compute[355794]: 2025-10-02 20:20:30.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:30 compute-0 flamboyant_yonath[476081]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:20:30 compute-0 flamboyant_yonath[476081]: --> relative data size: 1.0
Oct 02 20:20:30 compute-0 flamboyant_yonath[476081]: --> All data devices are unavailable
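
flamboyant_yonath is ceph-volume evaluating the OSD drive group: three LVM data devices, no physical ones, all reported unavailable. That is expected here, because each LV already carries a BlueStore OSD (see the lvm list JSON printed at 20:20:33 below). A sketch of that availability test over the parsed JSON; available_lvs and lvm_list are illustrative names, not ceph-volume's own code:

    # available_lvs.py - an LV whose tags already carry ceph.osd_id is consumed,
    # hence "unavailable" for a new OSD (a sketch, not ceph-volume's logic verbatim)
    def available_lvs(lvm_list: dict) -> list[str]:
        free = []
        for entries in lvm_list.values():
            for lv in entries:
                if "ceph.osd_id" not in lv.get("tags", {}):
                    free.append(lv["lv_path"])
        return free

    # with the dump logged below, every LV has an osd_id, so this returns []
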
Oct 02 20:20:30 compute-0 systemd[1]: libpod-cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34.scope: Deactivated successfully.
Oct 02 20:20:30 compute-0 systemd[1]: libpod-cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34.scope: Consumed 1.317s CPU time.
Oct 02 20:20:30 compute-0 podman[476110]: 2025-10-02 20:20:30.529235575 +0000 UTC m=+0.061090602 container died cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c410b93d9a665e85b30e2f4f582e11afec7486d93eac4546489bdf93d9023a6-merged.mount: Deactivated successfully.
Oct 02 20:20:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:20:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9063 writes, 35K keys, 9063 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9063 writes, 2265 syncs, 4.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 366 writes, 1012 keys, 366 commit groups, 1.0 writes per commit group, ingest: 1.01 MB, 0.00 MB/s
                                            Interval WAL: 366 writes, 160 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
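
BlueStore's embedded RocksDB inside ceph-osd dumps these counters every 600 s (the "interval"). The blob is plain text, so pulling figures back out takes a couple of regexes; a sketch over the two cumulative lines above:

    # parse_rocksdb_stats.py - extract write counters from a "DB Stats" dump
    import re

    stats = (
        "Cumulative writes: 9063 writes, 35K keys, 9063 commit groups, "
        "1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s\n"
        "Cumulative WAL: 9063 writes, 2265 syncs, 4.00 writes per sync, "
        "written: 0.03 GB, 0.01 MB/s"
    )

    w = re.search(r"Cumulative writes: (\d+) writes.*?ingest: ([\d.]+) GB", stats)
    s = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", stats)
    writes, ingest_gb = int(w.group(1)), float(w.group(2))
    wal_writes, syncs = int(s.group(1)), int(s.group(2))
    print(f"{writes} writes, {ingest_gb} GB ingested, "
          f"{wal_writes / syncs:.2f} WAL writes per sync")  # 9063/2265 -> 4.00
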
Oct 02 20:20:30 compute-0 podman[476110]: 2025-10-02 20:20:30.644944389 +0000 UTC m=+0.176799416 container remove cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 20:20:30 compute-0 systemd[1]: libpod-conmon-cb5ce8d101f03514effa9c66f3181bff622d15e58a5272ea030f7b85e2ca8b34.scope: Deactivated successfully.
Oct 02 20:20:30 compute-0 sudo[475961]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
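
_set_new_cache_sizes is the monitor's cache autotuner dividing its memory budget between the incremental-osdmap, full-osdmap, and key-value (RocksDB) caches. The three allocations quoted fit just under the cache_size budget:

    # mon_cache_split.py - sanity-check the allocation figures logged above
    cache_size = 1_020_054_731   # total autotuned budget, bytes
    inc_alloc  = 348_127_232     # incremental osdmap cache
    full_alloc = 348_127_232     # full osdmap cache
    kv_alloc   = 318_767_104     # rocksdb cache

    used = inc_alloc + full_alloc + kv_alloc
    assert used <= cache_size    # 1_015_021_568 <= 1_020_054_731
    print(f"{used} of {cache_size} bytes allocated ({used / cache_size:.1%})")  # 99.5%
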
Oct 02 20:20:30 compute-0 sudo[476124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:30 compute-0 sudo[476124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:30 compute-0 sudo[476124]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:31 compute-0 sudo[476149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:20:31 compute-0 sudo[476149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:31 compute-0 sudo[476149]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:31 compute-0 ceph-mon[191910]: pgmap v2273: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:31 compute-0 sudo[476174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:31 compute-0 sudo[476174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:31 compute-0 sudo[476174]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:31 compute-0 sudo[476199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:20:31 compute-0 sudo[476199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:31 compute-0 openstack_network_exporter[372736]: ERROR   20:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:20:31 compute-0 openstack_network_exporter[372736]: ERROR   20:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:20:31 compute-0 openstack_network_exporter[372736]: ERROR   20:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:20:31 compute-0 openstack_network_exporter[372736]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:20:31 compute-0 openstack_network_exporter[372736]: ERROR   20:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:20:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.045127126 +0000 UTC m=+0.081863921 container create 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.019595202 +0000 UTC m=+0.056331977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:32 compute-0 systemd[1]: Started libpod-conmon-5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3.scope.
Oct 02 20:20:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.210250413 +0000 UTC m=+0.246987208 container init 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.23211104 +0000 UTC m=+0.268847835 container start 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.239240888 +0000 UTC m=+0.275977653 container attach 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:20:32 compute-0 musing_lewin[476279]: 167 167
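
The two-number one-liners from these short-lived containers (musing_lewin here, busy_buck at 20:20:35 below) are cephadm probing the image for the uid and gid that Ceph daemons run as; 167 is the ceph account inside the image. Roughly equivalent to the sketch below, though the exact command cephadm runs is an assumption:

    # ceph_uid_gid.py - reproduce the "167 167" probe output above
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = map(int, out.split())  # expect 167 167
    print(uid, gid)
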
Oct 02 20:20:32 compute-0 systemd[1]: libpod-5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3.scope: Deactivated successfully.
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.250131185 +0000 UTC m=+0.286867980 container died 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-347caa45563219cd66c93013754f7f3993341af496d58812681a5724f90449c5-merged.mount: Deactivated successfully.
Oct 02 20:20:32 compute-0 podman[476262]: 2025-10-02 20:20:32.331886442 +0000 UTC m=+0.368623207 container remove 5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 02 20:20:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:20:32.339 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:20:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:20:32.343 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:20:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:20:32.346 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:20:32 compute-0 systemd[1]: libpod-conmon-5d83de2cfb37aaa4c4d66a7500b7c5ab1b45d1bd45dca2b19f02b8d5a473c7e3.scope: Deactivated successfully.
Oct 02 20:20:32 compute-0 podman[476303]: 2025-10-02 20:20:32.638157174 +0000 UTC m=+0.088830075 container create 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:20:32 compute-0 podman[476303]: 2025-10-02 20:20:32.600576042 +0000 UTC m=+0.051248993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:32 compute-0 systemd[1]: Started libpod-conmon-0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929.scope.
Oct 02 20:20:32 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3c4d1667f820bb59ae907f333be40a6b9521b7df8d90e9a333caa917ff8e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3c4d1667f820bb59ae907f333be40a6b9521b7df8d90e9a333caa917ff8e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3c4d1667f820bb59ae907f333be40a6b9521b7df8d90e9a333caa917ff8e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a3c4d1667f820bb59ae907f333be40a6b9521b7df8d90e9a333caa917ff8e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:32 compute-0 podman[476303]: 2025-10-02 20:20:32.827832509 +0000 UTC m=+0.278505470 container init 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:20:32 compute-0 podman[476303]: 2025-10-02 20:20:32.86084233 +0000 UTC m=+0.311515241 container start 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:20:32 compute-0 podman[476303]: 2025-10-02 20:20:32.866693454 +0000 UTC m=+0.317366535 container attach 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 02 20:20:33 compute-0 ceph-mon[191910]: pgmap v2274: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:20:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]: {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     "0": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "devices": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "/dev/loop3"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             ],
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_name": "ceph_lv0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_size": "21470642176",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "name": "ceph_lv0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "tags": {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_name": "ceph",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.crush_device_class": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.encrypted": "0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_id": "0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.vdo": "0"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             },
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "vg_name": "ceph_vg0"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         }
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     ],
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     "1": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "devices": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "/dev/loop4"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             ],
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_name": "ceph_lv1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_size": "21470642176",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "name": "ceph_lv1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "tags": {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_name": "ceph",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.crush_device_class": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.encrypted": "0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_id": "1",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.vdo": "0"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             },
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "vg_name": "ceph_vg1"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         }
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     ],
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     "2": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "devices": [
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "/dev/loop5"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             ],
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_name": "ceph_lv2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_size": "21470642176",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "name": "ceph_lv2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "tags": {
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.cluster_name": "ceph",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.crush_device_class": "",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.encrypted": "0",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osd_id": "2",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:                 "ceph.vdo": "0"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             },
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "type": "block",
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:             "vg_name": "ceph_vg2"
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:         }
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]:     ]
Oct 02 20:20:33 compute-0 inspiring_archimedes[476319]: }
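
The JSON above is the output of `ceph-volume lvm list --format json` (the cephadm call sudo'd at 20:20:31): one key per OSD id, each entry naming the backing LV and its ceph.* tags. Flattening it into an OSD-to-device map, a sketch:

    # osd_map.py - flatten the `ceph-volume lvm list --format json` dump above
    import json

    def osd_devices(lvm_list_json: str) -> dict[int, str]:
        mapping = {}
        for osd_id, entries in json.loads(lvm_list_json).items():
            for lv in entries:
                if lv["type"] == "block":
                    mapping[int(osd_id)] = lv["lv_path"]
        return mapping

    # for the dump above: {0: '/dev/ceph_vg0/ceph_lv0',
    #                      1: '/dev/ceph_vg1/ceph_lv1',
    #                      2: '/dev/ceph_vg2/ceph_lv2'}
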
Oct 02 20:20:33 compute-0 systemd[1]: libpod-0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929.scope: Deactivated successfully.
Oct 02 20:20:33 compute-0 podman[476303]: 2025-10-02 20:20:33.754563283 +0000 UTC m=+1.205236194 container died 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a3c4d1667f820bb59ae907f333be40a6b9521b7df8d90e9a333caa917ff8e37-merged.mount: Deactivated successfully.
Oct 02 20:20:33 compute-0 podman[476303]: 2025-10-02 20:20:33.853520404 +0000 UTC m=+1.304193305 container remove 0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:20:33 compute-0 systemd[1]: libpod-conmon-0d27dec0e8d151b12d627b3466636ab2de3d433756ea94f9944ca949267be929.scope: Deactivated successfully.
Oct 02 20:20:33 compute-0 sudo[476199]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:33 compute-0 podman[476329]: 2025-10-02 20:20:33.960912988 +0000 UTC m=+0.165358905 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
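
The health_status=healthy event carries the container's full edpm_ansible configuration in its config_data label. The value is a Python dict literal (single quotes), not JSON, so ast.literal_eval parses it where json.loads would choke; a sketch using just the healthcheck portion of the line above:

    # healthcheck_cmd.py - pull the healthcheck test command out of config_data
    import ast

    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)   # literal_eval handles single quotes
    print(cfg["healthcheck"]["test"])     # -> /openstack/healthcheck
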
Oct 02 20:20:34 compute-0 sudo[476360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:34 compute-0 sudo[476360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:34 compute-0 sudo[476360]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:34 compute-0 sudo[476387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:20:34 compute-0 sudo[476387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:34 compute-0 sudo[476387]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:34 compute-0 sudo[476412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:34 compute-0 sudo[476412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:34 compute-0 sudo[476412]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:34 compute-0 sudo[476437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:20:34 compute-0 sudo[476437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.041682805 +0000 UTC m=+0.084578922 container create 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:20:35 compute-0 nova_compute[355794]: 2025-10-02 20:20:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:35 compute-0 nova_compute[355794]: 2025-10-02 20:20:35.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.014168509 +0000 UTC m=+0.057064646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:35 compute-0 systemd[1]: Started libpod-conmon-23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e.scope.
Oct 02 20:20:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.218971713 +0000 UTC m=+0.261867830 container init 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.237921063 +0000 UTC m=+0.280817150 container start 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.243745807 +0000 UTC m=+0.286641974 container attach 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:20:35 compute-0 busy_buck[476515]: 167 167
Oct 02 20:20:35 compute-0 systemd[1]: libpod-23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e.scope: Deactivated successfully.
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.251538673 +0000 UTC m=+0.294434810 container died 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 20:20:35 compute-0 ceph-mon[191910]: pgmap v2275: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-062473a842e881e524e1e5ea7ba2d1ec4f301162692d62d436d70d01c4eb4e6b-merged.mount: Deactivated successfully.
Oct 02 20:20:35 compute-0 podman[476499]: 2025-10-02 20:20:35.335600141 +0000 UTC m=+0.378496228 container remove 23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_buck, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:20:35 compute-0 systemd[1]: libpod-conmon-23cb00bc0e665600428ed763454a9cb378503af6a4721fa98726c970c939059e.scope: Deactivated successfully.
Oct 02 20:20:35 compute-0 podman[476537]: 2025-10-02 20:20:35.673909918 +0000 UTC m=+0.111607196 container create 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 20:20:35 compute-0 podman[476537]: 2025-10-02 20:20:35.629040914 +0000 UTC m=+0.066738232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:20:35 compute-0 systemd[1]: Started libpod-conmon-0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c.scope.
Oct 02 20:20:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:35 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a11d16a9e2e4f9255a0487f6d358767340fa9b64ff277e70cb03a0285e4164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a11d16a9e2e4f9255a0487f6d358767340fa9b64ff277e70cb03a0285e4164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a11d16a9e2e4f9255a0487f6d358767340fa9b64ff277e70cb03a0285e4164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a11d16a9e2e4f9255a0487f6d358767340fa9b64ff277e70cb03a0285e4164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:20:35 compute-0 podman[476537]: 2025-10-02 20:20:35.859832254 +0000 UTC m=+0.297529532 container init 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:20:35 compute-0 podman[476537]: 2025-10-02 20:20:35.885013138 +0000 UTC m=+0.322710386 container start 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 20:20:35 compute-0 podman[476537]: 2025-10-02 20:20:35.891587302 +0000 UTC m=+0.329284580 container attach 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:20:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]: {
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_id": 1,
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "type": "bluestore"
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     },
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_id": 2,
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "type": "bluestore"
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     },
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_id": 0,
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:         "type": "bluestore"
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]:     }
Oct 02 20:20:37 compute-0 zealous_leavitt[476553]: }
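
zealous_leavitt is the companion `ceph-volume raw list --format json` run (sudo'd at 20:20:34), keyed by OSD uuid rather than id and reporting the device-mapper path of each BlueStore device. The two listings can be cross-checked against each other; a sketch, with crosscheck an illustrative helper:

    # crosscheck.py - verify `raw list` and `lvm list` agree on osd_id <-> uuid
    import json

    def crosscheck(raw_json: str, lvm_json: str) -> None:
        raw = json.loads(raw_json)
        lvm = json.loads(lvm_json)
        for uuid, info in raw.items():
            tags = lvm[str(info["osd_id"])][0]["tags"]
            assert tags["ceph.osd_fsid"] == uuid
            assert info["type"] == "bluestore"
        print(f"{len(raw)} OSDs consistent")

    # for the two dumps above this prints: 3 OSDs consistent
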
Oct 02 20:20:37 compute-0 systemd[1]: libpod-0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c.scope: Deactivated successfully.
Oct 02 20:20:37 compute-0 podman[476537]: 2025-10-02 20:20:37.193232228 +0000 UTC m=+1.630929516 container died 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:20:37 compute-0 systemd[1]: libpod-0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c.scope: Consumed 1.290s CPU time.
Oct 02 20:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a11d16a9e2e4f9255a0487f6d358767340fa9b64ff277e70cb03a0285e4164-merged.mount: Deactivated successfully.
Oct 02 20:20:37 compute-0 ceph-mon[191910]: pgmap v2276: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:37 compute-0 podman[476537]: 2025-10-02 20:20:37.298709792 +0000 UTC m=+1.736407040 container remove 0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 20:20:37 compute-0 systemd[1]: libpod-conmon-0974ed2ef3c2dab483559c0f6ede53fa8b6f7e3b9cf56441ac02e8ae0fe8c34c.scope: Deactivated successfully.
Oct 02 20:20:37 compute-0 sudo[476437]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:20:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:20:37 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 95690f9c-c205-48d7-a155-cfaa841d8a89 does not exist
Oct 02 20:20:37 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f0055bc0-18f4-4925-80ca-0ab22bb0ce5b does not exist
Oct 02 20:20:37 compute-0 sudo[476598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:20:37 compute-0 sudo[476598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:37 compute-0 sudo[476598]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:37 compute-0 sudo[476623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:20:37 compute-0 sudo[476623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:20:37 compute-0 sudo[476623]: pam_unix(sudo:session): session closed for user root
Oct 02 20:20:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:38 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:20:38 compute-0 ceph-mon[191910]: pgmap v2277: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:40 compute-0 nova_compute[355794]: 2025-10-02 20:20:40.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:40 compute-0 nova_compute[355794]: 2025-10-02 20:20:40.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:41 compute-0 ceph-mon[191910]: pgmap v2278: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:43 compute-0 ceph-mon[191910]: pgmap v2279: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:44 compute-0 podman[476648]: 2025-10-02 20:20:44.748183803 +0000 UTC m=+0.148387336 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:20:44 compute-0 podman[476649]: 2025-10-02 20:20:44.753527504 +0000 UTC m=+0.150860451 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Oct 02 20:20:45 compute-0 nova_compute[355794]: 2025-10-02 20:20:45.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:45 compute-0 nova_compute[355794]: 2025-10-02 20:20:45.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:45 compute-0 ceph-mon[191910]: pgmap v2280: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:46 compute-0 ceph-mon[191910]: pgmap v2281: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:49 compute-0 ceph-mon[191910]: pgmap v2282: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:50 compute-0 nova_compute[355794]: 2025-10-02 20:20:50.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:50 compute-0 nova_compute[355794]: 2025-10-02 20:20:50.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:50 compute-0 podman[476691]: 2025-10-02 20:20:50.727093298 +0000 UTC m=+0.137624482 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:20:50 compute-0 podman[476692]: 2025-10-02 20:20:50.75217281 +0000 UTC m=+0.153187483 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4)
Oct 02 20:20:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:51 compute-0 ceph-mon[191910]: pgmap v2283: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
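The recurring `pgmap vNNNN` lines from ceph-mgr and ceph-mon carry the same structured summary: pgmap version, PG count, PG states, and data/used/avail figures. A hedged sketch of pulling those fields out with a regular expression; the pattern is written against the exact single-state lines shown here (`321 active+clean`) and would need loosening for clusters reporting several PG state buckets:

```python
import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

# One of the journal lines above, verbatim.
line = ("Oct 02 20:20:51 compute-0 ceph-mon[191910]: pgmap v2283: 321 pgs: "
        "321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail")

m = PGMAP_RE.search(line)
if m:
    # -> pgmap v2283: 321 pgs, 426 MiB used of 60 GiB
    print(f"pgmap v{m['version']}: {m['pgs']} pgs, "
          f"{m['used']} used of {m['total']}")
```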
Oct 02 20:20:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:53 compute-0 ceph-mon[191910]: pgmap v2284: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:54 compute-0 podman[476730]: 2025-10-02 20:20:54.695699328 +0000 UTC m=+0.098467349 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 20:20:54 compute-0 podman[476737]: 2025-10-02 20:20:54.715986584 +0000 UTC m=+0.101014687 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:20:54 compute-0 podman[476732]: 2025-10-02 20:20:54.727003285 +0000 UTC m=+0.129356265 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 20:20:54 compute-0 podman[476731]: 2025-10-02 20:20:54.732081889 +0000 UTC m=+0.119490764 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:20:54 compute-0 podman[476733]: 2025-10-02 20:20:54.760811427 +0000 UTC m=+0.146024655 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 20:20:55 compute-0 nova_compute[355794]: 2025-10-02 20:20:55.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:55 compute-0 nova_compute[355794]: 2025-10-02 20:20:55.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:20:55 compute-0 ceph-mon[191910]: pgmap v2285: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:20:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:57 compute-0 ceph-mon[191910]: pgmap v2286: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:58 compute-0 ceph-mon[191910]: pgmap v2287: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:20:59 compute-0 podman[157186]: time="2025-10-02T20:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:20:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:20:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9566 "" "Go-http-client/1.1"
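The two access-log lines above are the podman service answering libpod REST calls, here from the prometheus-podman-exporter. A sketch of issuing the same `containers/json` request over the unix socket named in the exporter's config earlier (`/run/podman/podman.sock`); the `UnixHTTPConnection` helper is an illustrative stdlib shim, not part of any podman client library, and the call needs read access to the socket:

```python
import http.client
import json
import socket

# Socket path taken from the podman_exporter CONTAINER_HOST setting above.
PODMAN_SOCK = "/run/podman/podman.sock"

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")  # host is unused for unix sockets
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection(PODMAN_SOCK)
# Same endpoint the exporter hits in the access-log lines above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
for c in containers:
    print(c["Id"][:12], c.get("Names"), c.get("State"))
conn.close()
```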
Oct 02 20:21:00 compute-0 nova_compute[355794]: 2025-10-02 20:21:00.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:00 compute-0 nova_compute[355794]: 2025-10-02 20:21:00.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:01 compute-0 ceph-mon[191910]: pgmap v2288: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:01 compute-0 openstack_network_exporter[372736]: ERROR   20:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:21:01 compute-0 openstack_network_exporter[372736]: ERROR   20:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:21:01 compute-0 openstack_network_exporter[372736]: ERROR   20:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:21:01 compute-0 openstack_network_exporter[372736]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:21:01 compute-0 openstack_network_exporter[372736]: ERROR   20:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:21:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:03 compute-0 ceph-mon[191910]: pgmap v2289: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:21:03
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms']
Oct 02 20:21:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.308 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.309 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.317 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.320 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'name': 'te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.323 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'name': 'te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6', 'flavor': {'id': '2a4d7fef-934e-4921-8c3b-c6783966faa5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'fe71959f-8f59-4b45-ae05-4216d5f12fab'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'user_id': 'e5d4abc29b2e475e9c7c54249ca341c4', 'hostId': '01141092f0da7a843904c1ec95a2fbe7e7386c3244d2a85e02147c4e', 'status': 'active', 'metadata': {'metering.server_group': 'f724f930-b01d-4568-9d24-c7060da9fe9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:21:04.324284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.378 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.380 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.380 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.419 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 1099 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.420 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.455 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 1115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.456 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:21:04.458880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.481 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.481 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.482 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.500 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.501 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.521 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.522 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.525 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:21:04.524850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.526 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.527 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.527 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.528 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.528 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.529 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
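disk.device.write.bytes is a cumulative counter, so a single sample (41771008 bytes for d4e04444-...) says little on its own; throughput only falls out of the difference between two polls. A sketch of that reduction, with illustrative names and a guard for the counter reset an instance reboot would cause:

def bytes_per_second(prev_bytes, prev_ts, cur_bytes, cur_ts):
    """Rate between two cumulative readings; timestamps are datetimes."""
    dt = (cur_ts - prev_ts).total_seconds()
    if dt <= 0:
        raise ValueError("timestamps must increase between polls")
    delta = cur_bytes - prev_bytes
    if delta < 0:        # counter reset, e.g. after an instance reboot
        delta = cur_bytes
    return delta / dt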
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.531 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:21:04.531314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.534 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 10218551533 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.534 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.536 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 61686940867 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.537 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
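The write.latency volumes (7285327854, 10218551533, 61686940867) are likewise cumulative totals of time spent on writes, which libvirt reports in nanoseconds; that unit is an assumption from libvirt's block-stats convention, not something this log states. Mean latency per write over an interval then comes from pairing this counter with disk.device.write.requests:

def avg_write_latency_ms(lat0, lat1, req0, req1):
    """Mean per-request write latency between two polls, in milliseconds."""
    d_req = req1 - req0
    if d_req <= 0:
        return None                       # no writes completed in the interval
    return (lat1 - lat0) / d_req / 1e6    # ns -> ms (unit assumed)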
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.539 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:21:04.540544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
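Interleaved with the ceilometer cycle, ceph-mgr's rbd_support module is reloading its trash-purge and mirror-snapshot schedules for the vms, volumes, backups, and images pools. A hedged sketch for inspecting what was loaded, shelling out to the rbd schedule subcommands available in recent Ceph releases (treat the exact flags as an assumption for your version):

import subprocess

POOLS = ["vms", "volumes", "backups", "images"]   # pools named in the log

def dump_schedules():
    for pool in POOLS:
        for sub in (("trash", "purge"), ("mirror", "snapshot")):
            cmd = ["rbd", *sub, "schedule", "ls", "--pool", pool, "--recursive"]
            print("$", " ".join(cmd))
            subprocess.run(cmd, check=False)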
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.580 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.606 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.632 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
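All three instances report power.state volume 1. ceilometer's PowerStatePollster surfaces the libvirt domain state, so 1 maps to "running" under libvirt's virDomainState enumeration; the mapping below follows that enum and should be read as an assumption rather than something this log confirms:

# virDomainState values, per libvirt's public enum.
POWER_STATES = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}

def describe(volume):
    return POWER_STATES.get(int(volume), "unknown")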
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.634 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.635 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.636 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.636 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:21:04.634695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.637 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.637 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.638 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:21:04.640114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.648 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.655 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.661 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
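Unlike the cumulative network.incoming.bytes meter polled later in this cycle, the .delta variant carries only the change since the previous poll, which is why these idle interfaces all report 0. Rebuilding a running total from deltas is just a prefix sum; a purely illustrative sketch:

from itertools import accumulate

# e.g. successive network.incoming.bytes.delta samples for one interface
deltas = [0, 0, 512, 0, 1024]
totals = list(accumulate(deltas))   # [0, 0, 512, 512, 1536]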
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.663 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.665 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.666 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.666 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:21:04.663883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:21:04.666935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.669 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:21:04.669594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.670 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.671 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.673 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.673 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.674 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:21:04.672830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.675 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.676 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.677 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.677 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:21:04.676300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.680 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.681 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.681 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:21:04.680324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
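Worker 12's "Updated heartbeat for <meter> (<timestamp>)" lines make a simple liveness check possible: a pollster that stops refreshing its heartbeat has stalled. A monitoring sketch, with the regex and staleness threshold chosen here purely for illustration:

import re
from datetime import datetime, timedelta

HB_RE = re.compile(r"Updated heartbeat for (?P<meter>[\w.]+) \((?P<ts>[^)]+)\)")

def stale_pollsters(lines, now, max_age=timedelta(minutes=5)):
    """Meters whose most recent heartbeat is older than max_age."""
    latest = {}
    for line in lines:
        m = HB_RE.search(line)
        if m:
            latest[m["meter"]] = datetime.fromisoformat(m["ts"])
    return sorted(meter for meter, ts in latest.items() if now - ts > max_age)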
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.683 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.684 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.684 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.685 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:21:04.683748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.685 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.686 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.686 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 30829056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.687 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.689 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:21:04.689477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.690 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.690 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.692 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.693 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.693 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.694 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.694 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.695 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.695 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.696 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
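For each disk the agent now holds three related meters: disk.device.capacity (the virtual size, polled at the end of this cycle), disk.device.allocation, and disk.device.usage (what the backing store actually holds). The 1 GiB devices above show only ~474 KiB in use, so the gap approximates thin-provisioning headroom; a small sketch with illustrative names:

def headroom_bytes(capacity, usage):
    """Unwritten space in a thin-provisioned device."""
    return max(capacity - usage, 0)

print(headroom_bytes(1073741824, 485376))   # -> 1073256448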
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:21:04.692950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 podman[476828]: 2025-10-02 20:21:04.699260521 +0000 UTC m=+0.121317532 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
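The multipathd health check passes (health_status=healthy, failing streak 0), and the podman event embeds the container's config_data as a Python-style dict literal, so ast.literal_eval can pull fields out of a captured line. The slicing below assumes the literal is the last brace-delimited span on the line, which holds for the event shown here:

import ast

def parse_config_data(event_line):
    start = event_line.index("config_data=") + len("config_data=")
    blob = event_line[start:event_line.rindex("}") + 1]
    return ast.literal_eval(blob)

# e.g. parse_config_data(line)["healthcheck"]["test"] -> '/openstack/healthcheck'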
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.699 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.699 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/memory.usage volume: 42.20703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.700 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/memory.usage volume: 42.4609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
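memory.usage comes back as a float because ceilometer reports this meter in MB (its documented unit, assumed here rather than stated by the log). Summing the three samples gives the guest memory footprint on compute-0 at this poll:

samples = {
    "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77": 48.8515625,
    "03794a5e-b5ab-4b9e-8052-6de08e4c9f84": 42.20703125,
    "f50e6a55-f3b5-402b-91b2-12d34386f656": 42.4609375,
}
print(f"guest memory in use: {sum(samples.values()):.2f} MB")   # ~133.52 MB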
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.700 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.701 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.701 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.702 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.703 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.704 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:21:04.698996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:21:04.701185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:21:04.703311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.706 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:21:04.706000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.707 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.708 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.708 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.709 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
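disk.device.capacity emits one sample per attached block device, which is why a single instance logs several volumes in a row: each instance above reports a 1073741824-byte disk (exactly 1 GiB) plus a smaller second device of a few hundred KiB, consistent with a config-drive-sized image (that interpretation is an assumption). A quick unit check:

    # 1073741824 bytes is exactly 1 GiB.
    assert 1073741824 == 1 * 1024**3
    # The smaller devices are roughly half a MiB.
    print(485376 / 1024, 509952 / 1024)   # 474.0 KiB and 498.0 KiB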
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.710 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.711 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.712 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.712 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.713 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:21:04.710468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:21:04.711853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 68500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/cpu volume: 335310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.714 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/cpu volume: 340000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
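The cpu meter is cumulative guest CPU time in nanoseconds, so a utilisation figure has to be derived from the delta between two polls. A hedged sketch; the previous-poll value, 10-second interval, and single vCPU count are assumptions chosen for illustration:

    # Derive utilisation from two cumulative cpu samples (nanoseconds).
    prev_ns = 339000000000        # assumed value from the previous poll
    curr_ns = 340000000000        # f50e6a55.../cpu volume logged above
    interval_s, vcpus = 10, 1     # assumed polling interval and vCPU count
    util = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)
    print(f"{util:.0%}")          # 1e9 ns of CPU over 10 s of wall clock = 10%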
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.715 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.716 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.716 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.716 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 2082740870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.716 14 DEBUG ceilometer.compute.pollsters [-] 03794a5e-b5ab-4b9e-8052-6de08e4c9f84/disk.device.read.latency volume: 153685830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.716 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 3130282352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.717 14 DEBUG ceilometer.compute.pollsters [-] f50e6a55-f3b5-402b-91b2-12d34386f656/disk.device.read.latency volume: 223577318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:21:04.714347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:21:04.715689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:21:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:21:04.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
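Note the two interleaved process ids in this block: 14 emits the polling and sample lines while 12 emits the "Updated heartbeat for ..." _update_status lines, suggesting the agent runs a separate status-updater process fed by the polling workers. A minimal sketch of that split under that assumption; the queue-based design and names are illustrative, not ceilometer's implementation:

    import multiprocessing, time

    def status_updater(q):
        # drain heartbeat events produced by the polling workers
        while True:
            name, ts = q.get()
            print(f"Updated heartbeat for {name} ({ts})")

    if __name__ == '__main__':
        q = multiprocessing.Queue()
        multiprocessing.Process(target=status_updater, args=(q,), daemon=True).start()
        q.put(("cpu", "2025-10-02T20:21:04.714347"))  # timestamp taken from the log
        time.sleep(0.2)  # let the daemon process drain the queue before exiting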
Oct 02 20:21:05 compute-0 nova_compute[355794]: 2025-10-02 20:21:05.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:05 compute-0 nova_compute[355794]: 2025-10-02 20:21:05.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:05 compute-0 ceph-mon[191910]: pgmap v2290: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:07 compute-0 ceph-mon[191910]: pgmap v2291: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:09 compute-0 ceph-mon[191910]: pgmap v2292: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:10 compute-0 nova_compute[355794]: 2025-10-02 20:21:10.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:10 compute-0 nova_compute[355794]: 2025-10-02 20:21:10.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:10 compute-0 ceph-mon[191910]: pgmap v2293: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
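The mon's _set_new_cache_sizes line splits a roughly 0.95 GiB cache target across the inc-map, full-map, and rocksdb (kv) caches; the three allocations sum to just under the printed cache_size. A quick check of the arithmetic:

    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 318767104
    print(inc_alloc // 2**20, full_alloc // 2**20, kv_alloc // 2**20)  # 332 332 304 (MiB)
    assert inc_alloc + full_alloc + kv_alloc <= 1020054731             # cache_size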
Oct 02 20:21:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:13 compute-0 ceph-mon[191910]: pgmap v2294: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020729298935338934 of space, bias 1.0, pg target 0.6218789680601681 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:21:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
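The autoscaler targets printed above are consistent with pg_target = used_ratio * bias * (mon_target_pg_per_osd * num_osds), assuming the default mon_target_pg_per_osd of 100 and three OSDs (both back-derived from the printed values, so treat them as assumptions), with the result then quantized to a power of two:

    # Reproduce the pg targets logged by the pg_autoscaler above.
    target_pgs = 100 * 3   # assumed: mon_target_pg_per_osd (default 100) * 3 OSDs
    pools = [('.mgr',               7.185749983720779e-06, 1.0),
             ('vms',                0.0020729298935338934, 1.0),
             ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]
    for name, used, bias in pools:
        print(name, used * bias * target_pgs)
    # logged: .mgr 0.0021557249951162337 (quantized to 1),
    #         vms 0.6218789680601681 (quantized to 32),
    #         cephfs.cephfs.meta 0.0006104707950771635 (quantized to 16)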
Oct 02 20:21:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:15 compute-0 nova_compute[355794]: 2025-10-02 20:21:15.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:15 compute-0 nova_compute[355794]: 2025-10-02 20:21:15.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:15 compute-0 ceph-mon[191910]: pgmap v2295: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:15 compute-0 podman[476849]: 2025-10-02 20:21:15.735818762 +0000 UTC m=+0.153346698 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:21:15 compute-0 podman[476850]: 2025-10-02 20:21:15.773766483 +0000 UTC m=+0.182928028 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
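The podman health_status entries above embed each container's full definition as a config_data dict; pulling out just the healthcheck and the host-to-container mounts makes them much easier to read. A small sketch, with the dict literal abbreviated from the ceilometer_agent_compute entry above:

    # Abbreviated from the config_data blob logged above.
    config_data = {
        'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested',
        'healthcheck': {'test': '/openstack/healthcheck compute',
                        'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'},
        'volumes': ['/run/libvirt:/run/libvirt:shared,ro', '/dev/log:/dev/log'],
    }
    print(config_data['healthcheck']['test'])
    for vol in config_data['volumes']:
        host, _, rest = vol.partition(':')
        print(f"{host} -> {rest}")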
Oct 02 20:21:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:17 compute-0 ceph-mon[191910]: pgmap v2296: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:17 compute-0 nova_compute[355794]: 2025-10-02 20:21:17.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:17 compute-0 nova_compute[355794]: 2025-10-02 20:21:17.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:21:18 compute-0 nova_compute[355794]: 2025-10-02 20:21:18.079 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:21:18 compute-0 nova_compute[355794]: 2025-10-02 20:21:18.080 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:21:18 compute-0 nova_compute[355794]: 2025-10-02 20:21:18.080 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:21:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:19 compute-0 ceph-mon[191910]: pgmap v2297: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.627 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [{"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.654 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.656 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
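The heal run above brackets the neutron refresh with a named lock (Acquiring/Acquired/Releasing lock "refresh_cache-<uuid>"), which serialises cache updates for one instance across concurrent tasks. A minimal sketch of that idiom using oslo.concurrency; the decorator usage and function body are illustrative, not nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('refresh_cache-03794a5e-b5ab-4b9e-8052-6de08e4c9f84')
    def heal_instance_info_cache():
        # hypothetical body: re-query neutron and persist the fresh network_info,
        # as the "Forcefully refreshing network info cache" line above does
        pass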
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.659 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.660 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.661 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.661 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.662 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.689 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.689 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.689 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.690 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:21:19 compute-0 nova_compute[355794]: 2025-10-02 20:21:19.690 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:21:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1924948828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.189 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
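The free-disk figure nova reports comes from shelling out to ceph df with JSON output, exactly as the Running cmd / CMD returned lines show (a 0.499s round trip). A hedged sketch of that call and the field it plausibly reads; the stats key names follow ceph's usual JSON schema, so treat the parsing as an approximation:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    free_gib = stats['total_avail_bytes'] / 1024**3   # ~60 GiB per the pgmap lines
    print(round(free_gib, 1))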
Oct 02 20:21:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:21:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377610375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:21:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:21:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377610375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:21:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1924948828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2377610375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:21:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2377610375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.303 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.304 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.304 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.310 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.311 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.317 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.318 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:21:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.951 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.953 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.863929748535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.953 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:20 compute-0 nova_compute[355794]: 2025-10-02 20:21:20.954 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance f50e6a55-f3b5-402b-91b2-12d34386f656 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.065 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.081 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.098 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.098 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
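[The inventory dict repeated in these two lines is what placement sizes the provider from: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A quick check against the values above:]

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    # Effective capacity placement will allow allocations against.
    return (inv['total'] - inv['reserved']) * inv['allocation_ratio']

for rc, inv in inventory.items():
    print(rc, capacity(inv))
# VCPU 32.0 (8 host cores oversubscribed 4x), MEMORY_MB 7167.0, DISK_GB 52.2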
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.124 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.149 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.228 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:21:21 compute-0 ceph-mon[191910]: pgmap v2298: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:21:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711933610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:21 compute-0 podman[476937]: 2025-10-02 20:21:21.682420731 +0000 UTC m=+0.111221791 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.695 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
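[The DISK_GB figures on this RBD-backed host come from shelling out to ceph df, as the two processutils lines above show (the call returned 0 in 0.467 s). A hedged equivalent of that call, reusing the exact command line from the log but plain subprocess instead of oslo_concurrency.processutils; the 'stats'/'pools' keys follow the usual `ceph df --format=json` layout:]

import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout
df = json.loads(out)
# Cluster-wide totals plus per-pool usage, which the RBD image backend
# uses to size the DISK_GB inventory reported to placement.
print(df['stats']['total_bytes'], [p['name'] for p in df['pools']])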
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.705 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:21:21 compute-0 podman[476936]: 2025-10-02 20:21:21.730252584 +0000 UTC m=+0.149524406 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.732 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.734 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:21:21 compute-0 nova_compute[355794]: 2025-10-02 20:21:21.734 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
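[Every "Lock ... acquired/released" pair in this trace (the 0.781 s hold above, and the per-instance "-events" locks later in the shutdown path) comes from oslo.concurrency's lockutils, which emits exactly these DEBUG lines around the critical section. A minimal sketch of both forms, assuming only the lock names visible in the log:]

from oslo_concurrency import lockutils

# Decorator form: serializes the resource-tracker update within the process.
@lockutils.synchronized('compute_resources')
def update_available_resource():
    ...  # inventory refresh and placement sync run while the lock is held

# Context-manager form, as used for the per-instance event locks below:
with lockutils.lock('f50e6a55-f3b5-402b-91b2-12d34386f656-events'):
    ...  # pop or clear instance events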
Oct 02 20:21:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3711933610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:22 compute-0 ceph-mon[191910]: pgmap v2299: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:22 compute-0 nova_compute[355794]: 2025-10-02 20:21:22.651 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:22 compute-0 nova_compute[355794]: 2025-10-02 20:21:22.652 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:25 compute-0 nova_compute[355794]: 2025-10-02 20:21:25.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:25 compute-0 nova_compute[355794]: 2025-10-02 20:21:25.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:25 compute-0 ceph-mon[191910]: pgmap v2300: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:25 compute-0 podman[476974]: 2025-10-02 20:21:25.255812031 +0000 UTC m=+0.118538201 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:21:25 compute-0 podman[476975]: 2025-10-02 20:21:25.256006506 +0000 UTC m=+0.121074227 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 20:21:25 compute-0 podman[476978]: 2025-10-02 20:21:25.280452142 +0000 UTC m=+0.118420289 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:21:25 compute-0 podman[476976]: 2025-10-02 20:21:25.28884981 +0000 UTC m=+0.148491750 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git)
Oct 02 20:21:25 compute-0 podman[476977]: 2025-10-02 20:21:25.322773361 +0000 UTC m=+0.173734985 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 02 20:21:25 compute-0 nova_compute[355794]: 2025-10-02 20:21:25.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:27 compute-0 ceph-mon[191910]: pgmap v2301: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:29 compute-0 ceph-mon[191910]: pgmap v2302: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:29 compute-0 nova_compute[355794]: 2025-10-02 20:21:29.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:21:29 compute-0 podman[157186]: time="2025-10-02T20:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:21:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47499 "" "Go-http-client/1.1"
Oct 02 20:21:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9561 "" "Go-http-client/1.1"
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.878 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.879 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.879 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.880 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.880 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.882 2 INFO nova.compute.manager [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Terminating instance
Oct 02 20:21:30 compute-0 nova_compute[355794]: 2025-10-02 20:21:30.883 2 DEBUG nova.compute.manager [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:21:30 compute-0 kernel: tapf069cce3-85 (unregistering): left promiscuous mode
Oct 02 20:21:31 compute-0 NetworkManager[44968]: <info>  [1759436491.0018] device (tapf069cce3-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:21:31 compute-0 ovn_controller[88435]: 2025-10-02T20:21:31Z|00184|binding|INFO|Releasing lport f069cce3-8536-48d3-a068-b30f9a0107d5 from this chassis (sb_readonly=0)
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 ovn_controller[88435]: 2025-10-02T20:21:31Z|00185|binding|INFO|Setting lport f069cce3-8536-48d3-a068-b30f9a0107d5 down in Southbound
Oct 02 20:21:31 compute-0 ovn_controller[88435]: 2025-10-02T20:21:31Z|00186|binding|INFO|Removing iface tapf069cce3-85 ovn-installed in OVS
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.042 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:37:9a 10.100.1.149'], port_security=['fa:16:3e:45:37:9a 10.100.1.149'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.149/16', 'neutron:device_id': 'f50e6a55-f3b5-402b-91b2-12d34386f656', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0deafed-687f-4945-b8e7-38e6d324244b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d0acfe3-81ce-4e08-8e78-709b63816024', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cba1ebe5-3c4d-41f0-9003-ea3a824c4dce, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=f069cce3-8536-48d3-a068-b30f9a0107d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.045 285790 INFO neutron.agent.ovn.metadata.agent [-] Port f069cce3-8536-48d3-a068-b30f9a0107d5 in datapath f0deafed-687f-4945-b8e7-38e6d324244b unbound from our chassis
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.047 285790 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0deafed-687f-4945-b8e7-38e6d324244b
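[The "Matched UPDATE: PortBindingUpdatedEvent" entry a few lines up is ovsdbapp's row-event machinery at work: the metadata agent registers event classes against the Port_Binding table and reacts when a row's chassis/up columns change, producing the "unbound from our chassis" message above. A hedged sketch of that pattern; the constructor arguments mirror the ones echoed in the log, while the run() body is illustrative rather than neutron's actual handler:]

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly as printed in the "Matched UPDATE" log line.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' carries only the changed columns; here the port went
        # from bound (chassis set, up=[True]) to unbound.
        if getattr(old, 'chassis', None) and not row.chassis:
            print('Port %s unbound from our chassis' % row.logical_port)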
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.097 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[375c8393-069f-4e1e-a8a2-8e01e13dc854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 02 20:21:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 18.609s CPU time.
Oct 02 20:21:31 compute-0 systemd-machined[137646]: Machine qemu-15-instance-0000000e terminated.
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.145 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[25ddd961-e411-4955-9246-a45daecf264b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.150 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[269f48cc-7e99-45da-809e-649e3977165b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.191 420769 DEBUG oslo.privsep.daemon [-] privsep: reply[435e8c3d-382e-4f82-8d41-b67b17da2aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.217 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[60c7372c-c335-4858-89d7-039f24849bea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0deafed-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:dd:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2260, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2260, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691341, 'reachable_time': 35545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 477089, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.244 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4fa37c-9487-4ff4-b847-736ecb8c686b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0deafed-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691359, 'tstamp': 691359}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 477090, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapf0deafed-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691364, 'tstamp': 691364}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 477090, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.247 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0deafed-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.258 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0deafed-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.259 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.259 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0deafed-60, col_values=(('external_ids', {'iface-id': 'ad4572b7-e012-418a-9c6b-97a8e10ee248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.261 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
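[The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp's Open_vSwitch-schema API; the two "Transaction caused no change" results mean the idempotent re-add and db_set were no-ops. A hedged sketch issuing the same three commands (port, bridge, and iface-id values are from the log; the db.sock path is an assumption, and the agent actually ran these as three single-command transactions rather than one batch):]

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tapf0deafed-60', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tapf0deafed-60', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapf0deafed-60',
        ('external_ids', {'iface-id': 'ad4572b7-e012-418a-9c6b-97a8e10ee248'})))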
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.280 2 DEBUG nova.compute.manager [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-unplugged-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.281 2 DEBUG oslo_concurrency.lockutils [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.282 2 DEBUG oslo_concurrency.lockutils [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.282 2 DEBUG oslo_concurrency.lockutils [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.282 2 DEBUG nova.compute.manager [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] No waiting events found dispatching network-vif-unplugged-f069cce3-8536-48d3-a068-b30f9a0107d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.283 2 DEBUG nova.compute.manager [req-efc5f6c5-3532-4184-a6b3-aa23d146263f req-1a4d9231-6d24-49bc-ba99-c0b714067643 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-unplugged-f069cce3-8536-48d3-a068-b30f9a0107d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:21:31 compute-0 ceph-mon[191910]: pgmap v2303: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.340 2 INFO nova.virt.libvirt.driver [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Instance destroyed successfully.
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.342 2 DEBUG nova.objects.instance [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'resources' on Instance uuid f50e6a55-f3b5-402b-91b2-12d34386f656 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.361 2 DEBUG nova.virt.libvirt.vif [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:08:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-k6mzcgziixdc-wicuimyk6ql6',id=14,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:09:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-0paxwoim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:09:03Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=f50e6a55-f3b5-402b-91b2-12d34386f656,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.362 2 DEBUG nova.network.os_vif_util [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "f069cce3-8536-48d3-a068-b30f9a0107d5", "address": "fa:16:3e:45:37:9a", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.149", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069cce3-85", "ovs_interfaceid": "f069cce3-8536-48d3-a068-b30f9a0107d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.363 2 DEBUG nova.network.os_vif_util [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.363 2 DEBUG os_vif [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf069cce3-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.371 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:31.373 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:31 compute-0 nova_compute[355794]: 2025-10-02 20:21:31.376 2 INFO os_vif [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:37:9a,bridge_name='br-int',has_traffic_filtering=True,id=f069cce3-8536-48d3-a068-b30f9a0107d5,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069cce3-85')
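[Nova's unplug path here is the public os-vif entry point: the Neutron VIF dict is converted to a VIFOpenVSwitch object (the "Converting VIF"/"Converted object" lines above), then handed to os_vif.unplug(), which resolves the 'ovs' plugin and deletes the port. A hedged sketch with field values copied from the logged object; the instance name is inferred from the systemd machine scope seen earlier and is an assumption:]

import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()

ovs_vif = vif.VIFOpenVSwitch(
    id='f069cce3-8536-48d3-a068-b30f9a0107d5',
    address='fa:16:3e:45:37:9a',
    plugin='ovs',
    bridge_name='br-int',
    vif_name='tapf069cce3-85',
    network=network.Network(id='f0deafed-687f-4945-b8e7-38e6d324244b'),
)
info = instance_info.InstanceInfo(
    uuid='f50e6a55-f3b5-402b-91b2-12d34386f656',
    name='instance-0000000e',  # assumed, from the machine scope above
)
os_vif.unplug(ovs_vif, info)  # issues the DelPortCommand seen in the log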
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: ERROR   20:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: ERROR   20:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: ERROR   20:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: ERROR   20:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:21:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:21:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:32.341 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:32.343 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:32.344 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.347 2 INFO nova.virt.libvirt.driver [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Deleting instance files /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656_del
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.349 2 INFO nova.virt.libvirt.driver [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Deletion of /var/lib/nova/instances/f50e6a55-f3b5-402b-91b2-12d34386f656_del complete
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.425 2 INFO nova.compute.manager [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Took 1.54 seconds to destroy the instance on the hypervisor.
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.426 2 DEBUG oslo.service.loopingcall [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.427 2 DEBUG nova.compute.manager [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:21:32 compute-0 nova_compute[355794]: 2025-10-02 20:21:32.428 2 DEBUG nova.network.neutron [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:21:33 compute-0 ceph-mon[191910]: pgmap v2304: 321 pgs: 321 active+clean; 298 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:21:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.923 2 DEBUG nova.compute.manager [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.924 2 DEBUG oslo_concurrency.lockutils [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.924 2 DEBUG oslo_concurrency.lockutils [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.925 2 DEBUG oslo_concurrency.lockutils [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.925 2 DEBUG nova.compute.manager [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] No waiting events found dispatching network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:21:33 compute-0 nova_compute[355794]: 2025-10-02 20:21:33.926 2 WARNING nova.compute.manager [req-69223d54-a96d-469e-ac47-d4347da69b36 req-6d70caa5-f2e5-451c-b869-78075d766a37 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received unexpected event network-vif-plugged-f069cce3-8536-48d3-a068-b30f9a0107d5 for instance with vm_state active and task_state deleting.
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.127 2 DEBUG nova.network.neutron [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.157 2 INFO nova.compute.manager [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Took 1.73 seconds to deallocate network for instance.
Oct 02 20:21:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 245 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 682 B/s wr, 9 op/s
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.226 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.227 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.295 2 DEBUG nova.compute.manager [req-df40a0f3-a721-4fcb-bd67-f142759ccf87 req-5ac8ff51-7fde-4c2b-b312-c7e8fab074e8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Received event network-vif-deleted-f069cce3-8536-48d3-a068-b30f9a0107d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:34 compute-0 ceph-mon[191910]: pgmap v2305: 321 pgs: 321 active+clean; 245 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 682 B/s wr, 9 op/s
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.365 2 DEBUG oslo_concurrency.processutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:21:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:21:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086771207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.898 2 DEBUG oslo_concurrency.processutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
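[editor's note] Nova's RBD-backed storage reporting refreshes pool capacity by shelling out to exactly the command logged above. A minimal sketch that runs it and reads the cluster-wide totals; the 'stats' key names (total_bytes, total_avail_bytes) match current Ceph releases but should be treated as an assumption:

import json
import subprocess

# Same invocation as the log record above (client id and conf path included).
out = subprocess.check_output([
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
stats = json.loads(out)['stats']
print('avail GiB:', stats['total_avail_bytes'] / 2**30)  # key name assumed
print('total GiB:', stats['total_bytes'] / 2**30)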
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.912 2 DEBUG nova.compute.provider_tree [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.933 2 DEBUG nova.scheduler.client.report [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
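[editor's note] Placement checks allocations against (total - reserved) * allocation_ratio, so the inventory dict logged above yields the effective capacities worked out below:

inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2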
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.960 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:34 compute-0 nova_compute[355794]: 2025-10-02 20:21:34.987 2 INFO nova.scheduler.client.report [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Deleted allocations for instance f50e6a55-f3b5-402b-91b2-12d34386f656
Oct 02 20:21:35 compute-0 nova_compute[355794]: 2025-10-02 20:21:35.050 2 DEBUG oslo_concurrency.lockutils [None req-d042a5ea-0c49-41df-b004-0681fd8bdbf0 e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "f50e6a55-f3b5-402b-91b2-12d34386f656" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:35 compute-0 nova_compute[355794]: 2025-10-02 20:21:35.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1086771207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:35 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:35.375 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:35 compute-0 podman[477144]: 2025-10-02 20:21:35.73927742 +0000 UTC m=+0.153341636 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:21:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:36 compute-0 ceph-mon[191910]: pgmap v2306: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:36 compute-0 nova_compute[355794]: 2025-10-02 20:21:36.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:37 compute-0 sudo[477164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:37 compute-0 sudo[477164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:37 compute-0 sudo[477164]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:38 compute-0 sudo[477189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:21:38 compute-0 sudo[477189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:38 compute-0 sudo[477189]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:38 compute-0 sudo[477214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:38 compute-0 sudo[477214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:38 compute-0 sudo[477214]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:38 compute-0 sudo[477239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:21:38 compute-0 sudo[477239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:39 compute-0 sudo[477239]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a472452e-4273-4a91-b0f4-67bb2f421ce8 does not exist
Oct 02 20:21:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 28113423-b976-4f83-b5ac-3e29a6b5aa1c does not exist
Oct 02 20:21:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev f539fcbe-81cb-4dea-8793-d7ce8efdeffc does not exist
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: pgmap v2307: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:21:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:21:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:21:39 compute-0 sudo[477294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:39 compute-0 sudo[477294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:39 compute-0 sudo[477294]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:39 compute-0 sudo[477319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:21:39 compute-0 sudo[477319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:39 compute-0 sudo[477319]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:39 compute-0 sudo[477344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:39 compute-0 sudo[477344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:39 compute-0 sudo[477344]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:39 compute-0 sudo[477369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:21:39 compute-0 sudo[477369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
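[editor's note] The sudo command above is cephadm preparing the three pre-created LVs as OSDs in one pass via "ceph-volume lvm batch" (the podman container runs below). A minimal sketch of the same invocation from Python, omitting the --env/--image flags for brevity; the config/keyring payload fed to '--config-json -' on stdin is a hypothetical stand-in:

import json
import subprocess

cephadm = ('/var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/'
           'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')

# Hypothetical payload; cephadm reads it from stdin because of '--config-json -'.
payload = json.dumps({'config': '...', 'keyring': '...'})

subprocess.run(
    ['sudo', 'python3', cephadm,
     '--timeout', '895',
     'ceph-volume', '--fsid', '6019f664-a1c2-5955-8391-692cb79a59f9',
     '--config-json', '-', '--',
     'lvm', 'batch', '--no-auto',
     '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2',
     '--yes', '--no-systemd'],
    input=payload.encode(), check=True)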
Oct 02 20:21:40 compute-0 nova_compute[355794]: 2025-10-02 20:21:40.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:21:40 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.554259425 +0000 UTC m=+0.113915671 container create 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.508771113 +0000 UTC m=+0.068427439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:40 compute-0 systemd[1]: Started libpod-conmon-15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078.scope.
Oct 02 20:21:40 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.742216389 +0000 UTC m=+0.301872705 container init 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.760561846 +0000 UTC m=+0.320218132 container start 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.766829909 +0000 UTC m=+0.326486235 container attach 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 20:21:40 compute-0 zealous_kirch[477449]: 167 167
Oct 02 20:21:40 compute-0 systemd[1]: libpod-15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078.scope: Deactivated successfully.
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.778627305 +0000 UTC m=+0.338283551 container died 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:21:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcf21e1f98475f6ff7ddb8b3b0a7715204fe6320d1d107ebd39d58940236a02f-merged.mount: Deactivated successfully.
Oct 02 20:21:40 compute-0 podman[477433]: 2025-10-02 20:21:40.850912594 +0000 UTC m=+0.410568880 container remove 15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kirch, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:21:40 compute-0 systemd[1]: libpod-conmon-15ec7e4233b30786d072e3090d9c67ed713b9ae74b1cefb3f70bf8d5ffb84078.scope: Deactivated successfully.
Oct 02 20:21:41 compute-0 podman[477475]: 2025-10-02 20:21:41.180425477 +0000 UTC m=+0.098236564 container create f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 20:21:41 compute-0 podman[477475]: 2025-10-02 20:21:41.141721511 +0000 UTC m=+0.059532658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:41 compute-0 systemd[1]: Started libpod-conmon-f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d.scope.
Oct 02 20:21:41 compute-0 ceph-mon[191910]: pgmap v2308: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:41 compute-0 podman[477475]: 2025-10-02 20:21:41.365875466 +0000 UTC m=+0.283686563 container init f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:21:41 compute-0 nova_compute[355794]: 2025-10-02 20:21:41.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:41 compute-0 podman[477475]: 2025-10-02 20:21:41.405859605 +0000 UTC m=+0.323670702 container start f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:21:41 compute-0 podman[477475]: 2025-10-02 20:21:41.413469113 +0000 UTC m=+0.331280260 container attach f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.581 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.583 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.584 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.585 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.586 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.588 2 INFO nova.compute.manager [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Terminating instance
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.590 2 DEBUG nova.compute.manager [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 20:21:42 compute-0 kernel: tap8a7a2e73-ae (unregistering): left promiscuous mode
Oct 02 20:21:42 compute-0 NetworkManager[44968]: <info>  [1759436502.7393] device (tap8a7a2e73-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 20:21:42 compute-0 ovn_controller[88435]: 2025-10-02T20:21:42Z|00187|binding|INFO|Releasing lport 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 from this chassis (sb_readonly=0)
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 ovn_controller[88435]: 2025-10-02T20:21:42Z|00188|binding|INFO|Setting lport 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 down in Southbound
Oct 02 20:21:42 compute-0 ovn_controller[88435]: 2025-10-02T20:21:42Z|00189|binding|INFO|Removing iface tap8a7a2e73-ae ovn-installed in OVS
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:42.761 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:22:b0 10.100.3.13'], port_security=['fa:16:3e:a4:22:b0 10.100.3.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.13/16', 'neutron:device_id': '03794a5e-b5ab-4b9e-8052-6de08e4c9f84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0deafed-687f-4945-b8e7-38e6d324244b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '16e65e6cbbf848e5bb5755e6da3b1d33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d0acfe3-81ce-4e08-8e78-709b63816024', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cba1ebe5-3c4d-41f0-9003-ea3a824c4dce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>], logical_port=8a7a2e73-aec8-473f-8f6e-6da1c63ae426) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b8770dc10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:21:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:42.764 285790 INFO neutron.agent.ovn.metadata.agent [-] Port 8a7a2e73-aec8-473f-8f6e-6da1c63ae426 in datapath f0deafed-687f-4945-b8e7-38e6d324244b unbound from our chassis
Oct 02 20:21:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:42.771 285790 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0deafed-687f-4945-b8e7-38e6d324244b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:42.779 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[f5bcf05c-7617-4817-8a23-5d043b3138e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:42 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:42.781 285790 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b namespace which is not needed anymore
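[editor's note] With no VIF ports left on the network, the metadata agent tears down its per-network namespace and the haproxy container inside it (the container stop/delete shows up in the privsep replies below). The equivalent manual cleanup, sketched with plain iproute2 calls instead of the agent's privsep helpers:

import subprocess

ns = 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b'

# The agent does this through oslo.privsep; 'ip netns' is the manual equivalent.
if ns in subprocess.check_output(['ip', 'netns', 'list'], text=True):
    subprocess.run(['sudo', 'ip', 'netns', 'delete', ns], check=True)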
Oct 02 20:21:42 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 02 20:21:42 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 55.935s CPU time.
Oct 02 20:21:42 compute-0 systemd-machined[137646]: Machine qemu-16-instance-0000000f terminated.
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 compassionate_shaw[477491]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:21:42 compute-0 compassionate_shaw[477491]: --> relative data size: 1.0
Oct 02 20:21:42 compute-0 compassionate_shaw[477491]: --> All data devices are unavailable
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.866 2 INFO nova.virt.libvirt.driver [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Instance destroyed successfully.
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.866 2 DEBUG nova.objects.instance [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lazy-loading 'resources' on Instance uuid 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:21:42 compute-0 systemd[1]: libpod-f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d.scope: Deactivated successfully.
Oct 02 20:21:42 compute-0 systemd[1]: libpod-f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d.scope: Consumed 1.357s CPU time.
Oct 02 20:21:42 compute-0 podman[477475]: 2025-10-02 20:21:42.87990266 +0000 UTC m=+1.797713727 container died f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.885 2 DEBUG nova.virt.libvirt.vif [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T20:11:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6077278-asg-4covgfuc3mpk-ztq7ffpxzcnr-y6lmqetg2enr',id=15,image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T20:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f724f930-b01d-4568-9d24-c7060da9fe9c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='16e65e6cbbf848e5bb5755e6da3b1d33',ramdisk_id='',reservation_id='r-ixmbsl0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='fe71959f-8f59-4b45-ae05-4216d5f12fab',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1246773106',owner_user_name='tempest-PrometheusGabbiTest-1246773106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T20:11:43Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='e5d4abc29b2e475e9c7c54249ca341c4',uuid=03794a5e-b5ab-4b9e-8052-6de08e4c9f84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.886 2 DEBUG nova.network.os_vif_util [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converting VIF {"id": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "address": "fa:16:3e:a4:22:b0", "network": {"id": "f0deafed-687f-4945-b8e7-38e6d324244b", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "16e65e6cbbf848e5bb5755e6da3b1d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a7a2e73-ae", "ovs_interfaceid": "8a7a2e73-aec8-473f-8f6e-6da1c63ae426", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.887 2 DEBUG nova.network.os_vif_util [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.887 2 DEBUG os_vif [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a7a2e73-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:42 compute-0 nova_compute[355794]: 2025-10-02 20:21:42.910 2 INFO os_vif [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:22:b0,bridge_name='br-int',has_traffic_filtering=True,id=8a7a2e73-aec8-473f-8f6e-6da1c63ae426,network=Network(f0deafed-687f-4945-b8e7-38e6d324244b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a7a2e73-ae')
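[editor's note] The DelPortCommand transaction above maps one-to-one onto the classic CLI form; a minimal equivalent, assuming ovs-vsctl is on PATH:

import subprocess

# Mirrors DelPortCommand(port=tap8a7a2e73-ae, bridge=br-int, if_exists=True).
subprocess.run(
    ['sudo', 'ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap8a7a2e73-ae'],
    check=True)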
Oct 02 20:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3461161155e44700603bde1441ead595eb909d91c80f8d882a50c70e7bce27-merged.mount: Deactivated successfully.
Oct 02 20:21:42 compute-0 podman[477475]: 2025-10-02 20:21:42.976919501 +0000 UTC m=+1.894730558 container remove f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shaw, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:21:42 compute-0 systemd[1]: libpod-conmon-f01534714096d27617f9318a68ebb7557ff690b81f78f5ce64f5e51e2480172d.scope: Deactivated successfully.
Oct 02 20:21:43 compute-0 sudo[477369]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:43 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [NOTICE]   (458979) : haproxy version is 2.8.14-c23fe91
Oct 02 20:21:43 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [NOTICE]   (458979) : path to executable is /usr/sbin/haproxy
Oct 02 20:21:43 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [WARNING]  (458979) : Exiting Master process...
Oct 02 20:21:43 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [ALERT]    (458979) : Current worker (458981) exited with code 143 (Terminated)
Oct 02 20:21:43 compute-0 neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b[458975]: [WARNING]  (458979) : All workers exited. Exiting... (0)
Oct 02 20:21:43 compute-0 systemd[1]: libpod-89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3.scope: Deactivated successfully.
Oct 02 20:21:43 compute-0 podman[477576]: 2025-10-02 20:21:43.048894632 +0000 UTC m=+0.077295740 container died 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3-userdata-shm.mount: Deactivated successfully.
Oct 02 20:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d047fb53c528f113faad6f7f0d3b14603e1269fa48d7f993431f73ae26521763-merged.mount: Deactivated successfully.
Oct 02 20:21:43 compute-0 podman[477576]: 2025-10-02 20:21:43.10579816 +0000 UTC m=+0.134199268 container cleanup 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 20:21:43 compute-0 sudo[477593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:43 compute-0 sudo[477593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:43 compute-0 systemd[1]: libpod-conmon-89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3.scope: Deactivated successfully.
Oct 02 20:21:43 compute-0 sudo[477593]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:43 compute-0 podman[477631]: 2025-10-02 20:21:43.222047551 +0000 UTC m=+0.080551474 container remove 89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 20:21:43 compute-0 sudo[477637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:21:43 compute-0 sudo[477637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.238 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec08298-f9ed-4286-ac55-3a0916d36193]: (4, ('Thu Oct  2 08:21:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b (89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3)\n89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3\nThu Oct  2 08:21:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b (89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3)\n89b21111d7b12e91c340950d1f36d25f839b0382cb63e7dadb0776b8d9a8b5b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 sudo[477637]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.242 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[77989170-c5b2-493c-a486-b5ecdfd25be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.244 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0deafed-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:43 compute-0 kernel: tapf0deafed-60: left promiscuous mode
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.269 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb743d-b882-4d05-8456-84e597386e95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.289 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[761906d4-d364-4017-9a66-35879ad1179d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.292 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[3bee0997-b264-4238-a20e-80e2943b152e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 ceph-mon[191910]: pgmap v2309: 321 pgs: 321 active+clean; 218 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.320 420728 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1e90d7-6579-4201-bf98-585ac63ae86f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691329, 'reachable_time': 21718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 477691, 'error': None, 'target': 'ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 systemd[1]: run-netns-ovnmeta\x2df0deafed\x2d687f\x2d4945\x2db8e7\x2d38e6d324244b.mount: Deactivated successfully.
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.325 285947 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0deafed-687f-4945-b8e7-38e6d324244b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 20:21:43 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:21:43.325 285947 DEBUG oslo.privsep.daemon [-] privsep: reply[85e2f8fc-9378-4efa-9f61-eecc7e30dee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 20:21:43 compute-0 sudo[477670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:43 compute-0 sudo[477670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:43 compute-0 sudo[477670]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.355 2 DEBUG nova.compute.manager [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-unplugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.356 2 DEBUG oslo_concurrency.lockutils [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.356 2 DEBUG oslo_concurrency.lockutils [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.357 2 DEBUG oslo_concurrency.lockutils [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.357 2 DEBUG nova.compute.manager [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] No waiting events found dispatching network-vif-unplugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.357 2 DEBUG nova.compute.manager [req-30d6244b-274d-4451-aa3e-5d47b81dc1a6 req-1de698bf-c45d-46b1-aca0-4c98d468ca89 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-unplugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 20:21:43 compute-0 sudo[477699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:21:43 compute-0 sudo[477699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.680 2 INFO nova.virt.libvirt.driver [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Deleting instance files /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84_del
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.682 2 INFO nova.virt.libvirt.driver [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Deletion of /var/lib/nova/instances/03794a5e-b5ab-4b9e-8052-6de08e4c9f84_del complete
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.789 2 INFO nova.compute.manager [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Took 1.20 seconds to destroy the instance on the hypervisor.
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.790 2 DEBUG oslo.service.loopingcall [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.790 2 DEBUG nova.compute.manager [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 20:21:43 compute-0 nova_compute[355794]: 2025-10-02 20:21:43.791 2 DEBUG nova.network.neutron [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.063539249 +0000 UTC m=+0.095775860 container create 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.020659465 +0000 UTC m=+0.052896156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:44 compute-0 systemd[1]: Started libpod-conmon-108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1.scope.
Oct 02 20:21:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 204 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 02 20:21:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.240821486 +0000 UTC m=+0.273058187 container init 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.261221556 +0000 UTC m=+0.293458177 container start 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.267722355 +0000 UTC m=+0.299959056 container attach 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 20:21:44 compute-0 ecstatic_meitner[477779]: 167 167
Oct 02 20:21:44 compute-0 systemd[1]: libpod-108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1.scope: Deactivated successfully.
Oct 02 20:21:44 compute-0 conmon[477779]: conmon 108b7b6ab22a11b09eef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1.scope/container/memory.events
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.279965963 +0000 UTC m=+0.312202614 container died 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:21:44 compute-0 ceph-mon[191910]: pgmap v2310: 321 pgs: 321 active+clean; 204 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 02 20:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d492c89645277f6873be3c6399e55a05e350bbd5999bbcb73d2d2d01f6c59183-merged.mount: Deactivated successfully.
Oct 02 20:21:44 compute-0 podman[477763]: 2025-10-02 20:21:44.366715417 +0000 UTC m=+0.398952028 container remove 108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 02 20:21:44 compute-0 systemd[1]: libpod-conmon-108b7b6ab22a11b09eefcd3bf18e47bc2a5dfc37df5c5fd8eece6a3bf9786ee1.scope: Deactivated successfully.
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.655 2 DEBUG nova.network.neutron [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:21:44 compute-0 podman[477802]: 2025-10-02 20:21:44.653935091 +0000 UTC m=+0.101204951 container create 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.685 2 INFO nova.compute.manager [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Took 0.89 seconds to deallocate network for instance.
Oct 02 20:21:44 compute-0 podman[477802]: 2025-10-02 20:21:44.611207981 +0000 UTC m=+0.058477891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.737 2 DEBUG nova.compute.manager [req-aad539f0-d187-4ded-bf38-a55ca4276e2a req-8b7fb8e0-6bdd-40ff-818a-650d1be3f952 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-deleted-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:44 compute-0 systemd[1]: Started libpod-conmon-69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e.scope.
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.759 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.760 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f52fd24f35a4f2a7c7899cc83ee918f231c41d154098563604fa95303b33b89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f52fd24f35a4f2a7c7899cc83ee918f231c41d154098563604fa95303b33b89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f52fd24f35a4f2a7c7899cc83ee918f231c41d154098563604fa95303b33b89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f52fd24f35a4f2a7c7899cc83ee918f231c41d154098563604fa95303b33b89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:44 compute-0 podman[477802]: 2025-10-02 20:21:44.827108472 +0000 UTC m=+0.274378372 container init 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:21:44 compute-0 podman[477802]: 2025-10-02 20:21:44.849248617 +0000 UTC m=+0.296518477 container start 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:21:44 compute-0 podman[477802]: 2025-10-02 20:21:44.856051374 +0000 UTC m=+0.303321234 container attach 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:44 compute-0 nova_compute[355794]: 2025-10-02 20:21:44.897 2 DEBUG oslo_concurrency.processutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:21:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788033608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.455 2 DEBUG oslo_concurrency.processutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.470 2 DEBUG nova.compute.provider_tree [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:21:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2788033608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.498 2 DEBUG nova.scheduler.client.report [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.508 2 DEBUG nova.compute.manager [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.508 2 DEBUG oslo_concurrency.lockutils [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Acquiring lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.509 2 DEBUG oslo_concurrency.lockutils [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.509 2 DEBUG oslo_concurrency.lockutils [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.510 2 DEBUG nova.compute.manager [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] No waiting events found dispatching network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.510 2 WARNING nova.compute.manager [req-cc05bb0f-7c08-4c4e-a7ac-0d24ca6286b4 req-1e63ef02-d697-46e1-8632-157e6f18eab8 39204399503a40e19d042c83b36d8468 530af6e91cbd4e8684b0497d3543c484 - - default default] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Received unexpected event network-vif-plugged-8a7a2e73-aec8-473f-8f6e-6da1c63ae426 for instance with vm_state deleted and task_state None.
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.525 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.582 2 INFO nova.scheduler.client.report [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Deleted allocations for instance 03794a5e-b5ab-4b9e-8052-6de08e4c9f84
Oct 02 20:21:45 compute-0 loving_germain[477817]: {
Oct 02 20:21:45 compute-0 loving_germain[477817]:     "0": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:         {
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "devices": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "/dev/loop3"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             ],
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_name": "ceph_lv0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_size": "21470642176",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "name": "ceph_lv0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "tags": {
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_name": "ceph",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.crush_device_class": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.encrypted": "0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_id": "0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.vdo": "0"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             },
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "vg_name": "ceph_vg0"
Oct 02 20:21:45 compute-0 loving_germain[477817]:         }
Oct 02 20:21:45 compute-0 loving_germain[477817]:     ],
Oct 02 20:21:45 compute-0 loving_germain[477817]:     "1": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:         {
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "devices": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "/dev/loop4"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             ],
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_name": "ceph_lv1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_size": "21470642176",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "name": "ceph_lv1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "tags": {
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_name": "ceph",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.crush_device_class": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.encrypted": "0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_id": "1",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.vdo": "0"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             },
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "vg_name": "ceph_vg1"
Oct 02 20:21:45 compute-0 loving_germain[477817]:         }
Oct 02 20:21:45 compute-0 loving_germain[477817]:     ],
Oct 02 20:21:45 compute-0 loving_germain[477817]:     "2": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:         {
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "devices": [
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "/dev/loop5"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             ],
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_name": "ceph_lv2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_size": "21470642176",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "name": "ceph_lv2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "tags": {
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.cluster_name": "ceph",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.crush_device_class": "",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.encrypted": "0",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osd_id": "2",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:                 "ceph.vdo": "0"
Oct 02 20:21:45 compute-0 loving_germain[477817]:             },
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "type": "block",
Oct 02 20:21:45 compute-0 loving_germain[477817]:             "vg_name": "ceph_vg2"
Oct 02 20:21:45 compute-0 loving_germain[477817]:         }
Oct 02 20:21:45 compute-0 loving_germain[477817]:     ]
Oct 02 20:21:45 compute-0 loving_germain[477817]: }
Oct 02 20:21:45 compute-0 nova_compute[355794]: 2025-10-02 20:21:45.653 2 DEBUG oslo_concurrency.lockutils [None req-1642bf54-ea53-4d68-8023-0cc095a7bd9e e5d4abc29b2e475e9c7c54249ca341c4 16e65e6cbbf848e5bb5755e6da3b1d33 - - default default] Lock "03794a5e-b5ab-4b9e-8052-6de08e4c9f84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:21:45 compute-0 systemd[1]: libpod-69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e.scope: Deactivated successfully.
Oct 02 20:21:45 compute-0 podman[477802]: 2025-10-02 20:21:45.673120706 +0000 UTC m=+1.120390596 container died 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f52fd24f35a4f2a7c7899cc83ee918f231c41d154098563604fa95303b33b89-merged.mount: Deactivated successfully.
Oct 02 20:21:45 compute-0 podman[477802]: 2025-10-02 20:21:45.789131971 +0000 UTC m=+1.236401831 container remove 69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:21:45 compute-0 systemd[1]: libpod-conmon-69efc6bb9cdfd3c3b54b098389445c408636df4b9c01f116b18d581eda97849e.scope: Deactivated successfully.
Oct 02 20:21:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:45 compute-0 sudo[477699]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:45 compute-0 podman[477861]: 2025-10-02 20:21:45.979297252 +0000 UTC m=+0.119770913 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:21:45 compute-0 podman[477862]: 2025-10-02 20:21:45.999444516 +0000 UTC m=+0.134850645 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 02 20:21:46 compute-0 sudo[477879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:46 compute-0 sudo[477879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:46 compute-0 sudo[477879]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:46 compute-0 sudo[477926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:21:46 compute-0 sudo[477926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:46 compute-0 sudo[477926]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 160 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct 02 20:21:46 compute-0 sudo[477951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:46 compute-0 sudo[477951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:46 compute-0 sudo[477951]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:46 compute-0 nova_compute[355794]: 2025-10-02 20:21:46.333 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759436491.3323066, f50e6a55-f3b5-402b-91b2-12d34386f656 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:21:46 compute-0 nova_compute[355794]: 2025-10-02 20:21:46.335 2 INFO nova.compute.manager [-] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] VM Stopped (Lifecycle Event)
Oct 02 20:21:46 compute-0 nova_compute[355794]: 2025-10-02 20:21:46.356 2 DEBUG nova.compute.manager [None req-800b5d65-322c-46c7-b6e0-d3000963e967 - - - - - -] [instance: f50e6a55-f3b5-402b-91b2-12d34386f656] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:21:46 compute-0 sudo[477976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:21:46 compute-0 sudo[477976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:46 compute-0 ceph-mon[191910]: pgmap v2311: 321 pgs: 321 active+clean; 160 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.123844745 +0000 UTC m=+0.132476213 container create 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.070261643 +0000 UTC m=+0.078893171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:47 compute-0 systemd[1]: Started libpod-conmon-2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221.scope.
Oct 02 20:21:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.266182754 +0000 UTC m=+0.274814222 container init 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.285593579 +0000 UTC m=+0.294225017 container start 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.291869512 +0000 UTC m=+0.300501040 container attach 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:21:47 compute-0 flamboyant_raman[478056]: 167 167
Oct 02 20:21:47 compute-0 systemd[1]: libpod-2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221.scope: Deactivated successfully.
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.304157611 +0000 UTC m=+0.312789089 container died 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25ba8eb5e0523342ac13122f577a3bb667de1f27866879253899b631707233b-merged.mount: Deactivated successfully.
Oct 02 20:21:47 compute-0 podman[478040]: 2025-10-02 20:21:47.377011444 +0000 UTC m=+0.385642882 container remove 2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:21:47 compute-0 systemd[1]: libpod-conmon-2b5491746c15b580bb293ca5bd0661bad5677d940fb3fdb204f4306ed6b57221.scope: Deactivated successfully.
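The six podman events above (create, init, start, attach, died, remove, all inside one second) are the signature of cephadm launching a throwaway ceph container to run a single command; the "167 167" line is that container's stdout, plausibly a uid/gid probe (167 is the ceph user and group id inside ceph images). A minimal sketch for watching such short-lived containers live, assuming a rootful podman 4.x whose `podman events --format json` emits one JSON object per line (field names per podman's JSON event schema):

    import json
    import subprocess

    # Follow the podman event stream until interrupted; each line is one
    # JSON event object. Filter to ceph images like the ones logged above.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if "ceph" in ev.get("Image", ""):
            print(ev.get("Time"), ev.get("Status"),
                  ev.get("Name"), ev.get("ID", "")[:12])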
Oct 02 20:21:47 compute-0 podman[478079]: 2025-10-02 20:21:47.655587254 +0000 UTC m=+0.096349245 container create b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:21:47 compute-0 podman[478079]: 2025-10-02 20:21:47.618832649 +0000 UTC m=+0.059594700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:21:47 compute-0 systemd[1]: Started libpod-conmon-b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb.scope.
Oct 02 20:21:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca2a21ca4fcdc7e2801902bc9526fe10de9b842f4732d76bcd115b70a84916d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca2a21ca4fcdc7e2801902bc9526fe10de9b842f4732d76bcd115b70a84916d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca2a21ca4fcdc7e2801902bc9526fe10de9b842f4732d76bcd115b70a84916d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca2a21ca4fcdc7e2801902bc9526fe10de9b842f4732d76bcd115b70a84916d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
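The four xfs lines above are the kernel noting that these overlay mounts use the classic 32-bit inode timestamp format, which tops out at 0x7fffffff seconds after the Unix epoch (filesystems formatted with the xfs bigtime feature push this out to the year 2486). What that limit means in calendar time:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit second count since the epoch.
    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00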
Oct 02 20:21:47 compute-0 podman[478079]: 2025-10-02 20:21:47.858986839 +0000 UTC m=+0.299748810 container init b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:21:47 compute-0 podman[478079]: 2025-10-02 20:21:47.888673881 +0000 UTC m=+0.329435842 container start b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:21:47 compute-0 nova_compute[355794]: 2025-10-02 20:21:47.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:47 compute-0 podman[478079]: 2025-10-02 20:21:47.939244925 +0000 UTC m=+0.380006996 container attach b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:21:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:49 compute-0 modest_vaughan[478095]: {
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_id": 1,
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "type": "bluestore"
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     },
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_id": 2,
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "type": "bluestore"
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     },
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_id": 0,
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:         "type": "bluestore"
Oct 02 20:21:49 compute-0 modest_vaughan[478095]:     }
Oct 02 20:21:49 compute-0 modest_vaughan[478095]: }
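The JSON printed by the modest_vaughan container is an OSD inventory keyed by osd_uuid, one entry per bluestore OSD with its LVM device and id (the shape matches ceph-volume raw/lvm list style output, which cephadm then stores via the config-key set calls below). A sketch that turns a captured payload into an osd_id to device map, abbreviated here to a single entry:

    import json

    # Payload as captured from the journal above (osd 0 and 2 omitted
    # for brevity; the real output carries all three OSDs).
    payload = """
    {
        "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
            "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
            "type": "bluestore"
        }
    }
    """
    osds = json.loads(payload)
    for uuid, osd in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']}), uuid={uuid}")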
Oct 02 20:21:49 compute-0 systemd[1]: libpod-b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb.scope: Deactivated successfully.
Oct 02 20:21:49 compute-0 systemd[1]: libpod-b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb.scope: Consumed 1.229s CPU time.
Oct 02 20:21:49 compute-0 podman[478128]: 2025-10-02 20:21:49.237894072 +0000 UTC m=+0.065686207 container died b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 20:21:49 compute-0 ceph-mon[191910]: pgmap v2312: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca2a21ca4fcdc7e2801902bc9526fe10de9b842f4732d76bcd115b70a84916d-merged.mount: Deactivated successfully.
Oct 02 20:21:49 compute-0 podman[478128]: 2025-10-02 20:21:49.378386113 +0000 UTC m=+0.206178228 container remove b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:21:49 compute-0 systemd[1]: libpod-conmon-b1d4b8c684fd08bbf266bfbc0c2d875de44c126498bce9ca56b533881d607adb.scope: Deactivated successfully.
Oct 02 20:21:49 compute-0 sudo[477976]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:21:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:21:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a7709121-a683-41f4-b3c1-8a580ab1e7f2 does not exist
Oct 02 20:21:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 89a0ae84-ed2c-4d1e-a69d-e57a3550c036 does not exist
Oct 02 20:21:49 compute-0 sudo[478143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:21:49 compute-0 sudo[478143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:49 compute-0 sudo[478143]: pam_unix(sudo:session): session closed for user root
Oct 02 20:21:49 compute-0 sudo[478168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:21:49 compute-0 sudo[478168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:21:49 compute-0 sudo[478168]: pam_unix(sudo:session): session closed for user root
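ceph-admin is cephadm's ssh account, and each remote probe (/bin/true as a sudo smoke test, then /bin/ls /etc/sysctl.d) leaves a sudo COMMAND= record bracketed by pam_unix session open/close lines. Pulling the executed commands back out of such records, a sketch:

    import re

    # Matches journald sudo records like:
    #   sudo[478168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
    SUDO_RE = re.compile(r"sudo\[\d+\]: (\S+) : .*?USER=(\S+) ; COMMAND=(.*)$")

    line = ("Oct 02 20:21:49 compute-0 sudo[478168]: ceph-admin : "
            "PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d")
    m = SUDO_RE.search(line)
    if m:
        caller, target, command = m.groups()
        print(f"{caller} ran as {target}: {command}")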
Oct 02 20:21:50 compute-0 nova_compute[355794]: 2025-10-02 20:21:50.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:21:50 compute-0 ceph-mon[191910]: pgmap v2313: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.826839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510826885, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1214, "num_deletes": 255, "total_data_size": 1804355, "memory_usage": 1834336, "flush_reason": "Manual Compaction"}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510842674, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1775856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46309, "largest_seqno": 47522, "table_properties": {"data_size": 1770071, "index_size": 3116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12071, "raw_average_key_size": 19, "raw_value_size": 1758471, "raw_average_value_size": 2827, "num_data_blocks": 140, "num_entries": 622, "num_filter_entries": 622, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436392, "oldest_key_time": 1759436392, "file_creation_time": 1759436510, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 15894 microseconds, and 10323 cpu microseconds.
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.842733) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1775856 bytes OK
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.842756) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.845216) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.845229) EVENT_LOG_v1 {"time_micros": 1759436510845225, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.845247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1798818, prev total WAL file size 1798818, number of live WAL files 2.
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.846337) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373536' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1734KB)], [110(7750KB)]
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510846501, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9712443, "oldest_snapshot_seqno": -1}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6193 keys, 9605783 bytes, temperature: kUnknown
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510959530, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9605783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9564847, "index_size": 24378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 161522, "raw_average_key_size": 26, "raw_value_size": 9453247, "raw_average_value_size": 1526, "num_data_blocks": 974, "num_entries": 6193, "num_filter_entries": 6193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436510, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.959938) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9605783 bytes
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.968833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.8 rd, 84.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 6715, records dropped: 522 output_compression: NoCompression
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.968866) EVENT_LOG_v1 {"time_micros": 1759436510968850, "job": 66, "event": "compaction_finished", "compaction_time_micros": 113153, "compaction_time_cpu_micros": 46866, "output_level": 6, "num_output_files": 1, "total_output_size": 9605783, "num_input_records": 6715, "num_output_records": 6193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510969736, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510973333, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.846044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.973762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.973769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.973772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.973776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:21:50 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:21:50.973779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
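rocksdb's EVENT_LOG_v1 records carry a JSON document after the marker, so the flush and compaction events in this mon log can be recovered as structured data; a sketch:

    import json

    def parse_event_log(line):
        """Return the JSON payload of a rocksdb EVENT_LOG_v1 record, or None."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1759436510826885, "job": 65, '
            '"event": "flush_started", "num_memtables": 1, "num_entries": 1214}')
    ev = parse_event_log(line)
    print(ev["job"], ev["event"], ev["num_entries"])   # 65 flush_started 1214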
Oct 02 20:21:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:52 compute-0 podman[478193]: 2025-10-02 20:21:52.700088663 +0000 UTC m=+0.123885750 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:21:52 compute-0 podman[478194]: 2025-10-02 20:21:52.717000823 +0000 UTC m=+0.127620588 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, name=ubi9, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, config_id=edpm, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 20:21:52 compute-0 nova_compute[355794]: 2025-10-02 20:21:52.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:53 compute-0 ceph-mon[191910]: pgmap v2314: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 02 20:21:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 02 20:21:54 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 02 20:21:55 compute-0 nova_compute[355794]: 2025-10-02 20:21:55.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:55 compute-0 ceph-mon[191910]: pgmap v2315: 321 pgs: 321 active+clean; 139 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 20:21:55 compute-0 ceph-mon[191910]: osdmap e141: 3 total, 3 up, 3 in
Oct 02 20:21:55 compute-0 podman[478235]: 2025-10-02 20:21:55.721322464 +0000 UTC m=+0.120748848 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 20:21:55 compute-0 podman[478233]: 2025-10-02 20:21:55.736106369 +0000 UTC m=+0.156102338 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:21:55 compute-0 podman[478234]: 2025-10-02 20:21:55.740235786 +0000 UTC m=+0.152451113 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:21:55 compute-0 podman[478240]: 2025-10-02 20:21:55.751771106 +0000 UTC m=+0.143763467 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:21:55 compute-0 podman[478236]: 2025-10-02 20:21:55.771794156 +0000 UTC m=+0.163513780 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
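The burst of health_status=healthy events here is podman's healthcheck timers firing for the edpm-managed containers; each config_data blob above names its probe ('test': '/openstack/healthcheck ...') and the healthcheck mount it runs against. The same probe can be run on demand; a sketch using the podman CLI, with container names taken from the events above:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command and exits 0 for healthy, non-zero otherwise.
    for name in ["ceilometer_agent_ipmi", "iscsid", "ovn_controller"]:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")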
Oct 02 20:21:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:21:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 139 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 819 KiB/s wr, 16 op/s
Oct 02 20:21:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 02 20:21:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 02 20:21:56 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 02 20:21:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 02 20:21:57 compute-0 ceph-mon[191910]: pgmap v2317: 321 pgs: 321 active+clean; 139 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 819 KiB/s wr, 16 op/s
Oct 02 20:21:57 compute-0 ceph-mon[191910]: osdmap e142: 3 total, 3 up, 3 in
Oct 02 20:21:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 02 20:21:57 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 02 20:21:57 compute-0 nova_compute[355794]: 2025-10-02 20:21:57.860 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759436502.8487961, 03794a5e-b5ab-4b9e-8052-6de08e4c9f84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 20:21:57 compute-0 nova_compute[355794]: 2025-10-02 20:21:57.861 2 INFO nova.compute.manager [-] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] VM Stopped (Lifecycle Event)
Oct 02 20:21:57 compute-0 nova_compute[355794]: 2025-10-02 20:21:57.892 2 DEBUG nova.compute.manager [None req-23a33d24-b7ec-4b36-8ce7-6d901876c2f7 - - - - - -] [instance: 03794a5e-b5ab-4b9e-8052-6de08e4c9f84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 20:21:57 compute-0 nova_compute[355794]: 2025-10-02 20:21:57.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:21:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Oct 02 20:21:58 compute-0 ceph-mon[191910]: osdmap e143: 3 total, 3 up, 3 in
Oct 02 20:21:58 compute-0 ceph-mon[191910]: pgmap v2320: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.7 MiB/s wr, 70 op/s
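The pgmap has shifted from a flat 321 active+clean to a mix with snaptrim_wait and snaptrim while the data total shrinks (139 to 134 MiB): the OSDs are trimming objects that belonged to deleted snapshots. Splitting such a status line into per-state counts, a sketch:

    import re

    line = ("pgmap v2320: 321 pgs: 4 active+clean+snaptrim_wait, "
            "2 active+clean+snaptrim, 315 active+clean; 134 MiB data")
    # Take everything after the second ": " and pull out "<count> <state>" pairs.
    states = dict(
        (state, int(count))
        for count, state in re.findall(r"(\d+) ([a-z+_]+)", line.split(": ", 2)[2])
    )
    print(states)  # {'active+clean+snaptrim_wait': 4, 'active+clean+snaptrim': 2, 'active+clean': 315}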
Oct 02 20:21:59 compute-0 podman[157186]: time="2025-10-02T20:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:21:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:21:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9096 "" "Go-http-client/1.1"
Oct 02 20:22:00 compute-0 nova_compute[355794]: 2025-10-02 20:22:00.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 2.7 MiB/s wr, 162 op/s
Oct 02 20:22:00 compute-0 ovn_controller[88435]: 2025-10-02T20:22:00Z|00190|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:22:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 02 20:22:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 02 20:22:00 compute-0 nova_compute[355794]: 2025-10-02 20:22:00.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:00 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 02 20:22:01 compute-0 ceph-mon[191910]: pgmap v2321: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 2.7 MiB/s wr, 162 op/s
Oct 02 20:22:01 compute-0 ceph-mon[191910]: osdmap e144: 3 total, 3 up, 3 in
Oct 02 20:22:01 compute-0 openstack_network_exporter[372736]: ERROR   20:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:22:01 compute-0 openstack_network_exporter[372736]: ERROR   20:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:22:01 compute-0 openstack_network_exporter[372736]: ERROR   20:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:22:01 compute-0 openstack_network_exporter[372736]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:22:01 compute-0 openstack_network_exporter[372736]: ERROR   20:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
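These exporter errors are expected on a compute node: openstack_network_exporter probes for ovn-northd and ovsdb-server control sockets, but compute-0 runs only ovn-controller (northd and the OVN databases live on the control plane), and the dpif-netdev calls likely fail because no userspace (netdev) datapath exists on this host. A quick check of which appctl control sockets a host actually exposes:

    import glob

    # OVS/OVN daemons expose appctl control sockets as *.ctl files under
    # their run directories; listing them shows which daemons are reachable.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")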
Oct 02 20:22:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 126 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 02 20:22:02 compute-0 nova_compute[355794]: 2025-10-02 20:22:02.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:03 compute-0 ceph-mon[191910]: pgmap v2323: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 126 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:22:03
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.data']
Oct 02 20:22:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
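"prepared 0/10 changes" is the upmap balancer finding nothing to move: it scored the listed pools against max misplaced 0.05 and used none of its per-round budget of ten optimizations, i.e. the placement is already even. The same state is visible from the CLI; a sketch via subprocess using standard ceph commands:

    import subprocess

    # `balancer status` reports the active mode and last optimization;
    # `balancer eval` scores the current data distribution.
    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)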
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 1.6 MiB/s wr, 170 op/s
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:22:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
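The rbd_support lines show the mgr's TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler re-reading their per-pool schedules for vms, volumes, backups and images. The matching CLI views, sketched with the standard rbd commands:

    import subprocess

    # `-R` lists schedules recursively across pools, namespaces and images.
    for cmd in (["rbd", "trash", "purge", "schedule", "ls", "-R"],
                ["rbd", "mirror", "snapshot", "schedule", "ls", "-R"]):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)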
Oct 02 20:22:05 compute-0 nova_compute[355794]: 2025-10-02 20:22:05.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:05 compute-0 ceph-mon[191910]: pgmap v2324: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 1.6 MiB/s wr, 170 op/s
Oct 02 20:22:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 02 20:22:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 02 20:22:05 compute-0 ceph-mon[191910]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 02 20:22:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 571 KiB/s wr, 123 op/s
Oct 02 20:22:06 compute-0 podman[478334]: 2025-10-02 20:22:06.752215299 +0000 UTC m=+0.170002509 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 20:22:06 compute-0 ceph-mon[191910]: osdmap e145: 3 total, 3 up, 3 in
Oct 02 20:22:06 compute-0 ceph-mon[191910]: pgmap v2326: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 571 KiB/s wr, 123 op/s
Oct 02 20:22:07 compute-0 ovn_controller[88435]: 2025-10-02T20:22:07Z|00191|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:22:07 compute-0 nova_compute[355794]: 2025-10-02 20:22:07.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:07 compute-0 nova_compute[355794]: 2025-10-02 20:22:07.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 568 KiB/s wr, 54 op/s
Oct 02 20:22:09 compute-0 ceph-mon[191910]: pgmap v2327: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 568 KiB/s wr, 54 op/s
Oct 02 20:22:10 compute-0 nova_compute[355794]: 2025-10-02 20:22:10.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 436 B/s wr, 20 op/s
Oct 02 20:22:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:11 compute-0 ceph-mon[191910]: pgmap v2328: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 436 B/s wr, 20 op/s
Oct 02 20:22:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 409 B/s wr, 19 op/s
Oct 02 20:22:12 compute-0 nova_compute[355794]: 2025-10-02 20:22:12.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:13 compute-0 ceph-mon[191910]: pgmap v2329: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 409 B/s wr, 19 op/s
Oct 02 20:22:13 compute-0 ovn_controller[88435]: 2025-10-02T20:22:13Z|00192|binding|INFO|Releasing lport 4a8af5bc-5352-4506-b6d3-43b5d33802a4 from this chassis (sb_readonly=0)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:22:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:22:13 compute-0 nova_compute[355794]: 2025-10-02 20:22:13.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:15 compute-0 nova_compute[355794]: 2025-10-02 20:22:15.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:15 compute-0 ceph-mon[191910]: pgmap v2330: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:16 compute-0 ceph-mon[191910]: pgmap v2331: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:16 compute-0 podman[478355]: 2025-10-02 20:22:16.737900152 +0000 UTC m=+0.141165089 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 20:22:16 compute-0 podman[478354]: 2025-10-02 20:22:16.770565701 +0000 UTC m=+0.182927145 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.613 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.613 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.613 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.614 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.614 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:22:17 compute-0 nova_compute[355794]: 2025-10-02 20:22:17.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:22:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861233401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:22:18 compute-0 nova_compute[355794]: 2025-10-02 20:22:18.205 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:22:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2861233401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:22:18 compute-0 nova_compute[355794]: 2025-10-02 20:22:18.337 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:22:18 compute-0 nova_compute[355794]: 2025-10-02 20:22:18.338 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:22:18 compute-0 nova_compute[355794]: 2025-10-02 20:22:18.338 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.008 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.011 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3694MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.011 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.012 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:22:19 compute-0 ceph-mon[191910]: pgmap v2332: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.351 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.352 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.352 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:22:19 compute-0 nova_compute[355794]: 2025-10-02 20:22:19.495 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:22:19 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:22:19 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716095957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.020 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.035 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.060 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.089 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.090 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:22:20 compute-0 nova_compute[355794]: 2025-10-02 20:22:20.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:22:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900002245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:22:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:22:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900002245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:22:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/716095957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:22:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2900002245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:22:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2900002245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:22:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:21 compute-0 nova_compute[355794]: 2025-10-02 20:22:21.090 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:21 compute-0 nova_compute[355794]: 2025-10-02 20:22:21.091 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:22:21 compute-0 nova_compute[355794]: 2025-10-02 20:22:21.091 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:22:21 compute-0 ceph-mon[191910]: pgmap v2333: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:22 compute-0 nova_compute[355794]: 2025-10-02 20:22:22.135 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:22:22 compute-0 nova_compute[355794]: 2025-10-02 20:22:22.136 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:22:22 compute-0 nova_compute[355794]: 2025-10-02 20:22:22.137 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:22:22 compute-0 nova_compute[355794]: 2025-10-02 20:22:22.138 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:22:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:22 compute-0 ceph-mon[191910]: pgmap v2334: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:22 compute-0 nova_compute[355794]: 2025-10-02 20:22:22.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:23 compute-0 podman[478441]: 2025-10-02 20:22:23.739097919 +0000 UTC m=+0.149004293 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Oct 02 20:22:23 compute-0 podman[478442]: 2025-10-02 20:22:23.760486825 +0000 UTC m=+0.166217851 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release=1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9)
Oct 02 20:22:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.242 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.262 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.263 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.264 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.264 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.265 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.265 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.266 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:24 compute-0 nova_compute[355794]: 2025-10-02 20:22:24.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:25 compute-0 nova_compute[355794]: 2025-10-02 20:22:25.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:25 compute-0 ceph-mon[191910]: pgmap v2335: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:26 compute-0 nova_compute[355794]: 2025-10-02 20:22:26.593 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:26 compute-0 podman[478482]: 2025-10-02 20:22:26.717900157 +0000 UTC m=+0.124740822 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:22:26 compute-0 podman[478490]: 2025-10-02 20:22:26.727268601 +0000 UTC m=+0.109649621 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:22:26 compute-0 podman[478483]: 2025-10-02 20:22:26.734358605 +0000 UTC m=+0.130890402 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Oct 02 20:22:26 compute-0 podman[478481]: 2025-10-02 20:22:26.7368411 +0000 UTC m=+0.151292703 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 20:22:26 compute-0 podman[478484]: 2025-10-02 20:22:26.764249582 +0000 UTC m=+0.154785234 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 20:22:27 compute-0 ceph-mon[191910]: pgmap v2336: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:27 compute-0 nova_compute[355794]: 2025-10-02 20:22:27.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:29 compute-0 ceph-mon[191910]: pgmap v2337: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:29 compute-0 nova_compute[355794]: 2025-10-02 20:22:29.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:29 compute-0 podman[157186]: time="2025-10-02T20:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:22:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:22:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9106 "" "Go-http-client/1.1"
Oct 02 20:22:30 compute-0 nova_compute[355794]: 2025-10-02 20:22:30.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:30 compute-0 ceph-mon[191910]: pgmap v2338: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:31 compute-0 openstack_network_exporter[372736]: ERROR   20:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:22:31 compute-0 openstack_network_exporter[372736]: ERROR   20:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:22:31 compute-0 openstack_network_exporter[372736]: ERROR   20:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:22:31 compute-0 openstack_network_exporter[372736]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:22:31 compute-0 openstack_network_exporter[372736]: ERROR   20:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:22:31 compute-0 nova_compute[355794]: 2025-10-02 20:22:31.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:31 compute-0 nova_compute[355794]: 2025-10-02 20:22:31.620 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:31 compute-0 nova_compute[355794]: 2025-10-02 20:22:31.621 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:22:31 compute-0 nova_compute[355794]: 2025-10-02 20:22:31.662 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:22:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:22:32.343 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:22:32.344 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:22:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:22:32.345 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:22:32 compute-0 nova_compute[355794]: 2025-10-02 20:22:32.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:33 compute-0 ceph-mon[191910]: pgmap v2339: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:22:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:22:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:35 compute-0 nova_compute[355794]: 2025-10-02 20:22:35.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:35 compute-0 ceph-mon[191910]: pgmap v2340: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:37 compute-0 ceph-mon[191910]: pgmap v2341: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:37 compute-0 podman[478581]: 2025-10-02 20:22:37.743845713 +0000 UTC m=+0.156970920 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:22:37 compute-0 nova_compute[355794]: 2025-10-02 20:22:37.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:39 compute-0 ceph-mon[191910]: pgmap v2342: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:40 compute-0 nova_compute[355794]: 2025-10-02 20:22:40.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:40 compute-0 ceph-mon[191910]: pgmap v2343: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:42 compute-0 nova_compute[355794]: 2025-10-02 20:22:42.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:22:42 compute-0 nova_compute[355794]: 2025-10-02 20:22:42.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:22:42 compute-0 nova_compute[355794]: 2025-10-02 20:22:42.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:43 compute-0 ceph-mon[191910]: pgmap v2344: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:44 compute-0 ovn_controller[88435]: 2025-10-02T20:22:44Z|00193|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 02 20:22:45 compute-0 nova_compute[355794]: 2025-10-02 20:22:45.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:45 compute-0 ceph-mon[191910]: pgmap v2345: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:47 compute-0 ceph-mon[191910]: pgmap v2346: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:47 compute-0 podman[478601]: 2025-10-02 20:22:47.680982945 +0000 UTC m=+0.100246316 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:22:47 compute-0 podman[478602]: 2025-10-02 20:22:47.750311886 +0000 UTC m=+0.156510188 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct 02 20:22:47 compute-0 nova_compute[355794]: 2025-10-02 20:22:47.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:49 compute-0 ceph-mon[191910]: pgmap v2347: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:49 compute-0 sudo[478643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:49 compute-0 sudo[478643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:49 compute-0 sudo[478643]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:50 compute-0 sudo[478668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:22:50 compute-0 sudo[478668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:50 compute-0 sudo[478668]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:50 compute-0 sudo[478693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:50 compute-0 nova_compute[355794]: 2025-10-02 20:22:50.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:50 compute-0 sudo[478693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:50 compute-0 sudo[478693]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:50 compute-0 sudo[478718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:22:50 compute-0 sudo[478718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:50 compute-0 ceph-mon[191910]: pgmap v2348: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:50 compute-0 sudo[478718]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:22:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c6e99a0f-8cef-45a1-9cc8-c860b1c04d8d does not exist
Oct 02 20:22:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ec39301f-36da-454f-b48a-a9a955aae103 does not exist
Oct 02 20:22:51 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 36a3cb2e-f2f3-4eee-be93-3eaea1ad72f3 does not exist
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:22:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:22:51 compute-0 sudo[478773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:51 compute-0 sudo[478773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:51 compute-0 sudo[478773]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:51 compute-0 sudo[478798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:22:51 compute-0 sudo[478798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:51 compute-0 sudo[478798]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:22:51 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:22:51 compute-0 sudo[478823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:51 compute-0 sudo[478823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:51 compute-0 sudo[478823]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:51 compute-0 sudo[478848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:22:51 compute-0 sudo[478848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.252744709 +0000 UTC m=+0.072868384 container create 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 20:22:52 compute-0 systemd[1]: Started libpod-conmon-3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99.scope.
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.230099601 +0000 UTC m=+0.050223246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:52 compute-0 ceph-mon[191910]: pgmap v2349: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.389310258 +0000 UTC m=+0.209433913 container init 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.406508845 +0000 UTC m=+0.226632520 container start 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.41285372 +0000 UTC m=+0.232977355 container attach 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:22:52 compute-0 upbeat_mccarthy[478926]: 167 167
Oct 02 20:22:52 compute-0 systemd[1]: libpod-3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99.scope: Deactivated successfully.
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.418969229 +0000 UTC m=+0.239092884 container died 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e49cccbee00b7bf2239e830e438e2d2d7c82124ea22727015f89ddfff5e2e5f2-merged.mount: Deactivated successfully.
Oct 02 20:22:52 compute-0 podman[478909]: 2025-10-02 20:22:52.492460209 +0000 UTC m=+0.312583844 container remove 3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:22:52 compute-0 systemd[1]: libpod-conmon-3ce8bc8210a1d4235342ef3df9393704ff47fff564282619385c234d0d760d99.scope: Deactivated successfully.
Oct 02 20:22:52 compute-0 podman[478950]: 2025-10-02 20:22:52.782850195 +0000 UTC m=+0.102856444 container create 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:22:52 compute-0 podman[478950]: 2025-10-02 20:22:52.744825277 +0000 UTC m=+0.064831576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:52 compute-0 systemd[1]: Started libpod-conmon-23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e.scope.
Oct 02 20:22:52 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:52 compute-0 nova_compute[355794]: 2025-10-02 20:22:52.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:52 compute-0 podman[478950]: 2025-10-02 20:22:52.983981552 +0000 UTC m=+0.303987791 container init 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:22:53 compute-0 podman[478950]: 2025-10-02 20:22:53.024506475 +0000 UTC m=+0.344512694 container start 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:22:53 compute-0 podman[478950]: 2025-10-02 20:22:53.030972903 +0000 UTC m=+0.350979122 container attach 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:22:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:54 compute-0 quizzical_antonelli[478966]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:22:54 compute-0 quizzical_antonelli[478966]: --> relative data size: 1.0
Oct 02 20:22:54 compute-0 quizzical_antonelli[478966]: --> All data devices are unavailable
Oct 02 20:22:54 compute-0 systemd[1]: libpod-23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e.scope: Deactivated successfully.
Oct 02 20:22:54 compute-0 systemd[1]: libpod-23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e.scope: Consumed 1.381s CPU time.
Oct 02 20:22:54 compute-0 podman[478950]: 2025-10-02 20:22:54.460317786 +0000 UTC m=+1.780324025 container died 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 20:22:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d694f4f4fbe187a9a74da66792ada46e3278cf91a60c3dc4cec9a19cdf4d993-merged.mount: Deactivated successfully.
Oct 02 20:22:54 compute-0 podman[478950]: 2025-10-02 20:22:54.583722283 +0000 UTC m=+1.903728502 container remove 23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:22:54 compute-0 systemd[1]: libpod-conmon-23b8a686d7e6ab90110eacba4deffb7d17b5d25c74e2dcef9732c5fa4f54071e.scope: Deactivated successfully.
Oct 02 20:22:54 compute-0 sudo[478848]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:54 compute-0 podman[478996]: 2025-10-02 20:22:54.636251398 +0000 UTC m=+0.132155545 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:22:54 compute-0 podman[479003]: 2025-10-02 20:22:54.663895286 +0000 UTC m=+0.137658518 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, io.openshift.expose-services=)
Oct 02 20:22:54 compute-0 sudo[479043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:54 compute-0 sudo[479043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:54 compute-0 sudo[479043]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:54 compute-0 sudo[479070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:22:54 compute-0 sudo[479070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:54 compute-0 sudo[479070]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:55 compute-0 sudo[479095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:55 compute-0 sudo[479095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:55 compute-0 sudo[479095]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:55 compute-0 nova_compute[355794]: 2025-10-02 20:22:55.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:55 compute-0 sudo[479120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:22:55 compute-0 sudo[479120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:55 compute-0 ceph-mon[191910]: pgmap v2350: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.770025811 +0000 UTC m=+0.083128821 container create 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.736374367 +0000 UTC m=+0.049477477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:55 compute-0 systemd[1]: Started libpod-conmon-090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4.scope.
Oct 02 20:22:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:22:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.903994302 +0000 UTC m=+0.217097362 container init 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.915749308 +0000 UTC m=+0.228852368 container start 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.922855603 +0000 UTC m=+0.235958663 container attach 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:22:55 compute-0 bold_sinoussi[479201]: 167 167
Oct 02 20:22:55 compute-0 systemd[1]: libpod-090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4.scope: Deactivated successfully.
Oct 02 20:22:55 compute-0 podman[479185]: 2025-10-02 20:22:55.927146064 +0000 UTC m=+0.240249074 container died 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 20:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbbe60a5f8722c4a614cfd2e1133273c4efd6f1854f67538da3fff80e369c7e8-merged.mount: Deactivated successfully.
Oct 02 20:22:56 compute-0 podman[479185]: 2025-10-02 20:22:56.003099538 +0000 UTC m=+0.316202568 container remove 090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:22:56 compute-0 systemd[1]: libpod-conmon-090ff26489bd2470f6ebd17fd2393cf30ed233a3f4dd7a1653dcec81a1914fd4.scope: Deactivated successfully.
Oct 02 20:22:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:56 compute-0 podman[479223]: 2025-10-02 20:22:56.260257051 +0000 UTC m=+0.080596586 container create d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 02 20:22:56 compute-0 podman[479223]: 2025-10-02 20:22:56.225006585 +0000 UTC m=+0.045346200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:56 compute-0 systemd[1]: Started libpod-conmon-d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420.scope.
Oct 02 20:22:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71233f89347f396ce5ffea9d7365cf7475b44daa107922a29a2d8d7bfbbcdeda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71233f89347f396ce5ffea9d7365cf7475b44daa107922a29a2d8d7bfbbcdeda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71233f89347f396ce5ffea9d7365cf7475b44daa107922a29a2d8d7bfbbcdeda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71233f89347f396ce5ffea9d7365cf7475b44daa107922a29a2d8d7bfbbcdeda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:56 compute-0 podman[479223]: 2025-10-02 20:22:56.418892993 +0000 UTC m=+0.239232538 container init d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 20:22:56 compute-0 podman[479223]: 2025-10-02 20:22:56.435911615 +0000 UTC m=+0.256251140 container start d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:22:56 compute-0 podman[479223]: 2025-10-02 20:22:56.440491714 +0000 UTC m=+0.260831229 container attach d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 02 20:22:57 compute-0 musing_cray[479238]: {
Oct 02 20:22:57 compute-0 musing_cray[479238]:     "0": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:         {
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "devices": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "/dev/loop3"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             ],
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_name": "ceph_lv0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_size": "21470642176",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "name": "ceph_lv0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "tags": {
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_name": "ceph",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.crush_device_class": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.encrypted": "0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_id": "0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.vdo": "0"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             },
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "vg_name": "ceph_vg0"
Oct 02 20:22:57 compute-0 musing_cray[479238]:         }
Oct 02 20:22:57 compute-0 musing_cray[479238]:     ],
Oct 02 20:22:57 compute-0 musing_cray[479238]:     "1": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:         {
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "devices": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "/dev/loop4"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             ],
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_name": "ceph_lv1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_size": "21470642176",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "name": "ceph_lv1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "tags": {
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_name": "ceph",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.crush_device_class": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.encrypted": "0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_id": "1",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.vdo": "0"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             },
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "vg_name": "ceph_vg1"
Oct 02 20:22:57 compute-0 musing_cray[479238]:         }
Oct 02 20:22:57 compute-0 musing_cray[479238]:     ],
Oct 02 20:22:57 compute-0 musing_cray[479238]:     "2": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:         {
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "devices": [
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "/dev/loop5"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             ],
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_name": "ceph_lv2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_size": "21470642176",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "name": "ceph_lv2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "tags": {
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.cluster_name": "ceph",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.crush_device_class": "",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.encrypted": "0",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osd_id": "2",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:                 "ceph.vdo": "0"
Oct 02 20:22:57 compute-0 musing_cray[479238]:             },
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "type": "block",
Oct 02 20:22:57 compute-0 musing_cray[479238]:             "vg_name": "ceph_vg2"
Oct 02 20:22:57 compute-0 musing_cray[479238]:         }
Oct 02 20:22:57 compute-0 musing_cray[479238]:     ]
Oct 02 20:22:57 compute-0 musing_cray[479238]: }
Oct 02 20:22:57 compute-0 systemd[1]: libpod-d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420.scope: Deactivated successfully.
Oct 02 20:22:57 compute-0 ceph-mon[191910]: pgmap v2351: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:57 compute-0 podman[479247]: 2025-10-02 20:22:57.411964439 +0000 UTC m=+0.067336141 container died d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-71233f89347f396ce5ffea9d7365cf7475b44daa107922a29a2d8d7bfbbcdeda-merged.mount: Deactivated successfully.
Oct 02 20:22:57 compute-0 podman[479247]: 2025-10-02 20:22:57.490384977 +0000 UTC m=+0.145756639 container remove d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:22:57 compute-0 systemd[1]: libpod-conmon-d247a049ea1575a5fe721cca39a0aca96e7a0bb7bf4066e269b6abf4b1d64420.scope: Deactivated successfully.
Oct 02 20:22:57 compute-0 podman[479255]: 2025-10-02 20:22:57.508148778 +0000 UTC m=+0.129074235 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct 02 20:22:57 compute-0 podman[479248]: 2025-10-02 20:22:57.515870919 +0000 UTC m=+0.142245347 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:22:57 compute-0 podman[479256]: 2025-10-02 20:22:57.540980442 +0000 UTC m=+0.150317608 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:22:57 compute-0 sudo[479120]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:57 compute-0 podman[479264]: 2025-10-02 20:22:57.545227712 +0000 UTC m=+0.148636074 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:22:57 compute-0 podman[479254]: 2025-10-02 20:22:57.548918268 +0000 UTC m=+0.179690281 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
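The five health_status=healthy events above come from podman's periodic container healthchecks. A small sketch, assuming the podman CLI is on PATH and these container names exist on this host, of reading the same health state on demand:

import json
import subprocess

def health(name):
    # Ask podman for the container's health struct (Status, FailingStreak, Log)
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

for name in ("ovn_controller", "ovn_metadata_agent", "iscsid", "node_exporter"):
    st = health(name)
    print(name, st.get("Status"), "failing_streak:", st.get("FailingStreak"))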
Oct 02 20:22:57 compute-0 sudo[479362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:57 compute-0 sudo[479362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:57 compute-0 sudo[479362]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:57 compute-0 sudo[479387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:22:57 compute-0 sudo[479387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:57 compute-0 sudo[479387]: pam_unix(sudo:session): session closed for user root
Oct 02 20:22:57 compute-0 sudo[479412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:22:57 compute-0 sudo[479412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:57 compute-0 sudo[479412]: pam_unix(sudo:session): session closed for user root
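The repeating sudo triad above (/bin/true, /bin/which python3, /bin/true) is cephadm's host-readiness probe run as ceph-admin: verify passwordless sudo, then locate a Python interpreter. A hedged local re-creation of that pattern (the real orchestrator issues these over SSH; the -n flag only makes a missing sudo grant fail instead of prompting):

import subprocess

def sudo_ok():
    # Mirrors the logged `sudo /bin/true` check
    return subprocess.run(["sudo", "-n", "true"]).returncode == 0

def sudo_python3():
    # Mirrors the logged `sudo /bin/which python3` check
    r = subprocess.run(["sudo", "-n", "which", "python3"],
                       capture_output=True, text=True)
    return r.stdout.strip() or None

if sudo_ok():
    print("python3 at:", sudo_python3())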
Oct 02 20:22:57 compute-0 nova_compute[355794]: 2025-10-02 20:22:57.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:22:58 compute-0 sudo[479437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:22:58 compute-0 sudo[479437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:22:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:58 compute-0 ceph-mon[191910]: pgmap v2352: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.592171709 +0000 UTC m=+0.073981924 container create bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.553197396 +0000 UTC m=+0.035007661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:58 compute-0 systemd[1]: Started libpod-conmon-bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896.scope.
Oct 02 20:22:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.730210016 +0000 UTC m=+0.212020211 container init bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.748497081 +0000 UTC m=+0.230307296 container start bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.758280215 +0000 UTC m=+0.240090400 container attach bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:22:58 compute-0 vibrant_johnson[479517]: 167 167
Oct 02 20:22:58 compute-0 systemd[1]: libpod-bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896.scope: Deactivated successfully.
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.762158756 +0000 UTC m=+0.243968941 container died bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be4d58e8511b5f9c1b2b13c9ce7f5885e7a3f045551951d482fadf9d27910775-merged.mount: Deactivated successfully.
Oct 02 20:22:58 compute-0 podman[479502]: 2025-10-02 20:22:58.821370175 +0000 UTC m=+0.303180350 container remove bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:22:58 compute-0 systemd[1]: libpod-conmon-bc2fd7859333bea4059f41f95cf8cebcf6d56b182d54768cc495b7aade5b9896.scope: Deactivated successfully.
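The bare `167 167` printed by the short-lived vibrant_johnson container is consistent with cephadm probing the image for the uid/gid its daemons run as (167:167 is the ceph user in upstream Ceph images). A sketch of such a probe; the stat target path and entrypoint are assumptions, not taken from the log:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Assumed probe: stat the ceph state dir inside the image to learn uid/gid
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = map(int, out.split())
print(f"ceph daemons run as uid={uid} gid={gid}")  # "167 167" in the log above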
Oct 02 20:22:59 compute-0 podman[479542]: 2025-10-02 20:22:59.108676131 +0000 UTC m=+0.078865570 container create 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 02 20:22:59 compute-0 podman[479542]: 2025-10-02 20:22:59.083124687 +0000 UTC m=+0.053314156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:22:59 compute-0 systemd[1]: Started libpod-conmon-56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99.scope.
Oct 02 20:22:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ca21af33141e570b26cf017ba2242f335f839c0af5607bb773db432d37f32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ca21af33141e570b26cf017ba2242f335f839c0af5607bb773db432d37f32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ca21af33141e570b26cf017ba2242f335f839c0af5607bb773db432d37f32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ca21af33141e570b26cf017ba2242f335f839c0af5607bb773db432d37f32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
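The four xfs warnings above indicate these overlay mounts lack the xfs bigtime feature, so inode timestamps top out at a signed 32-bit time_t. A quick check of what the kernel's 0x7fffffff limit means in calendar terms:

from datetime import datetime, timezone

limit = 0x7FFFFFFF  # signed 32-bit time_t maximum, as printed by the kernel
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- the "year 2038" boundary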
Oct 02 20:22:59 compute-0 podman[479542]: 2025-10-02 20:22:59.282465987 +0000 UTC m=+0.252655446 container init 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:22:59 compute-0 podman[479542]: 2025-10-02 20:22:59.303463863 +0000 UTC m=+0.273653332 container start 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 20:22:59 compute-0 podman[479542]: 2025-10-02 20:22:59.309354036 +0000 UTC m=+0.279543475 container attach 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 20:22:59 compute-0 podman[157186]: time="2025-10-02T20:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:22:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47835 "" "Go-http-client/1.1"
Oct 02 20:22:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9517 "" "Go-http-client/1.1"
Oct 02 20:23:00 compute-0 nova_compute[355794]: 2025-10-02 20:23:00.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:00 compute-0 eloquent_allen[479559]: {
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_id": 1,
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "type": "bluestore"
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     },
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_id": 2,
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "type": "bluestore"
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     },
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_id": 0,
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:         "type": "bluestore"
Oct 02 20:23:00 compute-0 eloquent_allen[479559]:     }
Oct 02 20:23:00 compute-0 eloquent_allen[479559]: }
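This second JSON block, from eloquent_allen, is `ceph-volume raw list --format json` output, keyed by osd_uuid rather than OSD id. A sketch that walks it and could be cross-checked against the earlier lvm listing; the file name is hypothetical:

import json

with open("raw_list.json") as f:  # hypothetical dump of the block above
    raw = json.load(f)

# Top level is {"<osd_uuid>": {ceph_fsid, device, osd_id, osd_uuid, type}}
for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    assert info["osd_uuid"] == osd_uuid  # key and payload agree in the log
    print(f"osd.{info['osd_id']}: {info['device']} "
          f"type={info['type']} ceph_fsid={info['ceph_fsid']}")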
Oct 02 20:23:00 compute-0 systemd[1]: libpod-56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99.scope: Deactivated successfully.
Oct 02 20:23:00 compute-0 systemd[1]: libpod-56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99.scope: Consumed 1.290s CPU time.
Oct 02 20:23:00 compute-0 podman[479592]: 2025-10-02 20:23:00.694899962 +0000 UTC m=+0.056343415 container died 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ca21af33141e570b26cf017ba2242f335f839c0af5607bb773db432d37f32f-merged.mount: Deactivated successfully.
Oct 02 20:23:00 compute-0 podman[479592]: 2025-10-02 20:23:00.789640603 +0000 UTC m=+0.151084006 container remove 56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 20:23:00 compute-0 systemd[1]: libpod-conmon-56e9bf79fd9cf18258af0b303218cc601931b79f0e614bb6cd4bbe9b310f1c99.scope: Deactivated successfully.
Oct 02 20:23:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:00 compute-0 sudo[479437]: pam_unix(sudo:session): session closed for user root
Oct 02 20:23:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:23:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:23:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:23:00 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:23:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9b349b70-47cb-47f4-9639-e5c7dfede327 does not exist
Oct 02 20:23:00 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a775afbd-2ab7-4a1d-8f8c-de6bbc17317a does not exist
Oct 02 20:23:01 compute-0 sudo[479607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:23:01 compute-0 sudo[479607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:23:01 compute-0 sudo[479607]: pam_unix(sudo:session): session closed for user root
Oct 02 20:23:01 compute-0 sudo[479632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:23:01 compute-0 sudo[479632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:23:01 compute-0 sudo[479632]: pam_unix(sudo:session): session closed for user root
Oct 02 20:23:01 compute-0 ceph-mon[191910]: pgmap v2353: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:23:01 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:23:01 compute-0 openstack_network_exporter[372736]: ERROR   20:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:23:01 compute-0 openstack_network_exporter[372736]: ERROR   20:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:23:01 compute-0 openstack_network_exporter[372736]: ERROR   20:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:23:01 compute-0 openstack_network_exporter[372736]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:23:01 compute-0 openstack_network_exporter[372736]: ERROR   20:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
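The exporter errors above are often benign on an EDPM compute node: appctl-style tooling locates each daemon through its <name>.<pid>.ctl control socket under the daemon's run directory, and ovn-northd, for instance, runs on controllers rather than computes. A sketch of that lookup, with the usual default run directories assumed:

import glob
import os

RUN_DIRS = {
    "ovsdb-server": "/var/run/openvswitch",  # assumed default OVS rundir
    "ovn-northd": "/var/run/ovn",            # assumed default OVN rundir
}

for daemon, rundir in RUN_DIRS.items():
    socks = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
    if socks:
        print(daemon, "control socket:", socks[0])
    else:
        # This is the condition behind "no control socket files found" above
        print(daemon, "has no control socket files in", rundir)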
Oct 02 20:23:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:02 compute-0 nova_compute[355794]: 2025-10-02 20:23:02.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:03 compute-0 ceph-mon[191910]: pgmap v2354: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:23:03
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'images']
Oct 02 20:23:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
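The pgmap lines recur every second or two and are the handiest capacity signal in this journal. A small regex sketch, derived only from the format of the lines shown here, for extracting the figures:

import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v2355: 321 pgs: 321 active+clean; 118 MiB data, "
        "313 MiB used, 60 GiB / 60 GiB avail")
print(PGMAP.search(line).groupdict())
# {'ver': '2355', 'pgs': '321', 'data': '118 MiB', 'used': '313 MiB',
#  'avail': '60 GiB', 'total': '60 GiB'}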
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.309 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.311 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.324 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:23:04.325588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.390 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.392 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.393 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:23:04.394850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.429 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.430 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.430 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.432 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.433 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.434 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.437 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.438 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:23:04.432463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:23:04.435964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:23:04.438787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.475 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
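power.state reports volume: 1 for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77. Assuming the sample carries the raw libvirt domain state (1 is libvirt's VIR_DOMAIN_RUNNING), the value can be cross-checked directly with python3-libvirt on the compute node; the connection URI below is an assumption for a Nova/KVM host:

import libvirt  # python3-libvirt

conn = libvirt.openReadOnly('qemu:///system')   # URI assumed for a KVM compute node
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
state, reason = dom.state()
# VIR_DOMAIN_RUNNING == 1, matching "power.state volume: 1" above.
print(state, state == libvirt.VIR_DOMAIN_RUNNING)
conn.close()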
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.477 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
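Note how the "Pollster heartbeat update" lines are emitted by thread 14 while the matching "Updated heartbeat for ... (_update_status)" lines arrive slightly later from thread 12. A hypothetical queue-based sketch of that producer/consumer hand-off (beats, poller and status_writer are illustrations, not ceilometer internals):

import datetime
import queue
import threading

beats = queue.Queue()     # filled by the polling thread
status = {}               # read back by whatever reports agent health

def poller(name):
    beats.put((name, datetime.datetime.utcnow()))   # "Pollster heartbeat update: <name>"

def status_writer():
    while True:
        name, ts = beats.get()
        status[name] = ts                           # "Updated heartbeat for <name> (<ts>)"
        beats.task_done()

threading.Thread(target=status_writer, daemon=True).start()
poller('disk.device.write.requests')
beats.join()
print(status)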
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.479 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:23:04.477306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:23:04.480733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
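network.incoming.bytes.delta came back as 0 even though the cumulative network.incoming.bytes counter is non-zero further down, which is what a delta meter produces when the counter has not moved between two polls. A generic sketch of deriving a .delta value from a cumulative counter with a per-instance cache (hypothetical helper, not ceilometer code):

_last = {}   # previous cumulative reading per instance

def bytes_delta(instance_id, rx_bytes_now):
    prev = _last.get(instance_id)
    _last[instance_id] = rx_bytes_now
    if prev is None:
        return None                        # first cycle: nothing to diff against
    return max(rx_bytes_now - prev, 0)     # guard against counter resets

bytes_delta('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 2730)         # first poll
print(bytes_delta('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 2730))  # -> 0, as logged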
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.488 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:23:04.487597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.492 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.492 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:23:04.490148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:23:04.492147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.495 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:23:04.495728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.496 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:23:04.496811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:23:04.497656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
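disk.device.read.bytes, like the other disk.device.* meters above, yields exactly three volume lines, one per block device attached to the instance. With python3-libvirt the same per-device counters come from blockStats(); the device names below are assumptions, since the log only shows the instance UUID:

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
for dev in ('vda', 'vdb', 'vdc'):          # device names are assumptions
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
    print(dev, rd_bytes)                   # one disk.device.read.bytes sample per device
conn.close()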
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:23:04.498936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:23:04.500424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.501 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:23:04.501345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
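memory.usage reports 48.8515625, i.e. 50024 KiB expressed in MB. libvirt exposes the underlying counters in KiB via memoryStats(); whether the agent derives the value from the rss key or from another counter is not visible in this log, so the choice below is an assumption:

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
stats = dom.memoryStats()                  # all values in KiB
print(stats.get('rss', 0) / 1024.0)        # e.g. 50024 KiB / 1024 = 48.8515625 (assumed source)
conn.close()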
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:23:04.502889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.504 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:23:04.503801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:23:04.504649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:23:04.505464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:23:04.506642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:23:04.507497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 70540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
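The cpu sample is cumulative guest CPU time in nanoseconds (70540000000 ns, roughly 70.5 s here), so a utilisation percentage has to be derived from two polls. A sketch using python3-libvirt's dom.info(), whose fifth field is cpuTime in ns; the 10-second window and the UUID lookup mirror this instance but are otherwise arbitrary:

import time
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')

# dom.info() -> [state, maxMem, memory, nrVirtCpu, cpuTime(ns)]
c0, t0 = dom.info()[4], time.monotonic()   # e.g. 70540000000 ns, as in the log
time.sleep(10)
c1, t1 = dom.info()[4], time.monotonic()

vcpus = dom.info()[3]
util_pct = 100.0 * (c1 - c0) / ((t1 - t0) * 1e9 * vcpus)
print(round(util_pct, 2))
conn.close()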
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:23:04.508543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:23:04.509332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.510 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.514 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:23:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:23:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
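[Editor's note] The rbd_support module above is reloading its trash-purge and mirror-snapshot schedules for the vms, volumes, backups and images pools (the empty start_after appears to be a pagination cursor). A hedged way to inspect what it loaded, assuming the 'rbd' CLI of a recent Ceph release:

    # Sketch: list the schedules rbd_support just loaded, per pool (pool names from the log).
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for sched in ("trash purge schedule", "mirror snapshot schedule"):
            cmd = ["rbd", *sched.split(), "ls", "--pool", pool]
            out = subprocess.run(cmd, capture_output=True, text=True)
            print(pool, sched, out.stdout.strip() or "<none>")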
Oct 02 20:23:05 compute-0 nova_compute[355794]: 2025-10-02 20:23:05.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:05 compute-0 ceph-mon[191910]: pgmap v2355: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:06 compute-0 ceph-mon[191910]: pgmap v2356: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:07 compute-0 nova_compute[355794]: 2025-10-02 20:23:07.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:08 compute-0 podman[479658]: 2025-10-02 20:23:08.72477933 +0000 UTC m=+0.140052401 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 02 20:23:09 compute-0 ceph-mon[191910]: pgmap v2357: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:10 compute-0 nova_compute[355794]: 2025-10-02 20:23:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:10 compute-0 sshd-session[479677]: Accepted publickey for zuul from 192.168.122.10 port 51088 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 20:23:10 compute-0 systemd-logind[793]: New session 65 of user zuul.
Oct 02 20:23:10 compute-0 systemd[1]: Started Session 65 of User zuul.
Oct 02 20:23:10 compute-0 sshd-session[479677]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 20:23:10 compute-0 sudo[479681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 20:23:10 compute-0 sudo[479681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:23:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:11 compute-0 ceph-mon[191910]: pgmap v2358: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:12 compute-0 nova_compute[355794]: 2025-10-02 20:23:12.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:13 compute-0 ceph-mon[191910]: pgmap v2359: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
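[Editor's note] The pg_autoscaler lines follow a consistent pattern: pg target = capacity ratio * bias * (target PGs per OSD * OSD count), then quantized to a power of two. The logged values match the default mon_target_pg_per_osd = 100 with 3 OSDs on this node (an inference from the numbers, not stated in the log):

    # Sketch reproducing the pg_autoscaler arithmetic implied by the lines above.
    # Assumes mon_target_pg_per_osd = 100 and 3 OSDs; Ceph's real module has more branches.
    TARGET_PG_PER_OSD, NUM_OSDS = 100, 3

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(0.0005513950275118838, 1.0))  # vms -> 0.16541850825356513
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.000610470795...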
Oct 02 20:23:14 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15533 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:14 compute-0 ceph-mon[191910]: from='client.15533 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:14 compute-0 ceph-mon[191910]: pgmap v2360: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:14 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15535 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:15 compute-0 nova_compute[355794]: 2025-10-02 20:23:15.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 20:23:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928380905' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:23:15 compute-0 ceph-mon[191910]: from='client.15535 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3928380905' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:23:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:16 compute-0 ceph-mon[191910]: pgmap v2361: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:18 compute-0 nova_compute[355794]: 2025-10-02 20:23:18.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:18 compute-0 podman[479934]: 2025-10-02 20:23:18.718573943 +0000 UTC m=+0.125977943 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:23:18 compute-0 podman[479935]: 2025-10-02 20:23:18.758948073 +0000 UTC m=+0.168879669 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct 02 20:23:19 compute-0 ceph-mon[191910]: pgmap v2362: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:19 compute-0 nova_compute[355794]: 2025-10-02 20:23:19.600 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:19 compute-0 nova_compute[355794]: 2025-10-02 20:23:19.600 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:23:19 compute-0 nova_compute[355794]: 2025-10-02 20:23:19.601 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:23:20 compute-0 ovs-vsctl[480013]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 20:23:20 compute-0 nova_compute[355794]: 2025-10-02 20:23:20.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:20 compute-0 nova_compute[355794]: 2025-10-02 20:23:20.199 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:23:20 compute-0 nova_compute[355794]: 2025-10-02 20:23:20.201 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:23:20 compute-0 nova_compute[355794]: 2025-10-02 20:23:20.201 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:23:20 compute-0 nova_compute[355794]: 2025-10-02 20:23:20.202 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:23:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:23:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634748824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:23:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:23:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634748824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:23:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/634748824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:23:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/634748824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:23:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:21 compute-0 ceph-mon[191910]: pgmap v2363: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:21 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 20:23:21 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 20:23:21 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.809 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
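[Editor's note] The instance_info_cache payload in the previous line is ordinary JSON; a short sketch (data copied from that line, trimmed to the relevant fields) of pulling out the addresses it carries:

    # Sketch: extract fixed and floating IPs from the network_info logged above.
    network_info = [{
        "address": "fa:16:3e:6b:e8:fe",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.37",
            "floating_ips": [{"address": "192.168.122.205"}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], floating)
    # -> fa:16:3e:6b:e8:fe 192.168.0.37 ['192.168.122.205']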
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.848 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.849 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.850 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.850 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.851 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.852 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.886 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.887 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.888 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.888 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:23:21 compute-0 nova_compute[355794]: 2025-10-02 20:23:21.889 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:23:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:23:22 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533114951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.408 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
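[Editor's note] Nova's resource tracker shells out to `ceph df` (0.520 s here) to size the RBD backend. A sketch of the same probe; the key names follow common `ceph df --format=json` output and may differ across releases:

    # Sketch: replicate nova's "ceph df" probe and report cluster capacity.
    import json, subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)  # ~60 GiB per the pgmap lines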
Oct 02 20:23:22 compute-0 ceph-mon[191910]: pgmap v2364: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1533114951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:23:22 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: cache status {prefix=cache status} (starting...)
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.622 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.623 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.623 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:23:22 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: client ls {prefix=client ls} (starting...)
Oct 02 20:23:22 compute-0 lvm[480357]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 20:23:22 compute-0 lvm[480357]: VG ceph_vg1 finished
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.937 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.939 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.940 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:23:22 compute-0 nova_compute[355794]: 2025-10-02 20:23:22.940 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:23:22 compute-0 lvm[480379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 20:23:22 compute-0 lvm[480379]: VG ceph_vg0 finished
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.019 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.019 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.020 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.074 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:23:23 compute-0 lvm[480428]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 20:23:23 compute-0 lvm[480428]: VG ceph_vg2 finished
Oct 02 20:23:23 compute-0 kernel: block loop4: the capability attribute has been deprecated.
Oct 02 20:23:23 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 20:23:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:23:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276864750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.558 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.569 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.594 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
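[Editor's note] The unchanged inventory record above is what placement uses to compute schedulable capacity, capacity = (total - reserved) * allocation_ratio (placement's standard formula); applied to the logged numbers:

    # Applying placement's capacity formula to the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2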
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.596 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:23:23 compute-0 nova_compute[355794]: 2025-10-02 20:23:23.596 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:23:23 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3276864750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:23:23 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 20:23:23 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 20:23:23 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15547 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:23 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 20:23:24 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 20:23:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:24 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 20:23:24 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15549 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:24 compute-0 nova_compute[355794]: 2025-10-02 20:23:24.321 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:24 compute-0 nova_compute[355794]: 2025-10-02 20:23:24.322 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:24 compute-0 nova_compute[355794]: 2025-10-02 20:23:24.322 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 20:23:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235050633' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 20:23:24 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 20:23:24 compute-0 ceph-mon[191910]: from='client.15547 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:24 compute-0 ceph-mon[191910]: pgmap v2365: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:24 compute-0 ceph-mon[191910]: from='client.15549 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3235050633' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 20:23:24 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 20:23:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:23:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614737926' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: ops {prefix=ops} (starting...)
Oct 02 20:23:25 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15557 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:25 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:23:25.128+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:23:25 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:23:25 compute-0 nova_compute[355794]: 2025-10-02 20:23:25.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 20:23:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537887146' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 20:23:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2188161928' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3614737926' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mon[191910]: from='client.15557 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/537887146' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2188161928' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 20:23:25 compute-0 podman[480772]: 2025-10-02 20:23:25.70278618 +0000 UTC m=+0.111555300 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:23:25 compute-0 podman[480774]: 2025-10-02 20:23:25.748037726 +0000 UTC m=+0.163946092 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64)
Oct 02 20:23:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 20:23:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/740682174' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 20:23:25 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: session ls {prefix=session ls} (starting...)
Oct 02 20:23:26 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: status {prefix=status} (starting...)
Oct 02 20:23:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 20:23:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510038462' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:26 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15567 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 20:23:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046518495' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:23:26 compute-0 nova_compute[355794]: 2025-10-02 20:23:26.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/740682174' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3510038462' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mon[191910]: pgmap v2366: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:26 compute-0 ceph-mon[191910]: from='client.15567 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1046518495' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15571 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 20:23:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159265276' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 20:23:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686792619' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 20:23:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188371577' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: from='client.15571 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2159265276' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/686792619' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 20:23:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1188371577' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:23:27 compute-0 podman[481037]: 2025-10-02 20:23:27.731977551 +0000 UTC m=+0.164469405 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 20:23:27 compute-0 podman[481057]: 2025-10-02 20:23:27.732010452 +0000 UTC m=+0.136026626 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 20:23:27 compute-0 podman[481042]: 2025-10-02 20:23:27.750135723 +0000 UTC m=+0.166768895 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 20:23:27 compute-0 podman[481064]: 2025-10-02 20:23:27.753165832 +0000 UTC m=+0.153866010 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 20:23:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 20:23:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1924143260' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 20:23:27 compute-0 podman[481066]: 2025-10-02 20:23:27.771274442 +0000 UTC m=+0.162703839 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:23:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 20:23:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535215590' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 20:23:28 compute-0 nova_compute[355794]: 2025-10-02 20:23:28.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:28 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15583 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 20:23:28 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:23:28.158+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 20:23:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 20:23:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197574664' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 20:23:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1308947180' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1924143260' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/535215590' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: from='client.15583 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: pgmap v2367: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/197574664' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1308947180' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 20:23:28 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15589 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 20:23:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/594074815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:29 compute-0 nova_compute[355794]: 2025-10-02 20:23:29.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:29 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15595 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 20:23:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70603066' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: from='client.15589 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/594074815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/70603066' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:23:29 compute-0 podman[157186]: time="2025-10-02T20:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:23:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:23:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9121 "" "Go-http-client/1.1"
Oct 02 20:23:30 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15599 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:30 compute-0 nova_compute[355794]: 2025-10-02 20:23:30.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:36.611175+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 8208384 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:37.611587+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 8208384 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:38.611924+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 8208384 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:39.612174+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 8208384 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:40.612519+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:41.612728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:42.613061+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:43.613429+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:44.613728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:45.614133+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:46.614508+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:47.614825+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 20:23:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939876340' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:48.615130+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:49.615409+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:50.615756+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:51.616054+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:52.616426+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:53.616726+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:54.617099+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:55.617534+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:56.617778+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:57.618072+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:58.618360+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:59.618774+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:00.621499+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:01.621803+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:02.622017+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:03.622516+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:04.622751+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:05.623067+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:06.623495+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:07.623882+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:08.624113+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:09.624349+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:10.624579+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:11.624809+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x14d49dc/0x159d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:12.625090+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104990 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 8282112 heap: 93790208 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:13.625421+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 86.432342529s of 86.568977356s, submitted: 18
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563964e4e800 session 0x5639675c8960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583a400 session 0x563964fca1e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6000 session 0x563964fca000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 14934016 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563964e4e800 session 0x563964ef4960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6800 session 0x563964ef43c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb01b000/0x0/0x4ffc00000, data 0x1b3a9dc/0x1c03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:14.625742+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6400 session 0x563964ef5680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 14934016 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583ac00 session 0x5639656eab40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583a400 session 0x563967403c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563964e4e800 session 0x56396581f860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:15.626206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 14950400 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:16.626366+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 14950400 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb01b000/0x0/0x4ffc00000, data 0x1b3a9dc/0x1c03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583ac00 session 0x563965812b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:17.626715+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154469 data_alloc: 234881024 data_used: 10403840
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 14950400 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6400 session 0x5639675c83c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:18.627202+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6800 session 0x5639675c92c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639669b7800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639669b7800 session 0x563966a84b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 15343616 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:19.627543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 15269888 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:20.627919+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 15269888 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:21.628214+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 15065088 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:22.628534+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168823 data_alloc: 234881024 data_used: 11583488
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 12787712 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:23.628842+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88694784 unmapped: 11911168 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:24.629042+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.022837639s of 11.171451569s, submitted: 15
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8568832 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:25.629872+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [0,0,1])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 8486912 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:26.630357+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 7340032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:27.630670+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 7323648 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:28.630901+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 7323648 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:29.631247+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:30.631521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:31.631847+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:32.632128+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:33.632450+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:34.632758+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:35.633106+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:36.633589+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:37.633884+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:38.634182+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:39.634544+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:40.635000+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93315072 unmapped: 7290880 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:41.635461+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:42.635805+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:43.636114+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:44.636536+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:45.636971+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:46.637307+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:47.637638+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:48.637906+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:49.638224+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:50.638620+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:51.638858+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:52.639221+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208167 data_alloc: 234881024 data_used: 17117184
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:53.639587+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:54.639937+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 93323264 unmapped: 7282688 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:55.640455+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fafde000/0x0/0x4ffc00000, data 0x1b769ec/0x1c40000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.613363266s of 31.154979706s, submitted: 90
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 7847936 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:56.640818+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 6512640 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:57.641128+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faf3b000/0x0/0x4ffc00000, data 0x1c149ec/0x1cde000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221631 data_alloc: 234881024 data_used: 17412096
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 6299648 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:58.641431+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 3399680 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:59.641865+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 3399680 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:00.642155+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 3399680 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:01.642666+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 3399680 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:02.642953+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 3366912 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:03.644298+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97271808 unmapped: 3334144 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:04.644809+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97271808 unmapped: 3334144 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:05.645237+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97271808 unmapped: 3334144 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:06.645522+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 3325952 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:07.645765+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 3325952 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:08.646107+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:09.646351+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:10.646679+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:11.646970+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:12.647339+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:13.647656+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:14.647905+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:15.648324+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:16.648720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:17.648990+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:18.649325+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:19.649946+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:20.650197+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:21.650599+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:22.651077+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:23.651637+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:24.651887+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:25.652660+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:26.652986+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:27.653681+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228967 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:28.654088+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 3293184 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:29.654673+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.222660065s of 33.421955109s, submitted: 39
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 3276800 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:30.655469+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 3276800 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:31.655949+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 3276800 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:32.656457+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 3276800 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229143 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:33.656842+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 3276800 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:34.657058+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:35.657737+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:36.658238+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:37.658596+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d82000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229143 data_alloc: 234881024 data_used: 17313792
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:38.659034+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:39.659314+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 3268608 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.233178139s of 10.241373062s, submitted: 1
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:40.659819+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:41.660132+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:42.660425+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:43.660707+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:44.661060+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:45.661671+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:46.661968+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:47.662236+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:48.662719+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:49.663159+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:50.663541+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:51.663925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:52.664348+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:53.664906+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:54.665317+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:55.665917+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:56.666331+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:57.666540+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:58.666994+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:59.667291+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:00.667583+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:01.667976+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:02.668544+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:03.668938+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:04.669314+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:05.669856+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:06.670271+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:07.670692+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:08.671032+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:09.671748+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:10.672177+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:11.672644+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:12.672825+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:13.673261+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:14.673638+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:15.674134+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:16.674430+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:17.674763+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 3137536 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:18.675033+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:19.675523+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:20.675900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:21.676266+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:22.676519+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:23.676761+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:24.676959+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:25.677230+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:26.677459+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:27.677943+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:28.678263+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:29.678517+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:30.678944+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:31.679290+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:32.680414+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:33.680728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:34.681031+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:35.681567+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:36.681904+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:37.682277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:38.682674+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:39.682923+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:40.683275+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:41.683657+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:42.684073+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:43.684840+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:44.685004+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:45.685280+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:46.685564+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:47.685895+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:48.686282+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:49.686585+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:50.686824+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:51.687155+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:52.687631+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:53.688013+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:54.688421+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:55.688888+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:56.689298+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:57.689604+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:58.689936+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:59.690196+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:00.690649+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:01.691023+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:02.691353+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:03.691844+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:04.692218+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:05.692631+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 3129344 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:06.693059+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:07.693529+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:08.693866+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:09.694172+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:10.694418+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:11.694870+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:12.695280+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:13.695662+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:14.695945+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:15.696452+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:16.696797+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:17.697106+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:18.697660+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:19.698149+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:20.698614+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:21.698970+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:22.699488+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:23.699901+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:24.700325+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:25.700685+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:26.700982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:27.701202+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:28.701731+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:29.702084+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:30.702491+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:31.702922+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:32.703121+0000)
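
Every entry in this stretch carries the same syslog timestamp (20:23:30), which suggests journald flushed a buffered backlog in one burst. The real cadence survives in the _check_auth_rotating expiry stamps, which advance by almost exactly one second per tick. A sketch using three stamps from the entries above:

    # Recover the tick interval from consecutive rotating-secret expiry stamps.
    from datetime import datetime

    stamps = [
        "2025-10-02T19:54:30.702491+0000",
        "2025-10-02T19:54:31.702922+0000",
        "2025-10-02T19:54:32.703121+0000",
    ]
    ts = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    for a, b in zip(ts, ts[1:]):
        print(f"tick interval: {(b - a).total_seconds():.4f} s")
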
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:33.703440+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:34.703693+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:35.704076+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:36.704466+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97484800 unmapped: 3121152 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:37.704798+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:38.705193+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:39.705521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:40.705966+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:41.706457+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:42.706747+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:43.706982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:44.707268+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:45.707835+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:46.708244+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:47.708620+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:48.708917+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:49.709124+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:50.709554+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:51.710037+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:52.710369+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:53.710740+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97492992 unmapped: 3112960 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:54.711058+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:55.711590+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:56.711966+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:57.712307+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:58.712740+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226775 data_alloc: 234881024 data_used: 17305600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:59.713142+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 3104768 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639655a7000 session 0x563964ef83c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639655d2800 session 0x563964d21e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583a800 session 0x5639674081e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets getting new tickets!
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:00.713553+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _finish_auth 0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:00.714914+0000)
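
This burst breaks the steady tick pattern: three monitor-facing connections are reset, an inbound auth challenge arrives, the next _check_auth_tickets decides it is time for new tickets, a request goes to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 reports completion (reading the 0 as a success status is an inference). The connection that carried the challenge, con 0x5639683e6800, is itself reset a few entries later. When auditing a longer journal for these renewals, a filter like the following is handy; the script name is hypothetical and the marker strings are copied from the entries above:

    # extract_renewal.py (hypothetical name); usage, with the unit name filled in:
    #   journalctl -u <osd unit> | python3 extract_renewal.py
    import re
    import sys

    markers = re.compile(
        r"ms_handle_reset|handle_auth_request|getting new tickets"
        r"|_send_mon_message|_finish_auth"
    )
    for entry in sys.stdin:
        if markers.search(entry):
            sys.stdout.write(entry)
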
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97509376 unmapped: 3096576 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c289ec/0x1cf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 141.229934692s of 141.257766724s, submitted: 8
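
The _kv_sync_thread utilization line quantifies how quiet this OSD is: nearly the whole measurement window was idle, with only eight submissions. In round numbers:

    # Idle fraction and submission rate from the _kv_sync_thread line above.
    idle, window, submitted = 141.229934692, 141.257766724, 8
    print(f"idle {idle / window:.4%} of {window:.1f} s; "
          f"{submitted / window:.3f} submissions/s")
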
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:01.713911+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
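
Immediately after the kv sync and the connection resets, the tuner's view of the heap changes for the first time in this stretch: about 4.9 MiB moves from mapped to unmapped while the heap envelope stays fixed, which reads like pages being returned to the allocator's free list (an interpretation; the log does not say). Comparing the sample before the flush with the one above:

    # Diff the tune_memory sample before and after the kv sync.
    before = {"mapped": 97509376, "unmapped": 3096576, "heap": 100605952}
    after  = {"mapped": 92520448, "unmapped": 8085504, "heap": 100605952}
    for k in before:
        print(f"{k:8s} {after[k] - before[k]:+12,d} bytes")
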
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6800 session 0x5639690e6b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:02.714148+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:03.714422+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
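
The heartbeat's statfs moves in step with that release: roughly 13 MiB of object data disappears from the stored and allocated counters, and free space rises by a closely matching amount. Diffing the two samples (hex values copied from the heartbeat lines before and after the change):

    # Diff the store_statfs fields across the flush.
    MiB = 2**20

    avail_before, stored_before, alloc_before = 0x4f9d87000, 0x1c289ec, 0x1cf2000
    avail_after,  stored_after,  alloc_after  = 0x4faa85000, 0xf309c9,  0xff9000

    print(f"available:      {(avail_after - avail_before) / MiB:+.1f} MiB")
    print(f"data stored:    {(stored_after - stored_before) / MiB:+.1f} MiB")
    print(f"data allocated: {(alloc_after - alloc_before) / MiB:+.1f} MiB")
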
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:04.714753+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:05.715130+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:06.715619+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:07.716058+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:08.716319+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:09.716675+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:10.717165+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:11.717528+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:12.717922+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:13.718246+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:14.718583+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:15.719014+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:16.719464+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:17.719789+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:18.720111+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:19.720543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:20.720861+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:21.721299+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:22.721637+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:23.721972+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:24.722341+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:25.722836+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:26.723065+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:27.723258+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:28.723652+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:29.724082+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:30.724528+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:31.724941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:32.725256+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:33.725671+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:34.726012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:35.726626+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:36.726992+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:37.727353+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:38.727692+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:39.727979+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:40.728199+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:41.728604+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:42.728852+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:43.729348+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:44.729726+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:45.730076+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:46.730865+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:47.731184+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:48.731615+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:49.731904+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:50.732247+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:51.732623+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:52.732864+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:53.733226+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:54.733598+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:55.734028+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:56.734965+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:57.735206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:58.735583+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:59.735997+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:00.736623+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:01.736972+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:02.737346+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:03.737860+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:04.738264+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:05.738551+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:06.738959+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:07.739341+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92520448 unmapped: 8085504 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:08.739797+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:09.740242+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:10.740628+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:11.741028+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:12.741801+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:13.742097+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:14.743120+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:15.743584+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:16.743988+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563966fd2000 session 0x563967402960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639655a7000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:17.744312+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:18.744774+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:19.745146+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:20.745810+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:21.746229+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:22.746686+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:23.746906+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:24.747268+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:25.747708+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:26.748032+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:27.748455+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:28.748641+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:29.751355+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:30.751808+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:31.752045+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:32.752627+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:33.752996+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:34.753434+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:35.753894+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:36.754316+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:37.754725+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:38.755075+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:39.755318+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:40.755676+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:41.756093+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:42.756515+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:43.756919+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:44.757155+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:45.757617+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:46.757995+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:47.758338+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:48.758560+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:49.758969+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:50.759511+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:51.760843+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:52.761264+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:53.761690+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:54.762059+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:55.762565+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:56.762904+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:57.763217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:58.763581+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:59.763870+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077835 data_alloc: 234881024 data_used: 10407936
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:00.764192+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639669b7400 session 0x5639656d4d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 119.751098633s of 119.909606934s, submitted: 16
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639669b7c00 session 0x5639674910e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563963576000 session 0x563967408960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4faa85000/0x0/0x4ffc00000, data 0xf309c9/0xff9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639655d2800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 8069120 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:01.764420+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90013696 unmapped: 10592256 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:02.764683+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639655d2800 session 0x5639690e7c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:03.764941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0x85d944/0x924000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:04.765362+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004639 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:05.765723+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:06.766006+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:07.766317+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:08.766652+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:09.767264+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0x85d944/0x924000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004639 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:10.767726+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:11.768089+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:12.768522+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0x85d944/0x924000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:13.768877+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:14.769214+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004639 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:15.769564+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:16.769951+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:17.770510+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0x85d944/0x924000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:18.770829+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:19.771192+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004639 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:20.771482+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:21.771900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:22.772486+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:23.772885+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0x85d944/0x924000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:24.773255+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 90030080 unmapped: 10575872 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563964e4e800 session 0x563966a85a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.723873138s of 23.977312088s, submitted: 22
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x56396583ac00 session 0x563967514f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x5639683e6400 session 0x563964ee8780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004239 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:25.773789+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 14548992 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 ms_handle_reset con 0x563963576000 session 0x56396850e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:26.774174+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:27.774543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:28.774783+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:29.775012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:30.775330+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:31.775660+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:32.775981+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:33.776354+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:34.776746+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:35.777121+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:36.777546+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:37.777963+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:38.778357+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:39.778855+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:40.779258+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:41.779662+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:42.780070+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:43.780796+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:44.781204+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:45.781604+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:46.782149+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:47.782637+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:48.783056+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:49.783425+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:50.783721+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:51.784091+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:52.784545+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:53.784918+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:54.785262+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:55.785672+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:56.786059+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:57.786331+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:58.786691+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:59.786997+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892640 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:00.787368+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:01.787849+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 13484032 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb8af000/0x0/0x4ffc00000, data 0x109934/0x1cf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.516857147s of 37.667316437s, submitted: 24
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:02.788251+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87138304 unmapped: 13467648 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:03.788522+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87138304 unmapped: 13467648 heap: 100605952 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:04.788919+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 30212096 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fb0ad000/0x0/0x4ffc00000, data 0x909967/0x9d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:05.789290+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951027 data_alloc: 218103808 data_used: 323584
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 30212096 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:06.789568+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 30179328 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
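
handle_osd_map is where the map catch-up happens: the incoming message carries epochs [129,129], this OSD's newest map is 128, and the sender holds [1,129], so exactly one new map gets applied, and the surrounding lines switch their prefix from "osd.2 128" to "osd.2 129". The gap arithmetic, as a toy illustration (variable names are mine, not Ceph's):

    i_have, msg_first, msg_last = 128, 129, 129

    # Epochs this OSD still needs, and whether the message alone covers them.
    needed = range(i_have + 1, msg_last + 1)
    covered = msg_first <= i_have + 1
    print(list(needed), "covered by message:", covered)
    # -> [129] covered by message: True
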
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563964e4e800 session 0x563964ef52c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:07.790004+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 30171136 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa91d000/0x0/0x4ffc00000, data 0x1096ee4/0x1160000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:08.790352+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:09.791106+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:10.791552+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007441 data_alloc: 218103808 data_used: 331776
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:11.791863+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:12.792347+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:13.792900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa91d000/0x0/0x4ffc00000, data 0x1096ee4/0x1160000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:14.793113+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:15.793586+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007441 data_alloc: 218103808 data_used: 331776
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:16.794796+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639655d2800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 30375936 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639655d2800 session 0x563965dc10e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563963576000 session 0x563964b05e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:17.795209+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 21921792 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa91d000/0x0/0x4ffc00000, data 0x1096ee4/0x1160000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:18.795678+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.776172638s of 16.094839096s, submitted: 16
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563964e4e800 session 0x563966a843c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x56396583ac00 session 0x5639656f1a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639683e6400 session 0x563964ef7c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639669b7000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639669b7000 session 0x563964ef92c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 21815296 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563963576000 session 0x563964ef83c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:19.795923+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563964e4e800 session 0x563964ef81e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x56396583ac00 session 0x563964ef8960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639683e6400 session 0x5639656d4d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639669b7c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639669b7c00 session 0x5639656d43c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:20.796258+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048301 data_alloc: 218103808 data_used: 7135232
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:21.796598+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:22.797010+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563963576000 session 0x56396581e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:23.797303+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:24.797837+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:25.798350+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048301 data_alloc: 218103808 data_used: 7135232
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:26.798702+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:27.799151+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:28.799657+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:29.799948+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:30.800169+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057101 data_alloc: 218103808 data_used: 8376320
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:31.800363+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:32.800620+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 21790720 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:33.801065+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:34.801517+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:35.801918+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057261 data_alloc: 218103808 data_used: 8380416
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:36.802149+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:37.802660+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:38.803027+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:39.803601+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:40.803792+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057261 data_alloc: 218103808 data_used: 8380416
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:41.804012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e0ee4/0x12aa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:42.804650+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:43.804882+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:44.805206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 21782528 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:45.805987+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x563964e4e800 session 0x563966633e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x56396583ac00 session 0x563966a25e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639683e6400 session 0x5639690e43c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057261 data_alloc: 218103808 data_used: 8380416
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639669b7400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 21798912 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.796934128s of 27.857275009s, submitted: 10
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:46.806521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa91a000/0x0/0x4ffc00000, data 0x109aee4/0x1164000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 22298624 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 ms_handle_reset con 0x5639669b7400 session 0x56396850fc20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:47.806896+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 22298624 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:48.807282+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 22298624 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:49.807631+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa91e000/0x0/0x4ffc00000, data 0x1096ee4/0x1160000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 22298624 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:50.808217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033849 data_alloc: 218103808 data_used: 7139328
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 22241280 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:51.808611+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 130 ms_handle_reset con 0x563963576000 session 0x563964b08d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:52.808957+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:53.809534+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:54.810068+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fb8a8000/0x0/0x4ffc00000, data 0x10d082/0x1d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:55.810664+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905766 data_alloc: 218103808 data_used: 344064
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:56.811080+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:57.811594+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 28917760 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:58.812277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 29057024 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:59.812666+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 29057024 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:00.813109+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.060924530s of 14.266002655s, submitted: 33
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a8000/0x0/0x4ffc00000, data 0x10d082/0x1d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909764 data_alloc: 218103808 data_used: 352256
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 29147136 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:01.813662+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:02.814044+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:03.814582+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:04.815009+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a5000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a6000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.815512+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a6000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910605 data_alloc: 218103808 data_used: 352256
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.815916+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 ms_handle_reset con 0x563964e4e800 session 0x563966ca7860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.816295+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.816690+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.817103+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.817669+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966835 data_alloc: 218103808 data_used: 360448
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.818020+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.818472+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.818814+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.819233+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.120377541s of 14.281970024s, submitted: 27
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 35962880 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.819519+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 133 ms_handle_reset con 0x56396583ac00 session 0x563964ef83c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917633 data_alloc: 218103808 data_used: 368640
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.819804+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.820095+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.820429+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.820759+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.821055+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917633 data_alloc: 218103808 data_used: 368640
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.821567+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.821882+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.822276+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.822728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.971796036s of 10.189674377s, submitted: 38
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.823051+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563964c64000 auth_method 0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.823480+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.823827+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.824080+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.824499+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.824819+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.825076+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.825313+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.825635+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.825992+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.826469+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.826937+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.827492+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.828497+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.828925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.829347+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.829840+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.830195+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.830591+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.830921+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.831521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.831987+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.832360+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.832831+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.833097+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.833760+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.834140+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.834618+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.835081+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.835568+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.835929+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.836130+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.836608+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.836986+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.837202+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7023 writes, 27K keys, 7023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7023 writes, 1462 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 644 writes, 1936 keys, 644 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s
                                            Interval WAL: 644 writes, 290 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.837602+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.838006+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.838560+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.838953+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.839206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.839711+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.840150+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.840648+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.841017+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.841528+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.841909+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
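
Each osd.2 heartbeat line embeds a store_statfs triple in hex. A minimal sketch decoding it; reading the triple as available / internally reserved / total bytes is my assumption about the field order, while the values themselves are copied from the line above:

    # Minimal sketch: decode the hex triple in the heartbeat's
    # store_statfs block (field naming is an assumption; values are
    # verbatim from the log).
    avail, reserved, total = 0x4fb89c000, 0x0, 0x4ffc00000

    GiB = 2**30
    print(f"total: {total / GiB:.2f} GiB, available: {avail / GiB:.2f} GiB")
    # -> total: 20.00 GiB, available: 19.93 GiB
    print(f"used:  {(total - avail) / 2**20:.1f} MiB")
    # -> used:  67.4 MiB
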
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.842240+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.842541+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.842915+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.843189+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.843701+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.843985+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.844327+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.844770+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.845019+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.845542+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.845879+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.846262+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.846588+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.846996+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.847533+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.847944+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.848304+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.848645+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.848989+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.849481+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.849890+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.850263+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.850799+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.851031+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.851364+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.851770+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.852149+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.852520+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.852720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.853108+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.853511+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.853725+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.854141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.854655+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.855034+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.855522+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.855914+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.856302+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.856694+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.857110+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.857678+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.858193+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.858666+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.859836+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.860471+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.861250+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.861776+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.862495+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.862980+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.863780+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.864370+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.865560+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.865950+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.866705+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.867113+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.867645+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.868294+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.868804+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.869180+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.869589+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.870004+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.870577+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.870935+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.871316+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.871824+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.872521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.873216+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.873715+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.874177+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.874612+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.874944+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.875480+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.875983+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.876553+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.192123413s of 120.216201782s, submitted: 15
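
The _kv_sync_thread utilization line above is the one non-periodic message in this burst; a minimal sketch turning its two durations (copied verbatim) into a busy percentage:

    # Minimal sketch: busy fraction of the kv sync thread over the
    # reported window; both durations are verbatim from the log line.
    idle, span = 120.192123413, 120.216201782
    busy = span - idle
    print(f"busy {busy:.3f}s of {span:.3f}s "
          f"({100 * busy / span:.3f}% busy, 15 transactions submitted)")
    # -> busy 0.024s of 120.216s (0.020% busy, 15 transactions submitted)
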
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 36470784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.876961+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 36470784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.877323+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 36438016 heap: 125304832 old mem: 2845415832 new mem: 2845415832
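
Across this burst the prioritycache tune_memory lines hold the identity mapped + unmapped == heap; a minimal sketch verifying it for three sampled lines (all values copied from the log, including the slight growth in mapped bytes starting above):

    # Minimal sketch: mapped + unmapped == heap for the tune_memory
    # samples in this burst (values verbatim from the log).
    samples = [
        (88801280, 36503552, 125304832),
        (88834048, 36470784, 125304832),
        (88866816, 36438016, 125304832),
    ]
    for mapped, unmapped, heap in samples:
        assert mapped + unmapped == heap
    print("mapped + unmapped == heap for all sampled lines")
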
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.877587+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.878058+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.878321+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.878798+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.879240+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.879482+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.879730+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.880183+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.880714+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.881120+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.881724+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.882131+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.882676+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.883137+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.914786+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.915121+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.915663+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.921729+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.921965+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.922511+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.923704+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.924240+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.924559+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.925034+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.925469+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.925817+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.926044+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.926332+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.926631+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.927075+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.927505+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.927980+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.928729+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.929142+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.929589+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.930007+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.930545+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.931008+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.931492+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.931970+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.932470+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.932851+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.933346+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.933869+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.934235+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.934676+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.934927+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.935561+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.935916+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.936500+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.937068+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.937470+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.938107+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.938646+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.939100+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.939633+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.940003+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.940497+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.940798+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.941275+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.941656+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.942024+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.942321+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.942681+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.942922+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.943183+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.943606+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.943913+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.944183+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.944621+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.944978+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.945300+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.946016+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.946478+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.946861+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.947141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.947611+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.764251709s of 164.363204956s, submitted: 90
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.948109+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 135 ms_handle_reset con 0x5639683e6400 session 0x563964c45680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.948531+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583a800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.948754+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb098000/0x0/0x4ffc00000, data 0x915836/0x9e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068030 data_alloc: 218103808 data_used: 385024
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 ms_handle_reset con 0x56396583a800 session 0x56396850e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.949129+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1585836/0x1655000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.949470+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.949905+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.950266+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.950687+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.950992+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.951464+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.951685+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.952044+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.952734+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.953153+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.953632+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.954018+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.954366+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.954873+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.955257+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.955492+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.955999+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.956561+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.957002+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.957526+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.957851+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.958842+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.959217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.959827+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.960242+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.960673+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.961175+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.961438+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.961795+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.962224+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.962549+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.962927+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.963261+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.963605+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.963952+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.964357+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.964848+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.965199+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.965539+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.965883+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.966238+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.966514+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.095813751s of 46.311756134s, submitted: 20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x563963576000 session 0x563964ee8000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.967088+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1588f63/0x165d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x563964e4e800 session 0x5639690e4960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x56396583ac00 session 0x5639690e5c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.967667+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080270 data_alloc: 218103808 data_used: 405504
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.968054+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x158ab11/0x165f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 138 ms_handle_reset con 0x5639683e6400 session 0x563965dc10e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.968525+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.968849+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.969226+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x158ab01/0x165e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.969582+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.969925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079557 data_alloc: 218103808 data_used: 405504
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.970225+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6800 session 0x563964ef63c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964fcb2c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964b09c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.970687+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639656f1a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 36282368 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563967403860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804dc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804dc00 session 0x563966a84b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.971137+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.078349113s of 11.226018906s, submitted: 37
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964d21e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 36282368 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.971617+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964cff2c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa41d000/0x0/0x4ffc00000, data 0x158c564/0x1661000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 30121984 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.972169+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102291 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa41d000/0x0/0x4ffc00000, data 0x158c564/0x1661000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 30121984 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639686d3e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563965812d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.972433+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d800 session 0x563965be70e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563967fcba40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x563967fca1e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964ee8b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639674901e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x5639675c9860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563966ca70e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639656323c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x563965632000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964ee9680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.972786+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cda000/0x0/0x4ffc00000, data 0x1ccd5d6/0x1da4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.973217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.973593+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.973988+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171626 data_alloc: 218103808 data_used: 7225344
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.974638+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.974941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cda000/0x0/0x4ffc00000, data 0x1ccd5d6/0x1da4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.975697+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639690e4b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.976254+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639675c92c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.976695+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172916 data_alloc: 218103808 data_used: 7229440
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 29294592 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.976937+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 29294592 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.977186+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96215040 unmapped: 29089792 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.977493+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.977681+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 28614656 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.978073+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.978336+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.978575+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.978816+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.979025+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.979218+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.979415+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.979700+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.980018+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.980570+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.980884+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.981113+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.981577+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.981938+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.982522+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.982933+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.983136+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.983760+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.984188+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.985089+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.985501+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.985946+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.986507+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.987079+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.987516+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.988072+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.988697+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639690e72c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x563964c443c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563966c894a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 27385856 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.988889+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999400 session 0x5639656ea000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.553535461s of 43.824424744s, submitted: 47
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966a241e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x56396738f2c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x563965be63c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x563964ef74a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999000 session 0x563967491a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 27230208 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.989078+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 102834176 unmapped: 22470656 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.989347+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 22446080 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.989728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315282 data_alloc: 234881024 data_used: 14438400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966ca7c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563966cb2f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x56396738fc20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f87a8000/0x0/0x4ffc00000, data 0x31f65ff/0x32ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.989937+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 23248896 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639686d2000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967fca780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.990336+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 23969792 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.990720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 23724032 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.991163+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 23339008 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967fca000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7eba000/0x0/0x4ffc00000, data 0x3ae5638/0x3bbd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639674090e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.991959+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 23339008 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563967490000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485351 data_alloc: 234881024 data_used: 15548416
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x5639674081e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.992529+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 24125440 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.992747+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.993109+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.993615+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563967013400 session 0x563966a24b40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.993911+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502228 data_alloc: 234881024 data_used: 18731008
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.994259+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.994703+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.994868+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108863488 unmapped: 20643840 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.995231+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 12419072 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.995487+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 11485184 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575828 data_alloc: 251658240 data_used: 28422144
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966a85c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967409c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.995681+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 11485184 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.924734116s of 18.826061249s, submitted: 212
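The _kv_sync_thread utilization lines report how busy BlueStore's key/value commit thread was over its sampling window. Here it was idle 17.92 s of an 18.83 s window while submitting 212 transactions, so the OSD is roughly 95% idle at the kv layer; the later samples in this burst (16, 9, 100, 12, and 13 submitted) tell the same story. The arithmetic:

    idle, window, submitted = 17.924734116, 18.826061249, 212
    busy = window - idle
    print(f"busy {busy:.2f}s ({100 * busy / window:.1f}% of window), "
          f"{submitted / window:.1f} commits/s")
    # busy 0.90s (4.8% of window), 11.3 commits/s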
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563964ef41e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.995965+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 18358272 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.996248+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.996642+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.996893+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.997230+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.997623+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.997894+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.998246+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.998509+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.998840+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.999138+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.999490+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.999900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.000221+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.000452+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.332810402s of 15.400432587s, submitted: 16
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.000739+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8474000/0x0/0x4ffc00000, data 0x31215f9/0x31f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.001227+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.001455+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.001741+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431762 data_alloc: 234881024 data_used: 18120704
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.002014+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.002295+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8472000/0x0/0x4ffc00000, data 0x31225f9/0x31fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.002609+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.002896+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.003204+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431142 data_alloc: 234881024 data_used: 18120704
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.003529+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.003846+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x5639690e65a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x5639673e8780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165545464s of 11.226758003s, submitted: 9
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.004040+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 14630912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x56396738ed20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a2b000/0x0/0x4ffc00000, data 0x3b635f9/0x3c3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.004226+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 14630912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396699ac00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.004533+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 16236544 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521206 data_alloc: 234881024 data_used: 18313216
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.004768+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 16203776 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a04000/0x0/0x4ffc00000, data 0x3b925f9/0x3c6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.005035+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.005436+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a04000/0x0/0x4ffc00000, data 0x3b925f9/0x3c6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.005668+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.005868+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521774 data_alloc: 234881024 data_used: 18370560
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x5639674905a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x563965be6780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.006060+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 16678912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964ef4f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.006309+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15603 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
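The single ceph-mgr line in this burst is the mgr's audit channel recording a dispatched orchestrator query: client.admin ran the equivalent of "ceph orch ps" against the mon-mgr target. The cmd field is JSON, which makes the audit channel straightforward to mine; a small sketch against this exact line:

    import json, re

    audit = ("log_channel(audit) log [DBG] : from='client.15603 -' "
             "entity='client.admin' cmd=[{\"prefix\": \"orch ps\", "
             "\"target\": [\"mon-mgr\", \"\"]}]: dispatch")

    entity = re.search(r"entity='([^']+)'", audit).group(1)
    cmd = json.loads(re.search(r"cmd=(\[.*\]):", audit).group(1))
    print(entity, cmd[0]["prefix"])   # client.admin orch ps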
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.006626+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.007008+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8297000/0x0/0x4ffc00000, data 0x2da7587/0x2e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.007362+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8297000/0x0/0x4ffc00000, data 0x2da7587/0x2e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358462 data_alloc: 234881024 data_used: 12562432
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.007784+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.008256+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.231465340s of 14.594692230s, submitted: 100
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.008504+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8296000/0x0/0x4ffc00000, data 0x2da8587/0x2e7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.008785+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.009109+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358690 data_alloc: 234881024 data_used: 12562432
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.009451+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8296000/0x0/0x4ffc00000, data 0x2da8587/0x2e7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.009811+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.010129+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.010544+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.010868+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563966c890e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563967409a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967514960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738e3c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698bc00 session 0x5639674083c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410492 data_alloc: 234881024 data_used: 12562432
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4f800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4f800 session 0x563966cb2f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x56396581eb40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.011196+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396581f4a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x56396738f680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.011602+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.011862+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.012107+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 27770880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.252738953s of 12.367694855s, submitted: 12
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698bc00 session 0x56396738ef00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x56396738f2c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.012515+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x56396738fc20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b37000/0x0/0x4ffc00000, data 0x3a60597/0x3b37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x56396738f860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738eb40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1461736 data_alloc: 234881024 data_used: 12562432
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.012721+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b37000/0x0/0x4ffc00000, data 0x3a60597/0x3b37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.013163+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563964ef4000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.013439+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.013809+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b36000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.014222+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463677 data_alloc: 234881024 data_used: 12566528
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.014638+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 27189248 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.014865+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563967409e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 26615808 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.015057+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.015359+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.015828+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b36000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507837 data_alloc: 234881024 data_used: 18718720
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.016147+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.345785141s of 11.441099167s, submitted: 13
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.016515+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109355008 unmapped: 26451968 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.016817+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26320896 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.017048+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 24870912 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.017309+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 24068096 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545725 data_alloc: 234881024 data_used: 23162880
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.017582+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 22740992 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.017968+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 22740992 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.018287+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.018686+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.019095+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568515 data_alloc: 234881024 data_used: 24100864
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.019503+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b31000/0x0/0x4ffc00000, data 0x3a655ba/0x3b3d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.019975+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.020344+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.020750+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b31000/0x0/0x4ffc00000, data 0x3a655ba/0x3b3d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 22642688 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.566030502s of 13.616303444s, submitted: 10
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.021131+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.021332+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568767 data_alloc: 234881024 data_used: 24100864
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.021507+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.021876+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.022100+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.022361+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738fe00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563966cb34a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563964ee9e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563964ee92c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.022662+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568767 data_alloc: 234881024 data_used: 24100864
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x563964ee83c0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x563964ee9c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967514960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563967515c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563967514000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.022941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.023364+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.023595+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.023797+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.023992+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607603 data_alloc: 234881024 data_used: 24100864
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f761b000/0x0/0x4ffc00000, data 0x3f7a5ca/0x4053000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.024206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x5639675141e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 22429696 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.024444+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639675154a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 22429696 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.024849+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f761b000/0x0/0x4ffc00000, data 0x3f7a5ca/0x4053000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 22421504 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.065951347s of 15.131469727s, submitted: 8
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.025055+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563967515a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 17645568 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.025272+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674438 data_alloc: 234881024 data_used: 24723456
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 17154048 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x5639690e6d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396699ac00 session 0x56396738e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.025558+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 16695296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639675c8d20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.025753+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f6da1000/0x0/0x4ffc00000, data 0x3d5e5fd/0x3e39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 16564224 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.026147+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 14352384 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.026439+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 13623296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.026737+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1618935 data_alloc: 234881024 data_used: 26521600
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 13623296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563967514f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x5639686d30e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.027068+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 16818176 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.027325+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 16809984 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.027542+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7d4b000/0x0/0x4ffc00000, data 0x384a5ba/0x3922000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 12681216 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.293957710s of 10.008902550s, submitted: 192
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.027744+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 11329536 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.027990+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1615324 data_alloc: 234881024 data_used: 23351296
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7468000/0x0/0x4ffc00000, data 0x412d5ba/0x4205000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 11272192 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.028261+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 12214272 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.028658+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 12206080 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.028871+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 12206080 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f743c000/0x0/0x4ffc00000, data 0x415a5ba/0x4232000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.029239+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.029600+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619744 data_alloc: 234881024 data_used: 23511040
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f743c000/0x0/0x4ffc00000, data 0x415a5ba/0x4232000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.029972+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.030322+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.030684+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964ef4f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x5639656d5680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 12214272 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.030936+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.840319633s of 10.184956551s, submitted: 30
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639690e6960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.031217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448260 data_alloc: 234881024 data_used: 15392768
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83fb000/0x0/0x4ffc00000, data 0x319b5ba/0x3273000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.031714+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.032061+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.032329+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.032758+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.033228+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448260 data_alloc: 234881024 data_used: 15392768
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83fb000/0x0/0x4ffc00000, data 0x319b5ba/0x3273000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.033645+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639656ead20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563967013c00 session 0x563967408960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.033989+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 19365888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.034343+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 36093952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.034743+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.779101372s of 10.012064934s, submitted: 33
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563966a25c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.034957+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416099 data_alloc: 234881024 data_used: 14102528
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8513000/0x0/0x4ffc00000, data 0x3080137/0x3159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.035312+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.035712+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.035994+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.036327+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.036661+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415367 data_alloc: 234881024 data_used: 14098432
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8513000/0x0/0x4ffc00000, data 0x3081137/0x315a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 36020224 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.036938+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.037265+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 36020224 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x563964ef41e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563967013c00 session 0x563964291860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x5639656321e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x563965632780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.037571+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 36118528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967408000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563964ef45a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639656d4f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.037803+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 36102144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.746902466s of 10.012772560s, submitted: 50
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x5639674914a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563967013c00 session 0x5639673e8960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x5639673e81e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639690e70e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.038232+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 42041344 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324498 data_alloc: 218103808 data_used: 7229440
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x5639656485a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.038731+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 34693120 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x563966cb3c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8b6c000/0x0/0x4ffc00000, data 0x2a2a166/0x2b02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.039062+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 34037760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x563964ef6f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967490960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.039355+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 33488896 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639674901e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563964b04000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x563964ef4780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x5639673e9c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563964f4fc20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.040308+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563967408780
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.040810+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8880000/0x0/0x4ffc00000, data 0x2d16166/0x2dee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639673e9680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376443 data_alloc: 234881024 data_used: 14053376
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.041189+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x563966a254a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998400 session 0x563966a250e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.042158+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8880000/0x0/0x4ffc00000, data 0x2d16166/0x2dee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.042588+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 33800192 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563964f4e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639673e8f00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.042774+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 34734080 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.598022461s of 10.010197639s, submitted: 60
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563967409a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.043094+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316803 data_alloc: 234881024 data_used: 14163968
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.043426+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.044285+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.044690+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.045000+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.045425+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.045841+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.046081+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.046459+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.046711+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.046988+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.047446+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.047781+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.048082+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.048491+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.048898+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.049308+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.049751+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.050083+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.050520+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.050692+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.051110+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.051617+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.052036+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.053279+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.053646+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.054019+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.054686+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.054912+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 34611200 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.055243+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.055661+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.056094+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.056555+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.056926+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.057272+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.907386780s of 34.980773926s, submitted: 12
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 33161216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.057658+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 30949376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371919 data_alloc: 234881024 data_used: 15798272
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.058009+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8c0b000/0x0/0x4ffc00000, data 0x298c104/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.058331+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 31301632 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.058765+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121167872 unmapped: 31424512 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.059117+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.059588+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8ab0000/0x0/0x4ffc00000, data 0x2ae1104/0x2bb8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388143 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.059925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.060292+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.060720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.061001+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8ab0000/0x0/0x4ffc00000, data 0x2ae1104/0x2bb8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.061328+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.984402657s of 10.894448280s, submitted: 67
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.061670+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.061860+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.062217+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.062603+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.062975+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.063469+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.063853+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.064249+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.064551+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.064925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.951239586s of 10.992996216s, submitted: 2
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.065274+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.065713+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.066124+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.066555+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.066801+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386587 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.067181+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.067548+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.067783+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.068145+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.068549+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386587 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.069021+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.069288+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.069630+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.070011+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.040291786s of 13.055529594s, submitted: 2
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.070283+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.070574+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.070921+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.071299+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.071488+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.071845+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.072298+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.072684+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.073112+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.073596+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.073991+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.074485+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.074862+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.075275+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.075702+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.354290009s of 15.360252380s, submitted: 1
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.076112+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8697 writes, 34K keys, 8697 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8697 writes, 2105 syncs, 4.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1674 writes, 6961 keys, 1674 commit groups, 1.0 writes per commit group, ingest: 8.89 MB, 0.01 MB/s
                                            Interval WAL: 1674 writes, 643 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.076538+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.076912+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.077345+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.077693+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.078046+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.078568+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.078952+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: mgrc ms_handle_reset ms_handle_reset con 0x563966fe1c00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:23:30 compute-0 ceph-osd[208121]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563967013c00 auth_method 0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: mgrc handle_mgr_configure stats_period=5
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.079292+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.079619+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.079878+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.080200+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.080425+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.080844+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.081197+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.081573+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.081947+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.082211+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.082549+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.082804+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.083100+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.083440+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.083697+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.084073+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.084493+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.084686+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.084996+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.085205+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.085658+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.085989+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.086263+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.086669+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.087142+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.087718+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.088307+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.088706+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.089124+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.089566+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.089975+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.090551+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.090996+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.091485+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.091827+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.092236+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.092671+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.092973+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.093526+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.093955+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.094486+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.094942+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.095296+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.095698+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.096119+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.096365+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.096698+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.096969+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.097771+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.098900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.099181+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.099541+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.099766+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.100049+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.100494+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.100926+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.250640869s of 64.275962830s, submitted: 8
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.101167+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 31170560 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x56396581e960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804c800 session 0x563966a25860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563966a25c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.101668+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639690e6000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x56396738e1e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x2d80104/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.102109+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x2d80104/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411848 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.102550+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.102863+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.103157+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x563964f4ef00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396699b400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396699b400 session 0x563964ef6960
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.103465+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563964ef7a40
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.103933+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413686 data_alloc: 234881024 data_used: 16306176
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x563964ef74a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.104229+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.104534+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.105302+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 31703040 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.106798+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 31670272 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.107870+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 31670272 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421818 data_alloc: 234881024 data_used: 17362944
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.108693+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x5639655a7000 session 0x5639690e7680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967423000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.110033+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.110513+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.110827+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.111112+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433178 data_alloc: 234881024 data_used: 18976768
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.111623+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.111935+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.112214+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.379343033s of 20.399175644s, submitted: 16
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121020416 unmapped: 31571968 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.112651+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121077760 unmapped: 31514624 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.112902+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121077760 unmapped: 31514624 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.113127+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121110528 unmapped: 31481856 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.113487+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.113943+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.114511+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.114985+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.115242+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.115694+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.116133+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.116418+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.116969+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.117195+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.117490+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.117709+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.117899+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.118192+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.118407+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.118641+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.118857+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.119090+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.119551+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.754760742s of 22.446420670s, submitted: 108
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493018 data_alloc: 234881024 data_used: 19013632
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.119872+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124174336 unmapped: 28418048 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.120097+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124248064 unmapped: 28344320 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.120366+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x3716114/0x37ee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.120598+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.120790+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.121017+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.121251+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.121534+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.121773+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.122008+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.122244+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.122494+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.122733+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.122989+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.123368+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.123644+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.123913+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.124141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.124367+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.124687+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.048749924s of 20.456701279s, submitted: 60
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.125036+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.125552+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.125815+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.126103+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.126590+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.127041+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.127441+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.127720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.127957+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.128356+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.128624+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.128849+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.129098+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.129413+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.129650+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.129941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.130183+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.130823+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.131178+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.131663+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.131869+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.132199+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.132470+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.132766+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.133076+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.133366+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.133676+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.133961+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.134876+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.135272+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.135710+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.136046+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.136606+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.136982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.137170+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.137553+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.137783+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.138093+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.138269+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.138504+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.138719+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.138919+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.139232+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.139636+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.139915+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.140277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.141247+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.141563+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.141782+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.142143+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.142561+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.142880+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.143108+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.143306+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.143556+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.143906+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.144303+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.144673+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.145046+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.145348+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.145723+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.146052+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:09.146297+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:10.146654+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:11.146902+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 28303360 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:12.150782+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 28303360 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:13.151222+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:14.151672+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:15.152166+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:16.152653+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:17.153807+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:18.154351+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:19.154825+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:20.155277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:21.155835+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:22.156284+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:23.156907+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:24.157239+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:25.157621+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:26.157863+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:27.158353+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:28.159030+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:29.159500+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:30.160191+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:31.160710+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:32.161012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:33.161239+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:34.161895+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:35.162106+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:36.162640+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:37.163220+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:38.163515+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:39.163997+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:40.164629+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:41.165696+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:42.166691+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:43.167061+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:44.167521+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:45.167721+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:46.168140+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:47.168698+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:48.169034+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:49.169312+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:50.169669+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:51.169910+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:52.170151+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:53.170824+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:54.171139+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:55.171361+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:56.171622+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:57.171855+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:58.172098+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:59.172564+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:00.172815+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:01.173101+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:02.173508+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:03.173705+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:04.173885+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:05.174106+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:06.174343+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:07.174494+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:08.174692+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:09.174893+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:10.175107+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:11.175341+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd2400 session 0x563964d21860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781ec00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd2800 session 0x5639656d5e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781f000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd3800 session 0x5639656f1680
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966fd2800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:12.175509+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.175720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.175940+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:15.176182+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:16.176632+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:17.176860+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511972 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:18.177173+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:19.178543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 133.032196045s of 133.038803101s, submitted: 1
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.178910+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:21.179255+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:22.179635+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511888 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:23.179977+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:24.180274+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:25.180618+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:26.180925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:27.181225+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511888 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:28.181566+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:29.181925+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:30.182142+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:31.182579+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:32.182786+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512368 data_alloc: 234881024 data_used: 19898368
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:33.183100+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:34.183530+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:35.183857+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.845728874s of 15.874156952s, submitted: 2
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.184361+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.184878+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:38.186782+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:39.187108+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:40.187350+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:41.187964+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:42.188449+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:43.188690+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:44.188982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:45.189336+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:46.189946+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.190188+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.190411+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.190605+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.190848+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.191075+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.191272+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.191506+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.191738+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.920186996s of 18.941226959s, submitted: 2
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.191962+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.192783+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.193189+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.194277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.194677+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.194882+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.195103+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.195585+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.195807+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.196008+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.196239+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.196514+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.196753+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.196991+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.197268+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.197546+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.197824+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.198135+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.198445+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.198694+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.198918+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.199244+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.199568+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.200055+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.200489+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.200945+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.201329+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.201670+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.201988+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.202349+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.202851+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.203281+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.203792+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.204177+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.204618+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.204941+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.205212+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.205598+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.206009+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.206359+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.206732+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.207233+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.207675+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.207964+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.208342+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.208782+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.209151+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.209755+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.210262+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.210686+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.211083+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.211571+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.211952+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.212315+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.212565+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.213040+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.213589+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.214011+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.214492+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.215147+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.215579+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.216660+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.217071+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.217566+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.217900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.218707+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.219252+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.219891+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.220275+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.220928+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.221504+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.222243+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.222672+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.223158+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.223549+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.223982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.224566+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.224901+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.225207+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.225581+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.225959+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.226229+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.227091+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.228465+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.229496+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.231640+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.233315+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.234560+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.236096+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.238135+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.238958+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.239235+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.239659+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.240085+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.240468+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.240795+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.241226+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.241756+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.242360+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.242877+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.243345+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.243858+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.244268+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.244674+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.245345+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.246001+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.246535+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.246765+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.247051+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.247428+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.247779+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.248173+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.248428+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.248717+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.282867432s of 114.314605713s, submitted: 15
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.249064+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.249489+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.249848+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.250251+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.250513+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518308 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.251097+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.251312+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.251878+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.252161+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.252583+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.252787+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.253120+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.254501+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.254843+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.255231+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.255651+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.256103+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.256648+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.256959+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.257277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.257818+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 28049408 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.258277+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 28049408 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.258767+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.259365+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.259956+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.260327+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.260676+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.261120+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.261606+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.262010+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.262507+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.262930+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.263295+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.263506+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.264020+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.264779+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.265115+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.265621+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.265927+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124559360 unmapped: 28033024 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.266250+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.266619+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.266953+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.267190+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.267459+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.267717+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.267930+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.268193+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.268473+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.268852+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.269171+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.269421+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.269747+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.269987+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.270163+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.272319+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.272704+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.273056+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.273434+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.273845+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.274146+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.274571+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.274855+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.275102+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.279465+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.279922+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.280333+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.280741+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.281203+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.281669+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.282119+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.282637+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.282966+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.283309+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.283649+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.284099+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.284548+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.285021+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.285510+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.285964+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.286447+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.286881+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.287306+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.287679+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.287927+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.288282+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.288837+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.289268+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.289700+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.290173+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.290634+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.290956+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.291273+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.291687+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.294247+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.295003+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.295582+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.295994+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.296667+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.297034+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.297471+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.297835+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.298141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.298577+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.299057+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.299642+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.299906+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.300304+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.300673+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.301072+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.301605+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.302012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.302360+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.302867+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.303214+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.303464+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.303684+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.303934+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.304337+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.304729+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.305043+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.305361+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.305649+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.305845+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.306048+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.306623+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.306928+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.307141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.307601+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.307987+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.308343+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.308830+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.309256+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.309692+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.310232+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.310448+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.310776+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.311170+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.311597+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.311850+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.312194+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.312595+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.313042+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.313752+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.314156+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.314694+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.315002+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.315339+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.315855+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.316202+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.316474+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.316918+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.317262+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.317654+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.317922+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.318452+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.318870+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.319286+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.319861+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.320186+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.320741+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.321111+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.321530+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.321814+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.322179+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.322633+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.323128+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.323543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.324012+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.324427+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.324836+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.325476+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.325862+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.326206+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.326737+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.327506+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.328008+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.328584+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.329054+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.329493+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.329773+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.330043+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.330301+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.330614+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.330945+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.331653+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.332088+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.332549+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.333055+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.333354+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.334795+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.334992+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.335323+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9063 writes, 35K keys, 9063 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9063 writes, 2265 syncs, 4.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 366 writes, 1012 keys, 366 commit groups, 1.0 writes per commit group, ingest: 1.01 MB, 0.00 MB/s
                                            Interval WAL: 366 writes, 160 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.335591+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.335895+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.336184+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.336615+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.336895+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.337331+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.337571+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.337983+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.338302+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.338697+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.339111+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.339543+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.340052+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.340535+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.340928+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.341351+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.341867+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.342191+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.342654+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.342936+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.343235+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.343475+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.343694+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.343889+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.344259+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.344662+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.345019+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.345302+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.345754+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.346086+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.346539+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.346982+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.347462+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.347791+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.348259+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.348883+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.349257+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.349628+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.350125+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.350859+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.351248+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.351688+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.352005+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.352475+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.352875+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:46.353287+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.353664+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.353967+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.354366+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.354731+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.354993+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.355309+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.355624+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.355914+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.356142+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.356658+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.357074+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.357524+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.357741+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.358194+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563966a24000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639674081e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.358656+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781e000
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 27860992 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 252.500213623s of 252.522964478s, submitted: 2
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396781e000 session 0x563964291860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.359479+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.359949+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.360491+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.360877+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421904 data_alloc: 218103808 data_used: 17698816
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.361180+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.361615+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.361938+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.362300+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.362738+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421904 data_alloc: 218103808 data_used: 17698816
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.363122+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.363601+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.848731995s of 10.890914917s, submitted: 8
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563966cb21e0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x5639656eb860
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.363842+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967403c20
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.364091+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.364558+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305401 data_alloc: 218103808 data_used: 14086144
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.364966+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.365316+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9399000/0x0/0x4ffc00000, data 0x21fe104/0x22d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.366095+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9399000/0x0/0x4ffc00000, data 0x21fe104/0x22d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.366728+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.367087+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305401 data_alloc: 218103808 data_used: 14086144
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.367900+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.368461+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.599736214s of 10.729805946s, submitted: 23
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.368794+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.369202+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f939a000/0x0/0x4ffc00000, data 0x21fe0e1/0x22d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 42237952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 141 ms_handle_reset con 0x56396698a400 session 0x563966cb3e00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.369880+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x158fce5/0x1669000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 50585600 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250608 data_alloc: 218103808 data_used: 2621440
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.370413+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 142 ms_handle_reset con 0x56396698bc00 session 0x56396738f4a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 51306496 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.370681+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 51273728 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 143 ms_handle_reset con 0x563964e4e800 session 0x5639675145a0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.371082+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.371497+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 50151424 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.371934+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203663 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.372307+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa001000/0x0/0x4ffc00000, data 0x159342c/0x166d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563964c64000 auth_method 0
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.372757+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.373709+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.374720+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.375229+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203663 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.376023+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9ffd000/0x0/0x4ffc00000, data 0x1594eab/0x1670000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.149994850s of 13.364699364s, submitted: 190
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.376604+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.377281+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.377746+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:40.378129+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:41.378628+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:42.379073+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:43.379518+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:44.380053+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:45.380754+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:46.381279+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:47.381836+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:48.382227+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:49.382614+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:50.383017+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:51.383472+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:52.383826+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:53.384120+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:54.384557+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:55.384890+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:56.385602+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:57.385963+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:58.386364+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:59.386811+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:00.387256+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:01.387715+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:02.388151+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:03.388573+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:04.389167+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:05.389701+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:06.390157+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:07.390665+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:08.391011+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:09.391485+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:10.391847+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:11.392235+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:12.392642+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:13.392987+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:14.393466+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:15.393881+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:16.394341+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:17.394563+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:18.394970+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:19.395327+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:20.395707+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:21.396302+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:22.396571+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:23.397946+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:24.398336+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:25.398695+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:26.399125+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:27.399508+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:28.399767+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:29.400266+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:30.400815+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:31.401208+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:32.403097+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:33.403980+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:34.404971+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:35.405547+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:36.406126+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:37.406641+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:38.407074+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:39.407872+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:40.409075+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:41.409577+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:42.409830+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:43.410141+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:44.410556+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:45.410961+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:46.411304+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:47.411607+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:48.412004+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:49.412322+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:50.412592+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:51.413284+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:52.413560+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:53.413840+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:54.414053+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:55.414287+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:56.414548+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:30 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:30 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}'
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:57.414759+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 49938432 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:23:30 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:58.415013+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:23:30 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:59.415553+0000)
Oct 02 20:23:30 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 49856512 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:30 compute-0 ceph-osd[208121]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:23:30 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:23:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 20:23:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684446758' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: from='client.15595 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: from='client.15599 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/939876340' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: pgmap v2368: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/684446758' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.811713) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610811748, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1139, "num_deletes": 253, "total_data_size": 1580580, "memory_usage": 1605056, "flush_reason": "Manual Compaction"}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610831880, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1564216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47523, "largest_seqno": 48661, "table_properties": {"data_size": 1558682, "index_size": 2929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12476, "raw_average_key_size": 20, "raw_value_size": 1547294, "raw_average_value_size": 2520, "num_data_blocks": 130, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436512, "oldest_key_time": 1759436512, "file_creation_time": 1759436610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 20214 microseconds, and 4127 cpu microseconds.
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.831921) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1564216 bytes OK
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.831944) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.848489) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.848524) EVENT_LOG_v1 {"time_micros": 1759436610848516, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.848547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1575222, prev total WAL file size 1575222, number of live WAL files 2.
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.849303) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1527KB)], [113(9380KB)]
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610849343, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11169999, "oldest_snapshot_seqno": -1}
Oct 02 20:23:30 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15607 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6286 keys, 9399988 bytes, temperature: kUnknown
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610911832, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9399988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9358507, "index_size": 24667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164322, "raw_average_key_size": 26, "raw_value_size": 9245274, "raw_average_value_size": 1470, "num_data_blocks": 979, "num_entries": 6286, "num_filter_entries": 6286, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.912150) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9399988 bytes
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.915771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.5 rd, 150.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 6807, records dropped: 521 output_compression: NoCompression
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.915800) EVENT_LOG_v1 {"time_micros": 1759436610915788, "job": 68, "event": "compaction_finished", "compaction_time_micros": 62589, "compaction_time_cpu_micros": 23105, "output_level": 6, "num_output_files": 1, "total_output_size": 9399988, "num_input_records": 6807, "num_output_records": 6286, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610916332, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436610918744, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.849202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.918881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.918889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.918891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.918893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:30 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:23:30.918895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:23:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 20:23:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019155484' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15611 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:31 compute-0 openstack_network_exporter[372736]: ERROR   20:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:23:31 compute-0 openstack_network_exporter[372736]: ERROR   20:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:23:31 compute-0 openstack_network_exporter[372736]: ERROR   20:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:23:31 compute-0 openstack_network_exporter[372736]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:23:31 compute-0 openstack_network_exporter[372736]: ERROR   20:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:23:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 20:23:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963815470' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15615 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:31 compute-0 nova_compute[355794]: 2025-10-02 20:23:31.978 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:23:31 compute-0 ceph-mon[191910]: from='client.15603 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mon[191910]: from='client.15607 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3019155484' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mon[191910]: from='client.15611 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/963815470' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:23:32 compute-0 nova_compute[355794]: 2025-10-02 20:23:32.008 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Triggering sync for uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:23:32 compute-0 nova_compute[355794]: 2025-10-02 20:23:32.009 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:23:32 compute-0 nova_compute[355794]: 2025-10-02 20:23:32.010 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:23:32 compute-0 nova_compute[355794]: 2025-10-02 20:23:32.046 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:23:32 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15619 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:23:32.344 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:23:32.345 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:23:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:23:32.345 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:23:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 20:23:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065762942' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 20:23:33 compute-0 nova_compute[355794]: 2025-10-02 20:23:33.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:33 compute-0 ceph-mon[191910]: from='client.15615 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-mon[191910]: from='client.15619 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-mon[191910]: pgmap v2369: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1065762942' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15625 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:23:33.121+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:23:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 20:23:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211465872' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 20:23:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52514640' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:23:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:23:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 20:23:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730474965' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 20:23:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819481753' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 20:23:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778210567' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: from='client.15625 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1211465872' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/52514640' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3730474965' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1819481753' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 20:23:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634308042' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 20:23:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 20:23:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505643086' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 20:23:35 compute-0 nova_compute[355794]: 2025-10-02 20:23:35.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 20:23:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169097721' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 20:23:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884067817' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270112 data_alloc: 234881024 data_used: 17158144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:40.792607+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:41.792821+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9650000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:42.793083+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:43.793642+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9650000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:44.794585+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270112 data_alloc: 234881024 data_used: 17158144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:45.796037+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:46.797465+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:47.798983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9650000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:48.799854+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:49.800060+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 3768320 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9650000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.715040207s of 30.720905304s, submitted: 1
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270672 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:50.800500+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:51.800947+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:52.802140+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:53.803413+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:54.804935+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270672 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:55.806595+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:56.808369+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:57.810050+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:58.811824+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:59.813355+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270672 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:00.814837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:01.816445+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:02.817679+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:03.819024+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:04.820210+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270672 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:05.821278+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:06.822317+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:07.823330+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:08.824310+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:09.825138+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:10.826529+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270672 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:11.828337+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f964b000/0x0/0x4ffc00000, data 0x2368f90/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:12.829443+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:13.830686+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.201864243s of 23.218345642s, submitted: 8
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f1c00 session 0x563e2c5ec5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c501000 session 0x563e2bfc70e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f2800 session 0x563e2c3034a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e28ecac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e28ecac00 session 0x563e2a0ba1e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b410000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 3751936 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2b410000 session 0x563e2c5183c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f1c00 session 0x563e2ba69680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c501000 session 0x563e2ba685a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5c800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2bd5c800 session 0x563e2c6dd680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e28ecac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:14.831221+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e28ecac00 session 0x563e2a4f23c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b410000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2b410000 session 0x563e2a4f3860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f1c00 session 0x563e2c6c6f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 3751936 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c501000 session 0x563e2b0df4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:15.831542+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281802 data_alloc: 234881024 data_used: 17149952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 3751936 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:16.831930+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95ed000/0x0/0x4ffc00000, data 0x23ca002/0x2491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102752256 unmapped: 3719168 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:17.832314+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102752256 unmapped: 3719168 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:18.832916+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95ed000/0x0/0x4ffc00000, data 0x23ca002/0x2491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f5800 session 0x563e2c5ef4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e28ecac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:19.833155+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:20.833475+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283871 data_alloc: 234881024 data_used: 17154048
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 3710976 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:21.833791+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102793216 unmapped: 3678208 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:22.834229+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102793216 unmapped: 3678208 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:23.834542+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 3645440 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:24.834789+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.090365410s of 11.350853920s, submitted: 38
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102866944 unmapped: 3604480 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:25.835122+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 3579904 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:26.835563+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 3514368 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:27.835919+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:28.836278+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:29.836551+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:30.836926+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:31.837281+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:32.837590+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:33.837944+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:34.838175+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:35.839240+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:36.839593+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:37.840559+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:38.841025+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:39.841424+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:40.841774+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:41.842176+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:42.842428+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:43.842636+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:44.843061+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:45.843554+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:46.843833+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:47.844220+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:48.845208+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:49.846078+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:50.847143+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:51.848720+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:52.850329+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f95e9000/0x0/0x4ffc00000, data 0x23cd025/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:53.852105+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:54.853840+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 3506176 heap: 106471424 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:55.855745+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285231 data_alloc: 234881024 data_used: 17375232
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.447784424s of 31.034017563s, submitted: 90
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 4333568 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:56.856707+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8f62000/0x0/0x4ffc00000, data 0x263e025/0x2706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 4333568 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:57.856941+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 4177920 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:58.857487+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8f34000/0x0/0x4ffc00000, data 0x266c025/0x2734000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,1,0,0,3])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:59.857835+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:00.858212+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315079 data_alloc: 234881024 data_used: 17383424
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:01.858599+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:02.858942+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ee1000/0x0/0x4ffc00000, data 0x26c5025/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:03.859197+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 5758976 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:04.859614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:05.859980+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:06.860327+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:07.860777+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:08.861109+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:09.861546+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:10.861886+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:11.862158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:12.862604+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:13.862951+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:14.863228+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:15.863558+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:16.863799+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 5677056 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:17.864666+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:18.865198+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:19.865750+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:20.866875+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:21.867147+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:22.867756+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:23.868113+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:24.868569+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:25.868828+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:26.869095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:27.869687+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:28.870174+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.056991577s of 33.444557190s, submitted: 56
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:29.870430+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:30.871195+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:31.871462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:32.871728+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:33.872027+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:34.872517+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:35.873042+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:36.873588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:37.873881+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:38.874624+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:39.874896+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:40.875179+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:41.875475+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:42.875677+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:43.876107+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:44.876470+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:45.876905+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:46.877233+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:47.877587+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:48.877990+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:49.878462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:50.878843+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:51.879136+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:52.879606+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:53.880068+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:54.880553+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:55.880895+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:56.881306+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:57.881752+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:58.882216+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:59.882604+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:00.883034+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:01.883567+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:02.884006+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:03.884560+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:04.884927+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:05.885293+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:06.885768+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:07.886199+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:08.886683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:09.887107+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:10.888647+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:11.889007+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:12.889305+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 5668864 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:13.889725+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:14.890150+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:15.890646+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:16.891060+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:17.891532+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:18.891951+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:19.892171+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:20.892600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:21.893064+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:22.893344+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:23.893643+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:24.893857+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:25.894065+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:26.894305+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:27.894616+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:28.894963+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:29.895180+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:30.895598+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:31.895864+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 5660672 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:32.896221+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:33.897196+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:34.897646+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:35.898059+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:36.898335+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:37.898735+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:38.899045+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:39.899452+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:40.899973+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:41.900504+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 5652480 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:42.900846+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:43.901215+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:44.901601+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:45.902169+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:46.902440+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:47.902832+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:48.903311+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:49.903622+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:50.903876+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:51.904269+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:52.904704+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102924288 unmapped: 5644288 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:53.905155+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:54.905695+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:55.906100+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:56.906458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:57.906931+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:58.907561+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:59.908084+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:00.908509+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:01.908952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:02.909517+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:03.909879+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:04.910339+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:05.910817+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102932480 unmapped: 5636096 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:06.911076+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:07.911270+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:08.911664+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:09.912064+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:10.912600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:11.912976+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:12.913495+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:13.913837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:14.914095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:15.914530+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:16.914759+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:17.915147+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:18.915657+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:19.916004+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:20.916431+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:21.916694+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:22.917008+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102948864 unmapped: 5619712 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:23.917351+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:24.917791+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:25.918167+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:26.918580+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:27.918917+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:28.919478+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:29.919885+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:30.920119+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:31.920525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
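
The paired rocksdb commit_cache_size lines repeat the same two ratios throughout this window. Numerically they are the simple fractions 2/7 and 1/18 to the printed precision; that is an observation about the logged values only, not a claim about how the high-priority pool ratio is derived:

    # Check that the printed ratios match 2/7 and 1/18 within rounding.
    from fractions import Fraction
    for printed, guess in [(0.285714, Fraction(2, 7)), (0.0555556, Fraction(1, 18))]:
        print(printed, float(guess), abs(printed - float(guess)) < 5e-7)  # True, True
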
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:32.920850+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:33.921181+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:34.921513+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:35.921765+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102957056 unmapped: 5611520 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:36.922108+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:37.922276+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:38.922529+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:39.922729+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:40.922921+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:41.923259+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:42.923653+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:43.923827+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:44.924060+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:45.924545+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:46.924921+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:47.925243+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:48.925540+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:49.926003+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 5603328 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:50.926446+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 5595136 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:51.926830+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 5595136 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:52.927289+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102973440 unmapped: 5595136 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets getting new tickets!
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:53.927794+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _finish_auth 0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:53.930081+0000)
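
Just above is the one ticket renewal in this window: _check_auth_tickets decides the tickets need refreshing ("getting new tickets!"), a request goes to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 reports success (a zero return code). A sketch of the tick-driven loop that sequence implies; the renewal margin is an assumed constant for illustration, not a value taken from Ceph, and this is not the MonClient implementation:

    # Illustrative tick loop only -- not Ceph's MonClient code.
    RENEW_MARGIN = 30.0  # assumed: renew when tickets expire within 30 s

    def tick(now, ticket_expiry, send_mon_message):
        # every tick re-validates tickets, as the repeated
        # "_check_auth_tickets" lines above suggest
        if ticket_expiry - now < RENEW_MARGIN:
            print("getting new tickets!")
            send_mon_message("mon.compute-0", "v2:192.168.122.100:3300/0")

    tick(100.0, 110.0, lambda who, addr: print(f"_send_mon_message to {who} at {addr}"))
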
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:54.928063+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:55.928551+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:56.928753+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313083 data_alloc: 234881024 data_used: 17387520
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:57.929156+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:58.929521+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:59.929880+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f3c00 session 0x563e2d6054a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2984f400 session 0x563e2980b860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 150.448410034s of 150.455154419s, submitted: 1
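
The _kv_sync_thread line is a utilization report: the thread was idle for 150.448 of 150.455 seconds with a single submitted transaction, i.e. busy for only about 7 ms over a roughly 2.5-minute window:

    # Arithmetic on the utilization line above.
    idle, total, submitted = 150.448410034, 150.455154419, 1
    print(f"idle fraction: {idle / total:.6f}")   # 0.999955
    print(f"busy: {(total - idle) * 1000:.1f} ms for {submitted} submitted txn")
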
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2b8f3800 session 0x563e2d604000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2bfbb800 session 0x563e28c3c780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a000
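
The ms_handle_reset lines mark peer connections being torn down, and each handle_auth_request "added challenge" is the server side attaching a challenge to a re-connecting peer before authentication completes. A schematic of that challenge step, purely illustrative (this is not the cephx wire format):

    # Schematic challenge-response only; not the cephx protocol.
    import os, hashlib

    def add_challenge(con_state):
        con_state["challenge"] = os.urandom(16)  # server stores a nonce per connection
        return con_state["challenge"]

    def answer(challenge, shared_secret):
        # the connecting peer proves knowledge of the shared secret
        return hashlib.sha256(challenge + shared_secret).digest()

    con = {}
    ch = add_challenge(con)
    assert answer(ch, b"secret") == answer(con["challenge"], b"secret")
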
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:00.930235+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 5586944 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8ecc000/0x0/0x4ffc00000, data 0x26da025/0x27a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2bfba400 session 0x563e2c6c70e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c504c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2a193400 session 0x563e2aed41e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfba400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:01.930695+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 5570560 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c50a000 session 0x563e2c5ede00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:02.930977+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:03.931349+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:04.931783+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
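
This heartbeat's store_statfs differs from the earlier ones: available space ticks up from 0x4f8ecc000 to 0x4f93a7000 and the data figure drops from 0x26da025/0x27a2000 to 0x2201005/0x22c7000, i.e. a few MiB were freed. Decoding the hex fields, reading them as available / internally reserved / total, then data stored / allocated, which is my reading of the printout format rather than something stated in the log:

    # Decode the statfs fields from the heartbeats above.
    def gib(x): return x / (1 << 30)

    total = 0x4FFC00000
    print(f"total       {gib(total):.2f} GiB")                      # ~20.00 GiB
    print(f"available   {gib(0x4F8ECC000):.3f} -> {gib(0x4F93A7000):.3f} GiB")
    print(f"data stored {0x26DA025:,} -> {0x2201005:,} bytes")      # ~38.8 -> ~34.0 MiB
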
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:05.932204+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:06.932568+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:07.932840+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 5922816 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:08.933288+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 5914624 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:09.933630+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 5914624 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:10.933960+0000)
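
Note the timing pattern: every line in this burst carries the same journald timestamp (Oct 02 20:23:35), yet the rotating-secret expiry stamps advance by almost exactly one second per tick, e.g. 19:55:09.933630 to 19:55:10.933960 just above. That is consistent with roughly one monclient tick per second of process time, with the buffered messages flushed to the journal in a single burst:

    # Tick spacing from the two expiry stamps quoted above.
    from datetime import datetime
    a = datetime.fromisoformat("2025-10-02T19:55:09.933630+00:00")
    b = datetime.fromisoformat("2025-10-02T19:55:10.933960+00:00")
    print((b - a).total_seconds())   # ~1.00033 s
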
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 5914624 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:11.934337+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 5914624 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:12.934720+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:13.935141+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:14.935529+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:15.935937+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:16.936290+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:17.936710+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:18.937125+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:19.937519+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:20.937844+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:21.938172+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:22.938649+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:23.939013+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:24.939542+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:25.939918+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:26.940198+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:27.940588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 5906432 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:28.940974+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:29.941268+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:30.941620+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:31.942020+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:32.942340+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:33.942737+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:34.943218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:35.943707+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:36.944067+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:37.944554+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:38.944991+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 5898240 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:39.945503+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:40.945983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:41.946465+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:42.946918+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:43.947286+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:44.947683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:45.948049+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:46.948461+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:47.948694+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:48.948919+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:49.949287+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:50.949517+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:51.949861+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:52.950179+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:53.950502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:54.950950+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:55.951188+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:56.951641+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:57.952034+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:58.952532+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:59.952910+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:00.953311+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:01.953663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:02.953889+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:03.954324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:04.954747+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:05.955115+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:06.955548+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:07.955914+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:08.956226+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:09.956600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:10.956983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:11.957313+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:12.957677+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:13.958134+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:14.958525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:15.958952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:16.959600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2bd5ac00 session 0x563e2c6dc3c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb3400
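Annotation: ms_handle_reset means a peer dropped its messenger connection and the OSD tore the session down; the handle_auth_request that follows is a peer reconnecting and being issued a fresh cephx challenge. When the two stay rare and roughly paired, as here, it reads as routine reconnects rather than an auth problem. A counting sketch, assuming only these two message shapes:

    import re
    from collections import Counter

    EVENT_RE = re.compile(r"ms_handle_reset|handle_auth_request added challenge")

    def count_events(lines):
        counts = Counter()
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                counts["reset" if m.group(0).startswith("ms_") else "challenge"] += 1
        return counts

    # A skewed result (many resets, few challenges) would be worth
    # investigating; balanced small counts are normal churn.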
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:17.959952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 234881024 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:18.960575+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:19.960968+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:20.961355+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:21.961756+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:22.962066+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:23.962544+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:24.962928+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:25.963273+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:26.963470+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:27.963890+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:28.964352+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:29.964808+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:30.965149+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:31.965609+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:32.966041+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:33.966502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:34.966935+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:35.967659+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:36.968301+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:37.968711+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:38.969104+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:39.969490+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:40.969987+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:41.970641+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:42.970997+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:43.971497+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:44.972072+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:45.972362+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:46.972813+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:47.973074+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:48.973504+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:49.973865+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 5890048 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:50.974286+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:51.974646+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:52.975081+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:53.976024+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:54.976337+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:55.976639+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:56.977051+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:57.977464+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267458 data_alloc: 218103808 data_used: 17141760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:58.977716+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:59.978043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x2201005/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102686720 unmapped: 5881856 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:00.978502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2a2e0400 session 0x563e2c3c2780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c50a400 session 0x563e2a8183c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 121.122047424s of 121.602539062s, submitted: 37
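Annotation: the _kv_sync_thread utilization line says the RocksDB sync thread was idle 121.12 s of a 121.60 s window while submitting 37 transactions, i.e. about 0.4% busy. Turning the line into a busy fraction and a transaction rate:

    import re

    KV_RE = re.compile(r"idle (\d+\.\d+)s of (\d+\.\d+)s, submitted: (\d+)")

    idle, window, submitted = KV_RE.search(
        "_kv_sync_thread utilization: idle 121.122047424s of "
        "121.602539062s, submitted: 37").groups()
    busy = 1 - float(idle) / float(window)
    print(f"busy {busy:.2%}, {int(submitted) / float(window):.2f} txn/s")
    # -> busy 0.40%, 0.30 txn/s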
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101326848 unmapped: 7241728 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:01.978706+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:02.978990+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f2400 session 0x563e29816d20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159200 data_alloc: 218103808 data_used: 13774848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e48000/0x0/0x4ffc00000, data 0x1760ff5/0x1826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:03.979358+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:04.979805+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
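Annotation: between consecutive heartbeats the stored-data field is shrinking (0x2201005 → 0x1760ff5 → 0x175cff5) while available space grows, i.e. objects are being deleted or trimmed rather than written. A sketch that diffs consecutive heartbeats, reusing the STATFS_RE pattern from the earlier sketch (a hypothetical helper, not a Ceph API):

    def statfs_deltas(lines):
        """Yield byte deltas of stored data between consecutive heartbeats."""
        prev = None
        for line in lines:
            m = STATFS_RE.search(line)
            if not m:
                continue
            stored = int(m.group(4), 16)
            if prev is not None:
                yield stored - prev
            prev = stored

    # Around this point the deltas are negative (first ~-11 MiB, then a
    # small -16 KiB step), consistent with space being freed.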
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:05.980104+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:06.980314+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:07.980606+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159200 data_alloc: 218103808 data_used: 13774848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:08.980882+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:09.981239+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:10.981458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:11.981794+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:12.982119+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159200 data_alloc: 218103808 data_used: 13774848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:13.982733+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:14.983150+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:15.983629+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:16.984002+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:17.984287+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159200 data_alloc: 218103808 data_used: 13774848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:18.984777+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:19.985172+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:20.985598+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:21.985967+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:22.986247+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159200 data_alloc: 218103808 data_used: 13774848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:23.986636+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9e4c000/0x0/0x4ffc00000, data 0x175cff5/0x1822000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:24.987033+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.006849289s of 23.268310547s, submitted: 25
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2c4f5800 session 0x563e2c5edc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 7233536 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e28ecac00 session 0x563e2b0dfc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:25.987436+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102424576 unmapped: 6144000 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b8000/0x0/0x4ffc00000, data 0x13eeff5/0x14b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,1])
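Annotation: this is the first heartbeat in the excerpt whose op histogram is non-empty (op hist [0,1] instead of op hist []), suggesting the OSD actually serviced a small number of ops in that interval; everything before it was pure idle housekeeping.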
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 ms_handle_reset con 0x563e2a2e0400 session 0x563e28ccd860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:26.987837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:27.988126+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:28.988645+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:29.988842+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:30.989161+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:31.989659+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:32.990074+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:33.990503+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:34.990940+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:35.991489+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:36.991915+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:37.992352+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:38.992888+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:39.993269+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:40.993581+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:41.994501+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:42.994887+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:43.995350+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-mon[191910]: pgmap v2370: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3778210567' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/634308042' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2505643086' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/169097721' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2884067817' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:44.995772+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:45.996218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:46.996678+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:47.996943+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:48.997196+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:49.997672+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:50.998213+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:51.998665+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:53.000653+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:54.001107+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:55.001638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:56.002131+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:57.002614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:58.003030+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125515 data_alloc: 218103808 data_used: 13537280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:59.003640+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:00.004073+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:01.004587+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:02.005000+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:03.005820+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125675 data_alloc: 218103808 data_used: 13541376
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:04.006313+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:05.006653+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:06.006973+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:07.007355+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.017051697s of 42.389614105s, submitted: 57
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c501000 session 0x563e2c303a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa1b9000/0x0/0x4ffc00000, data 0x13eef60/0x14b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:08.007774+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129849 data_alloc: 218103808 data_used: 13549568
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:09.008367+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa1b5000/0x0/0x4ffc00000, data 0x13f0add/0x14b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:10.008925+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:11.009223+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:12.009516+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:13.010043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129849 data_alloc: 218103808 data_used: 13549568
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:14.010571+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:15.010883+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa1b5000/0x0/0x4ffc00000, data 0x13f0add/0x14b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:16.011233+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b446800 session 0x563e2c518f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50ac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c50ac00 session 0x563e298a7680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5dc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2bd5dc00 session 0x563e2c6b65a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 6127616 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:17.012118+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2a2e0400 session 0x563e29827680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 6111232 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b446800 session 0x563e2c6c63c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:18.012481+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 6111232 heap: 108568576 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c501000 session 0x563e2c6b7a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50ac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276553154s of 11.286264420s, submitted: 1
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186386 data_alloc: 218103808 data_used: 13549568
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c50ac00 session 0x563e2978a780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:19.012832+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50f000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c50f000 session 0x563e2c3c3e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2a2e0400 session 0x563e2c5ee5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b446800 session 0x563e297bde00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c501000 session 0x563e2c5ef0e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50ac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c50ac00 session 0x563e2c518d20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 15106048 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5a800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2bd5a800 session 0x563e2a0bb860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2a2e0400 session 0x563e2c5ec780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:20.013172+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b446800 session 0x563e2c5efc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c501000 session 0x563e2a67e5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50ac00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c50ac00 session 0x563e2a240960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f3c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2c4f3c00 session 0x563e2bfc74a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103350272 unmapped: 15237120 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a86000/0x0/0x4ffc00000, data 0x1b21b5f/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:21.013590+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e29681c00 session 0x563e2b0e6780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103350272 unmapped: 15237120 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:22.014069+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b411c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b411c00 session 0x563e2c518b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103350272 unmapped: 15237120 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a86000/0x0/0x4ffc00000, data 0x1b21b5f/0x1be8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:23.014683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5b400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2bd5b400 session 0x563e2c6703c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b418400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103350272 unmapped: 15237120 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b418400 session 0x563e298de960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192991 data_alloc: 218103808 data_used: 13549568
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:24.015155+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29683800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 15056896 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:25.015617+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 15056896 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:26.016013+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103530496 unmapped: 15056896 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:27.016437+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103464960 unmapped: 15122432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:28.016772+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 15024128 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a61000/0x0/0x4ffc00000, data 0x1b45b6f/0x1c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212003 data_alloc: 218103808 data_used: 16023552
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:29.017109+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:30.017304+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:31.018545+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:32.018979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:33.019540+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:34.019814+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241123 data_alloc: 218103808 data_used: 20144128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a61000/0x0/0x4ffc00000, data 0x1b45b6f/0x1c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:35.021159+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:36.021626+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:37.021942+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:38.022495+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:39.022938+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241123 data_alloc: 218103808 data_used: 20144128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a61000/0x0/0x4ffc00000, data 0x1b45b6f/0x1c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:40.023509+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:41.023787+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:42.024229+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:43.024600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:44.025084+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241123 data_alloc: 218103808 data_used: 20144128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:45.025523+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a61000/0x0/0x4ffc00000, data 0x1b45b6f/0x1c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e2b446c00 session 0x563e2a67eb40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e29824800 session 0x563e2c5212c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:46.025871+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.121934891s of 27.372097015s, submitted: 36
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e29683800 session 0x563e29777680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 11624448 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9a61000/0x0/0x4ffc00000, data 0x1b45b6f/0x1c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:47.026150+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 ms_handle_reset con 0x563e29681c00 session 0x563e2ba69c20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:48.026648+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa0cd000/0x0/0x4ffc00000, data 0x13f0add/0x14b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:49.027090+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137361 data_alloc: 218103808 data_used: 13549568
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:50.027413+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa0cd000/0x0/0x4ffc00000, data 0x13f0add/0x14b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b411c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:51.027711+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 130 ms_handle_reset con 0x563e2b411c00 session 0x563e2c5ee000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:52.028095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:53.028686+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa1b6000/0x0/0x4ffc00000, data 0x13f26ae/0x14b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:54.029026+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141359 data_alloc: 218103808 data_used: 13557760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:55.029588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:56.030003+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:57.030502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:58.030880+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:59.031506+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141359 data_alloc: 218103808 data_used: 13557760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa1b6000/0x0/0x4ffc00000, data 0x13f26ae/0x14b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:00.032683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 14524416 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.526273727s of 14.896162033s, submitted: 61
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:01.033208+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa1b3000/0x0/0x4ffc00000, data 0x13f4111/0x14ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104112128 unmapped: 14475264 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:02.033682+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa1b3000/0x0/0x4ffc00000, data 0x13f4111/0x14ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104120320 unmapped: 14467072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:03.034200+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104120320 unmapped: 14467072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:04.034613+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147049 data_alloc: 218103808 data_used: 13557760
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.035096+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.035622+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.036324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa1b2000/0x0/0x4ffc00000, data 0x13f4144/0x14bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 132 ms_handle_reset con 0x563e2b446000 session 0x563e2c5ed860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.036789+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.037287+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152942 data_alloc: 218103808 data_used: 13565952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.037712+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa1ad000/0x0/0x4ffc00000, data 0x13f5ce4/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.038154+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.038662+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa1ad000/0x0/0x4ffc00000, data 0x13f5ce4/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.039112+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.039592+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152942 data_alloc: 218103808 data_used: 13565952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.040027+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.238119125s of 14.359631538s, submitted: 30
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 14311424 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.040505+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 ms_handle_reset con 0x563e2c501400 session 0x563e2c5ecb40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.040901+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.041280+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.041899+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153873 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.042311+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.042548+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.043177+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.043689+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.059077+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153873 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.059561+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.059867+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 ms_handle_reset con 0x563e2c4f4400 session 0x563e2b0deb40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.979585648s of 11.122095108s, submitted: 27
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.060116+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.060453+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.060760+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.061095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.061547+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.061757+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.061944+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.062315+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.062633+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.063075+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.063636+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.063930+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.064458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.064903+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.065203+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.065630+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 14237696 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.066137+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 14237696 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.066605+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.066867+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.067284+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.067568+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.067860+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.068146+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.068601+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.069324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.069838+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.070757+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8518 writes, 32K keys, 8518 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8518 writes, 1987 syncs, 4.29 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 891 writes, 2340 keys, 891 commit groups, 1.0 writes per commit group, ingest: 1.49 MB, 0.00 MB/s
                                            Interval WAL: 891 writes, 405 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.071287+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.072098+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.073092+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.074066+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.074637+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.075713+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.075993+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.077542+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.078936+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.079957+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.080581+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.081091+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.081517+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.082205+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.082550+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.082974+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.083250+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.083528+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.083936+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.084541+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.084977+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
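[editor's note] In the _resize_shards lines, the *_alloc columns are per-cache budgets carved out of cache_size, while the *_used columns show actual consumption. The allocations above should sum to roughly the total budget; a check:

    # The *_alloc columns above should add up to (about) cache_size.
    cache_size = 2845415832
    alloc = {
        "kv":       1207959552,   # 1152 MiB
        "kv_onode":  234881024,   #  224 MiB
        "meta":     1140850688,   # 1088 MiB
        "data":      218103808,   #  208 MiB
    }
    total = sum(alloc.values())
    print(f"allocated {total} of {cache_size} bytes "
          f"({100 * total / cache_size:.1f}%)")
    # ~98.5% of the budget is distributed; the *_used columns (a few KiB
    # to ~13 MiB) confirm the caches are essentially empty.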
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
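[editor's note] The monitor address uses Ceph's entity-address format, <protocol>:<ip>:<port>/<nonce>; v2 is the msgr2 wire protocol and 3300 its default monitor port. A minimal parse (the regex is illustrative, not Ceph's own):

    import re
    # Minimal parser for the entity address in the line above
    # ("<proto>:<ip>:<port>/<nonce>"); the regex is illustrative only.
    addr = "v2:192.168.122.100:3300/0"
    proto, ip, port, nonce = re.fullmatch(
        r"(v[12]):([\d.]+):(\d+)/(\d+)", addr).groups()
    print(proto, ip, port, nonce)   # v2 192.168.122.100 3300 0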
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.085500+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.085907+0000)
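[editor's note] The rotating-key expiry stamp advances by one second per tick while every journal line carries the same wall-clock time (20:23:35): the per-second debug messages were evidently buffered and flushed to the journal in one burst. Comparing the last two stamps above:

    from datetime import datetime
    # Delta between the last two expiry stamps above: one tick per second.
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    a = datetime.strptime("2025-10-02T20:00:15.085500+0000", fmt)
    b = datetime.strptime("2025-10-02T20:00:16.085907+0000", fmt)
    print(b - a)   # 0:00:01.000407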
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.086252+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.086592+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.087194+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.087849+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.088280+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.088706+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.089038+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.089655+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.090043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.090465+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.090934+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.091317+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.092128+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.092637+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.093050+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.093739+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.094189+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.094598+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.094929+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.095298+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.095832+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.096204+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.096704+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.097102+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.097557+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.097962+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.098303+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.098702+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.099012+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.099457+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.099870+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.100323+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.100873+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.101320+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.101702+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.102116+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.103473+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.104719+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.106053+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.106600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.107651+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.107983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.108898+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.109619+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.110351+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.111049+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.111345+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.111877+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.112558+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.113064+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.113491+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.113965+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.114724+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.115324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.115859+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.116502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.116957+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.117290+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.117691+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.118208+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.118802+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.119278+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.119768+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.120061+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.120492+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.121039+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.121638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.122028+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.122525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 119.220855713s of 119.241088867s, submitted: 14
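[editor's note] This utilization line is the only activity summary in the burst: the KV sync thread was idle for 119.22 s of a 119.24 s window while committing 14 transactions. The implied per-transaction cost:

    # Arithmetic for the _kv_sync_thread utilization line above.
    idle, window, submitted = 119.220855713, 119.241088867, 14
    busy = window - idle
    print(f"busy {busy * 1e3:.1f} ms over {window:.0f} s "
          f"({100 * idle / window:.3f}% idle), "
          f"~{busy / submitted * 1e3:.2f} ms per submitted txn")
    # ~20 ms of work in two minutes: the store is effectively idle.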
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 14172160 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.123775+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104431616 unmapped: 14155776 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.124473+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104472576 unmapped: 14114816 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.124681+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.124961+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.125556+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.125889+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.126312+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.126634+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.126945+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.127266+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.128875+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.129290+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.129729+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.130161+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.130658+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:41.131114+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:42.131704+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:43.132166+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:44.132596+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:45.133094+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:46.133620+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:47.133936+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:48.134303+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:49.134890+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:50.135222+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:51.135632+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:52.135996+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:53.138768+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:54.139198+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:55.139483+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:56.139774+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:57.139995+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:58.141017+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:59.141590+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:00.143678+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:01.145314+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:02.145814+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:03.146346+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:04.146654+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:05.147063+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:06.147354+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:07.147726+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:08.148325+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:09.148900+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:10.149300+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:11.149753+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:12.150186+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:13.150654+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:14.151032+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:15.151537+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:16.152002+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:17.152473+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:18.152773+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:19.153193+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:20.153650+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:21.154058+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:22.154547+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:23.154858+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:24.155288+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:25.155789+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:26.156218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:27.156672+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:28.157132+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:29.157635+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:30.158610+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:31.159160+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:32.159588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:33.159908+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:34.160347+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:35.161024+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:36.161620+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:37.162190+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:38.162622+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:39.163095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:40.163619+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:41.163947+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:42.164498+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:43.164956+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:44.165688+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:45.166035+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:46.166798+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:47.167158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:48.167608+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:49.168065+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:50.168511+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:51.168940+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:52.169324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.169793+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.170062+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.170786+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:56.171226+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:57.171546+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:58.171785+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:59.172158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:00.172641+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:01.172979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:02.173196+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:03.173600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:04.174128+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:05.174760+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:06.175196+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:07.175582+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.176132+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.177839+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.178309+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:11.178654+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:12.179496+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.180214+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.180725+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.181256+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.181678+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.182070+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.183121+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.183697+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.184081+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.184499+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.184777+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.185200+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.185600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.185989+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.186756+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.187107+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.187601+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.188083+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.188744+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.189183+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.189658+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.190197+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.190728+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.191137+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.191504+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.192034+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.192299+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.192804+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.193197+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.193580+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.193957+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.194364+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.194827+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.195170+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.195615+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.196015+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.196511+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.196911+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.197288+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.197837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.198298+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.198723+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.199169+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.199616+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.200044+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.200912+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.201174+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.201632+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.202046+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.202527+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.202845+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.203095+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.203327+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.203602+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.203941+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.204318+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.204748+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.205267+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 163.753616333s of 164.341629028s, submitted: 90
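
The _kv_sync_thread utilization line is the interval summary: idle 163.753616333 s of 164.341629028 s means the kv sync thread was busy only ~0.59 s (~0.36%) of the window while committing 90 transactions, i.e. one every ~1.8 s on this nearly idle store:

    idle, total, submitted = 163.753616333, 164.341629028, 90
    busy = total - idle
    print(f"busy {busy:.3f}s ({busy / total:.2%}), "
          f"~{total / submitted:.1f}s between the {submitted} submitted txns")
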
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.205686+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 14073856 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.206120+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 22421504 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244277 data_alloc: 218103808 data_used: 13574144
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 135 ms_handle_reset con 0x563e2b445c00 session 0x563e29830b40
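
Here the quiet tick loop is interrupted by an OSDMap update: osd.1 renews its monitor subscriptions and messages mon.compute-0 over msgr2 (the v2:192.168.122.100:3300/0 address), receives epoch 135 while holding 134 ("i have 134, src has [1,135]"), applies it, and tears down the superseded session (ms_handle_reset); the handle_auth_request "added challenge" lines are inbound connections completing cephx as this happens, and the whole sequence repeats for epoch 136 just below. A toy sketch of the epoch bookkeeping that handle_osd_map implies (illustrative, not the OSD's actual code):

    def epochs_to_apply(i_have: int, src_first: int, src_last: int) -> range:
        # Apply only the missing tail of what the source advertises.
        return range(max(i_have + 1, src_first), src_last + 1)

    print(list(epochs_to_apply(134, 1, 135)))  # [135]
    print(list(epochs_to_apply(135, 1, 136)))  # [136]
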
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.206488+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104570880 unmapped: 22413312 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9535000/0x0/0x4ffc00000, data 0x206ae72/0x2138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.206837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 22364160 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 ms_handle_reset con 0x563e2c50e400 session 0x563e2a59fa40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d34000/0x0/0x4ffc00000, data 0x286ae82/0x2939000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
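
The heartbeat statfs is not static across these epoch bumps: data stored steps from 0x13f92c2 through 0x206ae72 to 0x286ae82 as maps 135 and 136 land, roughly 12 MiB and then 8 MiB of new writes. A quick delta check with the hex values copied from the heartbeat lines:

    stored = {134: 0x13f92c2, 135: 0x206ae72, 136: 0x286ae82}
    MiB = 1 << 20
    pairs = list(stored.items())
    for (e1, a), (e2, b) in zip(pairs, pairs[1:]):
        print(f"epoch {e1} -> {e2}: +{(b - a) / MiB:.1f} MiB stored")
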
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.207083+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.207683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.208041+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.208838+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.209265+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.209787+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.210207+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.210670+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.211025+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.211471+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.212042+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.212553+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.212999+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 22315008 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.213318+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 22306816 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.213695+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 22306816 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.214162+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.214590+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.214853+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.215209+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.215617+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.216003+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.216346+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.216741+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.217125+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.217575+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.218019+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.220574+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.220972+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.221275+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.221663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.222012+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.222677+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.223049+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.223602+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.223958+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.224431+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.224843+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.225247+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.225477+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.225818+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.226186+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.226609+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.226942+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
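handle_osd_map marks the monitor pushing new OSDMap epochs: here osd.1 holds epoch 136 and receives [137,137] from a source that advertises [1,137]; in the lines that follow, its reported epoch ticks up to 137, then 138 and 139, with the ms_handle_reset / handle_auth_request churn reflecting peers reconnecting on each map change. A hypothetical lag check over such lines:

```python
import re

# Compare the OSD's current epoch ("i have N") with the newest
# epoch the sender advertises ("src has [first,last]").
MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                    r"i have (\d+), src has \[(\d+),(\d+)\]")

def osdmap_lag(line: str) -> int:
    lo, hi, have, src_lo, src_hi = map(int, MAP_RE.search(line).groups())
    return max(0, src_hi - have)

line = ("osd.1 136 handle_osd_map epochs [137,137], i have 136, "
        "src has [1,137]")
print(f"osd is {osdmap_lag(line)} epoch(s) behind")  # 1
```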
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.658763885s of 46.857627869s, submitted: 20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313045 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2c4f0800 session 0x563e2a0eb2c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.227310+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50d000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2c50d000 session 0x563e2a0eaf00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2b445c00 session 0x563e2a0ead20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 22265856 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.227739+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 22265856 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8d2c000/0x0/0x4ffc00000, data 0x286e59f/0x2940000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.228256+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 138 ms_handle_reset con 0x563e2c4f0800 session 0x563e2a0eab40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.228618+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.229123+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315171 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.229525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.229977+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8d2a000/0x0/0x4ffc00000, data 0x2870170/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.230472+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8d2a000/0x0/0x4ffc00000, data 0x2870170/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.230937+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.231281+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318145 data_alloc: 218103808 data_used: 13582336
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.231702+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e2b10bc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5cc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bd5cc00 session 0x563e2a4f3a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.232112+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 10043392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d27000/0x0/0x4ffc00000, data 0x2871bd3/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.232498+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 10043392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5b800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.062339783s of 13.161125183s, submitted: 28
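The occasional _kv_sync_thread utilization line is plain arithmetic: idle 13.06 s of a 13.16 s window means the RocksDB sync thread was busy only about 0.75% of the time across 28 submitted transactions (the earlier sample, 46.66 s idle of 46.86 s over 20 submissions, works out to ~0.4% busy). A one-off calculation, with KV_RE and kv_sync_busy as hypothetical names:

```python
import re

KV_RE = re.compile(r"idle (?P<idle>[\d.]+)s of (?P<total>[\d.]+)s, "
                   r"submitted: (?P<n>\d+)")

def kv_sync_busy(line: str) -> tuple[float, int]:
    m = KV_RE.search(line)
    idle, total = float(m["idle"]), float(m["total"])
    # Busy fraction is whatever part of the window was not idle.
    return 100.0 * (total - idle) / total, int(m["n"])

line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread "
        "utilization: idle 13.062339783s of 13.161125183s, submitted: 28")
busy_pct, n = kv_sync_busy(line)
print(f"kv_sync busy {busy_pct:.2f}% over {n} submissions")  # ~0.75%
```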
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bd5b800 session 0x563e2c5205a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.232805+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d27000/0x0/0x4ffc00000, data 0x2871bd3/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b445c00 session 0x563e28ccd860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 13967360 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.233166+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 13967360 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446005 data_alloc: 234881024 data_used: 25051136
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.233637+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f821f000/0x0/0x4ffc00000, data 0x337abd3/0x344f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.234032+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.234588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.235055+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.235260+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae400 session 0x563e2b0dfc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f821f000/0x0/0x4ffc00000, data 0x337abd3/0x344f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446005 data_alloc: 234881024 data_used: 25051136
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.235644+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2a2e0400 session 0x563e2a0e9680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b429800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b429800 session 0x563e2a0e8960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.235977+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c505400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c505400 session 0x563e2a0e81e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 14311424 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b429800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.236206+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 14311424 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.236593+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 14303232 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.236969+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 14237696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1459997 data_alloc: 234881024 data_used: 26419200
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.237202+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 12804096 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.237451+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 11026432 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.237714+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.237979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.238215+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.238659+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.238854+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.239156+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.239538+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.239752+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.240041+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.240462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.240708+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.241143+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.241565+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.241942+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.242188+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.242499+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.242726+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.243014+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.243323+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.243546+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.243957+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.244236+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.244522+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.244776+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.245129+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.245529+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.245829+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2a0e9680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2a23da40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2c5ed4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2a23f4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c502800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.027656555s of 41.223041534s, submitted: 22
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.246026+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c502800 session 0x563e297bd860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e296ab860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c026000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2c6c61e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2c5bfe00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17432576 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.246251+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1641312 data_alloc: 234881024 data_used: 35065856
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f750a000/0x0/0x4ffc00000, data 0x408cc55/0x4164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,5])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129556480 unmapped: 12214272 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.246501+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129556480 unmapped: 12214272 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.246733+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f0c00 session 0x563e2c5ec960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2d604b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c6701e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 11632640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2a0eb860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2d6041e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.246904+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x5263c55/0x533b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 12222464 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.247242+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 13082624 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.247871+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1782029 data_alloc: 234881024 data_used: 35516416
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6325000/0x0/0x4ffc00000, data 0x526fc55/0x5347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c507800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c507800 session 0x563e2a818780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.248074+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2c518000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.248597+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c6c7a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2c6b74a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.248979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c507800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128794624 unmapped: 12976128 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.249218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2c5ef2c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 12943360 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.260210+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1787707 data_alloc: 234881024 data_used: 35520512
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c509400 session 0x563e2c6705a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6316000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c509400 session 0x563e2c3c34a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.260508+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.324402809s of 12.147669792s, submitted: 194
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2c6c6780
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 13115392 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.260790+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 133103616 unmapped: 8667136 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.260934+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 5521408 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.266360+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.266607+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1854029 data_alloc: 251658240 data_used: 45158400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631d000/0x0/0x4ffc00000, data 0x5277c75/0x5351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.266811+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.267074+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e29777860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c5210e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 5505024 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.267258+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503400 session 0x563e2c0274a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.267606+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.267878+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.268142+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.268634+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.269026+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.269484+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.269869+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.270191+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.270587+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.270834+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.271218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 5464064 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.271564+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:23.271844+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.272159+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.272541+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.272912+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.273173+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.273656+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.274073+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.274472+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136323072 unmapped: 5447680 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.274900+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 5439488 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.639814377s of 28.783666611s, submitted: 29
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.275185+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 5120000 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1855220 data_alloc: 251658240 data_used: 45146112
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.275495+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.275676+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.275891+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.276078+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 5054464 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e29791860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b429800 session 0x563e2a4f6960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.276282+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144416768 unmapped: 1548288 heap: 145965056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b410c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b410c00 session 0x563e2c5ec3c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1920636 data_alloc: 251658240 data_used: 46415872
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.276515+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144482304 unmapped: 1482752 heap: 145965056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.276733+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5af5000/0x0/0x4ffc00000, data 0x5a9dc65/0x5b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143155200 unmapped: 3858432 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.276930+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143310848 unmapped: 3702784 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.277366+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 3506176 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.277684+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143556608 unmapped: 3457024 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1935816 data_alloc: 251658240 data_used: 47112192
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5ab3000/0x0/0x4ffc00000, data 0x5ad9c65/0x5bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.278001+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 3416064 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.278234+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 3416064 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5ab3000/0x0/0x4ffc00000, data 0x5ad9c65/0x5bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2a2e0400 session 0x563e29777e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.278441+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 3407872 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.809200287s of 13.237763405s, submitted: 139
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae800 session 0x563e2a6ae5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.278588+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.278794+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755600 data_alloc: 251658240 data_used: 41615360
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.279022+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x48efc65/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.279441+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.279797+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.280073+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x48efc65/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.280311+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755844 data_alloc: 251658240 data_used: 41615360
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.280525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.280848+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6c9d000/0x0/0x4ffc00000, data 0x48f8c65/0x49d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.281182+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.281584+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.281968+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140763136 unmapped: 6250496 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.015203476s of 12.132278442s, submitted: 25
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755920 data_alloc: 251658240 data_used: 41615360
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.282264+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 6791168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50b800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50b800 session 0x563e2c303680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29721c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29721c00 session 0x563e298a7680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b447000 session 0x563e28ccd860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.282560+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50e400 session 0x563e2c026960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 6774784 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c505000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6c9a000/0x0/0x4ffc00000, data 0x48fbc65/0x49d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1,2])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c505000 session 0x563e2a23fa40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29721c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29721c00 session 0x563e2a4f30e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b447000 session 0x563e2a0e81e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50b800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50b800 session 0x563e2a0e9e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50e400 session 0x563e2a59fc20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.282858+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.283043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.283351+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792350 data_alloc: 251658240 data_used: 41615360
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.283681+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f2400 session 0x563e29816d20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.284025+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.284260+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.284500+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.284704+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1845812 data_alloc: 251658240 data_used: 41615360
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.284908+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2c6dde00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.285158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2c519680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.276736259s of 12.543600082s, submitted: 51
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.285435+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 14327808 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e29777680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e2b0df4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.285587+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142548992 unmapped: 12877824 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.285794+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 12165120 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1878755 data_alloc: 251658240 data_used: 45379584
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.286009+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 12165120 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.286322+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 12156928 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.287284+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 12156928 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.287462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142630912 unmapped: 12795904 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.287674+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142835712 unmapped: 12591104 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1890499 data_alloc: 251658240 data_used: 46137344
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.287886+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143106048 unmapped: 12320768 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.288180+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 9453568 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.288423+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 6979584 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.288772+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 6979584 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734128952s of 11.828881264s, submitted: 30
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.288986+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936515 data_alloc: 251658240 data_used: 52989952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.289163+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.289366+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.289643+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.289848+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.290078+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936515 data_alloc: 251658240 data_used: 52989952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.290329+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 6594560 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.290860+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 6594560 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.291187+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.291474+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.291722+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936163 data_alloc: 251658240 data_used: 52989952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.292088+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.292544+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29825400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.136129379s of 13.184433937s, submitted: 6
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.292794+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29825400 session 0x563e2ba694a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e29777860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e29776b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e29791860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2bec8b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 12099584 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5e0f000/0x0/0x4ffc00000, data 0x5783d00/0x585f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.293006+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 12648448 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.293298+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 12648448 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2007891 data_alloc: 251658240 data_used: 52989952
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.293569+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 12640256 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.294062+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 12632064 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.294536+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 12632064 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.294737+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b4ce400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b4ce400 session 0x563e2a0bab40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.295025+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b4ce400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b4ce400 session 0x563e2c519a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2010003 data_alloc: 251658240 data_used: 52977664
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.295504+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2b9b5c20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e299e1e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.295705+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 10715136 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.295959+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149929984 unmapped: 11329536 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503c00 session 0x563e2c303c20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508400 session 0x563e2d6052c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.684912682s of 11.138339996s, submitted: 92
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.296281+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1000 session 0x563e29777860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 14548992 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.296597+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146833408 unmapped: 14426112 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f63e3000/0x0/0x4ffc00000, data 0x50fad29/0x51d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1897107 data_alloc: 251658240 data_used: 49094656
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.296937+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 12812288 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.297276+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 151085056 unmapped: 10174464 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.297676+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 151126016 unmapped: 10133504 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e2ba681e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2c302000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.297891+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148381696 unmapped: 12877824 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e2be183c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.298202+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 13377536 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1814543 data_alloc: 251658240 data_used: 47054848
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.298446+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6a36000/0x0/0x4ffc00000, data 0x485dcc7/0x4937000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 10936320 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.298699+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 11100160 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.299063+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150208512 unmapped: 11051008 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.299365+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.446751595s of 10.153461456s, submitted: 133
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 11272192 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.299878+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68af000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871715 data_alloc: 251658240 data_used: 47898624
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.300077+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.300576+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.300939+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68af000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.301298+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.301683+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1872355 data_alloc: 251658240 data_used: 47915008
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.302079+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae400 session 0x563e2a23c5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2c670f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.302283+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68bd000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1,0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e29827a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68bd000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144547840 unmapped: 16711680 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.302829+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.303299+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.303680+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1716636 data_alloc: 234881024 data_used: 40157184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.304110+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.304583+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f73ec000/0x0/0x4ffc00000, data 0x41a9cb7/0x4282000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.304928+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.305311+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.966886520s of 15.341312408s, submitted: 58
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c507800 session 0x563e2ba68960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2a23c1e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.305541+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136060928 unmapped: 25198592 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b3a8400 session 0x563e2c5bf0e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491097 data_alloc: 234881024 data_used: 28659712
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.306031+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 41902080 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.306318+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2b10b0e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.306668+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cb3000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.307099+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.308188+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1552674 data_alloc: 234881024 data_used: 28667904
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.308583+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f04000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.308999+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.309272+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.309590+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.849703789s of 10.186175346s, submitted: 47
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f04000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2a67f4a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e2a819e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2978a5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 41738240 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb0c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb0c00 session 0x563e2b10ab40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.309823+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2be19680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2be19e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e2be18b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2be18000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f4000 session 0x563e2bf9a000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 41517056 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.310155+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638880 data_alloc: 234881024 data_used: 28676096
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c303e00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 42172416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.310518+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a23d860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e299e05a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2a0ea1e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2a23e000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 131317760 unmapped: 46727168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.310915+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c04cf00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 39280640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a56c1e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.311300+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f759a000/0x0/0x4ffc00000, data 0x3aa0793/0x3b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2a0e9c20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 39280640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.311550+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b447800 session 0x563e2bec8960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681c00 session 0x563e2a0e9860
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c509800 session 0x563e29776960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138919936 unmapped: 39124992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2a0e8d20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.311919+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2b0e6960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1642907 data_alloc: 234881024 data_used: 31825920
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2aed4b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b426400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b447800 session 0x563e28ccc5a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b426400 session 0x563e2a241a40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2a57a960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.312458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.312727+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.313016+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f5800 session 0x563e2bec9680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f729a000/0x0/0x4ffc00000, data 0x42f87d6/0x43d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.249265671s of 10.107059479s, submitted: 83
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfafc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139370496 unmapped: 38674432 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.313587+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139370496 unmapped: 38674432 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.313909+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2a57ab40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663417 data_alloc: 234881024 data_used: 33452032
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a0bb680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 39460864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.314324+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c0265a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138264576 unmapped: 39780352 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.314603+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138264576 unmapped: 39780352 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.314906+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 38666240 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.315231+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.315816+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.316268+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.316694+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.317067+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.317609+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.317947+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.318313+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.318711+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.319076+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.319490+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.319881+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.320461+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.320847+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.321260+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.321713+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.322169+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.322591+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.322856+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.323262+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.323606+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.323890+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.324269+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.324611+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.324846+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.326340+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.326646+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.327164+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.327499+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.327826+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.328118+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.328462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.861461639s of 37.033191681s, submitted: 30
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.328806+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142426112 unmapped: 35618816 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.329055+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 32358400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.329459+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 32301056 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c5000/0x0/0x4ffc00000, data 0x3fb8793/0x4091000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.329856+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 32292864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.330151+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 32915456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692490 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.330469+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.330785+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.331059+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.331458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.331858+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698668 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.332275+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.486363411s of 10.983119965s, submitted: 51
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.332650+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.332991+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.333344+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.333666+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698684 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.334667+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.335005+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.335344+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.335704+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.336107+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698684 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.336557+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.336899+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.996733665s of 11.008992195s, submitted: 1
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.337291+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.337731+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.338511+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.338869+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.339244+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.339666+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.340091+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.340649+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.341005+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.341479+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.341905+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.342320+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.342622+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.343023+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.343436+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.343805+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.344165+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.344547+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.344804+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.345140+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.345505+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 32833536 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.345769+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 32833536 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.346260+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2779 syncs, 3.76 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1922 writes, 6979 keys, 1922 commit groups, 1.0 writes per commit group, ingest: 7.75 MB, 0.01 MB/s
                                            Interval WAL: 1922 writes, 792 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.346558+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.346938+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.347343+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.347766+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.348125+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.348600+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.349143+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c50e000 session 0x563e2bfc72c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: mgrc ms_handle_reset ms_handle_reset con 0x563e2a142000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:23:35 compute-0 ceph-osd[207106]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: get_auth_request con 0x563e2b447800 auth_method 0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: mgrc handle_mgr_configure stats_period=5
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.349652+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c504c00 session 0x563e298303c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfba400 session 0x563e297a8f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c504c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.350181+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.350684+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.350967+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.351498+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145367040 unmapped: 32677888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.351859+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.352313+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.352757+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.353119+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.353346+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.353556+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.353762+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.353979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.354712+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.354994+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.355302+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.355574+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.355804+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.356150+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.356638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.356863+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.357194+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.357557+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.357889+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.358165+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.358542+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.359082+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.359317+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.359556+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.359917+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.360580+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.361084+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.361480+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.361925+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.362327+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.362713+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.362944+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.363357+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.363954+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.364224+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.364689+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.365076+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.365609+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.365938+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.366478+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.366914+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.367483+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.367832+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.368492+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.368991+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.369548+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.369838+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.370037+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.370458+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.371083+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.371540+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.371861+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.372044+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.372457+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.372841+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.373178+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 32768000 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.373457+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 32768000 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.373692+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29720c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 92.732841492s of 92.740623474s, submitted: 1
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71bf000/0x0/0x4ffc00000, data 0x3fc57bc/0x409f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,19])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146563072 unmapped: 31481856 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.373960+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852929 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29720c00 session 0x563e29790960
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.374359+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f5800 session 0x563e2a240f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.374922+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.375310+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.375763+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.376158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1816761 data_alloc: 234881024 data_used: 40574976
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b4000/0x0/0x4ffc00000, data 0x4ed07f5/0x4faa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.376530+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29683400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.376873+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29683400 session 0x563e2aed4b40
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.377268+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.377544+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.378074+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb3000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 33579008 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1819214 data_alloc: 251658240 data_used: 40783872
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.378318+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146661376 unmapped: 31383552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.378523+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 153673728 unmapped: 24371200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.378946+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb3400 session 0x563e2a4f23c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.379356+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.380007+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927214 data_alloc: 251658240 data_used: 55971840
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.380512+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.380760+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.381027+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.381245+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.263887405s of 20.334077835s, submitted: 44
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.381518+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927326 data_alloc: 251658240 data_used: 55980032
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.381724+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.381933+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 21962752 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.382425+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 21929984 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.382662+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.383037+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.383253+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.383791+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.384236+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.384681+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.384912+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.385254+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.385454+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.385624+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.385824+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.386164+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.386351+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.386567+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.386766+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.386952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.387175+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.387369+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.387578+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.686796188s of 22.472694397s, submitted: 110
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 20561920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.387809+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163241984 unmapped: 14802944 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.388017+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163692544 unmapped: 14352384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f5579000/0x0/0x4ffc00000, data 0x5bfc818/0x5cd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.388518+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039870 data_alloc: 251658240 data_used: 56713216
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.388765+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.389068+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.389354+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54f6000/0x0/0x4ffc00000, data 0x5c87818/0x5d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.389784+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.390188+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039870 data_alloc: 251658240 data_used: 56713216
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.390509+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.390732+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.391033+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.391298+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.391821+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037570 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.392087+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.392331+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.392525+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.392796+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.393057+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037570 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.393258+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.393470+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.506355286s of 20.436758041s, submitted: 159
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.393710+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 12697600 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.394041+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 12697600 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.394493+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.394891+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.395061+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.395510+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.395870+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.396117+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.396558+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.396790+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.396990+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.397236+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.397515+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.786537170s of 12.801416397s, submitted: 2
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.397787+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.398033+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.398244+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.398648+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.399072+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.399349+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.399748+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.400044+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.400465+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.400819+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.401093+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.401449+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.401745+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.402080+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.402527+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.402940+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.403310+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.403730+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.404010+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.404270+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.404556+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.404759+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.404938+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.405144+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.405348+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.405526+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.405833+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.406071+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.406427+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.406687+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.406887+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.686998367s of 30.720819473s, submitted: 3
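These _kv_sync_thread utilization lines are the clearest load signal in the section: the kv sync thread was idle for 30.687 s of a 30.721 s window and flushed only 3 transactions, meaning the OSD is essentially unloaded. A minimal parser sketch for this message format:

import re

# Parse the utilization line above and report how busy the kv sync thread was.
line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
        "idle 30.686998367s of 30.720819473s, submitted: 3")

idle, window = map(float, re.search(r"idle ([\d.]+)s of ([\d.]+)s", line).groups())
txns = int(re.search(r"submitted: (\d+)", line).group(1))
print(f"busy {(1 - idle / window):.3%} of the window, {txns} txns submitted")
# -> busy 0.110% of the window, 3 txns submitted

The later occurrence below (idle 83.314 s of 83.341 s, submitted: 7) tells the same story over a longer window.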
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.407202+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.407582+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.407882+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.408314+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.408731+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.409167+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.409469+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.409888+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.410353+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.410663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.411073+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.411513+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.412080+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.412359+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.412888+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.413247+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.413658+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.414122+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:09.414640+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:10.414859+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:11.415125+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:12.415437+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:13.415792+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:14.416184+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:15.416639+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:16.417127+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:17.417638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:18.417950+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:19.418360+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:20.418628+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:21.418952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:22.419189+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:23.419603+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:24.419865+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:25.420268+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:26.420498+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:27.422327+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:28.422659+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:29.423009+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:30.423453+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:31.423742+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:32.424117+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:33.424702+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 12599296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:34.425079+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 12599296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:35.425420+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:36.425709+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:37.428477+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:38.428753+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:39.429173+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:40.429623+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:41.429878+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:42.430130+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
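The lone ceph-mon line in this stretch is the monitor's own cache autotuner doing the same kind of bookkeeping as the OSD's MempoolThread. Its three allocations sum to about 99.5% of the reported cache_size; the names below mirror the log fields, and treating them as additive slices of one budget is an assumption from those names:

# Sanity check on the monitor's _set_new_cache_sizes figures above.
inc_alloc = 348127232
full_alloc = 348127232
kv_alloc = 318767104
cache_size = 1020054731

allocated = inc_alloc + full_alloc + kv_alloc
print(f"{allocated} / {cache_size} bytes = {allocated / cache_size:.1%} of cache_size")
# -> 1015021568 / 1020054731 bytes = 99.5% of cache_size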
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:43.430446+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:44.430688+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 14057472 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:45.431040+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 14057472 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:46.431500+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:47.431894+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:48.432257+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:49.432638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:50.432860+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:51.433054+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:52.433327+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:53.433749+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:54.434059+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:55.434270+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:56.434718+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:57.435047+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:58.435292+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:59.435514+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:00.435675+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:01.435869+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:02.436055+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:03.436265+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:04.436717+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:05.437144+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:06.437556+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:07.437790+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:08.438000+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:09.438296+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:10.438715+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:11.439087+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5d400 session 0x563e2c6dd2c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29683400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:12.439304+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.439571+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 83.314064026s of 83.341011047s, submitted: 7
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.439954+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164012032 unmapped: 14032896 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:15.440501+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164012032 unmapped: 14032896 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:16.440784+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:17.441101+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:18.442050+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:19.442428+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.442693+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:21.443067+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:22.443512+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:23.443952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:24.444255+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:25.444606+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:26.444872+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:27.445113+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:28.445518+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:29.445895+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:30.446118+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:31.446532+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:32.446802+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:33.447162+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:34.447596+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:35.447935+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.448317+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.448682+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:38.448926+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:39.449350+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:40.449814+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:41.450092+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:42.450367+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:43.450795+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:44.451215+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:45.451592+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:46.451941+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.452282+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.452521+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.452820+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.453124+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.453423+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.453737+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.454083+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.454446+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.454847+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.455173+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.455638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.456017+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.456521+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.456804+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.457112+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.457490+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.457767+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.457989+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.458500+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.458863+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.459182+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.459551+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.459856+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.460163+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.460364+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.460663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.461043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.461824+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.462225+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.462460+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.462689+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.463346+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 13983744 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.463788+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.464062+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.464318+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.464733+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.465090+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.465557+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.465852+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.466233+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.467193+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.467448+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.467680+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.467848+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.468206+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.468552+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.468927+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.469250+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.469625+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.469843+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.470237+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.470547+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.471010+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.471469+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.471839+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.472262+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.472735+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.473179+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.473652+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.474055+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.474528+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.474862+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.475528+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.476067+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.476519+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.476998+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.477816+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.478744+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.479926+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.480609+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.481027+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.481909+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.482247+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.482503+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.482785+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.483043+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.483529+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.484009+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.484285+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.484546+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.484809+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.485151+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.485625+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.485886+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.486141+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.486567+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.486998+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.487508+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.487928+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.488270+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.488511+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.488722+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.489003+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.489231+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.489453+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.489639+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.489866+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.490084+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.490297+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.490642+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.490812+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.491052+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.491457+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.491807+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.492075+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.492468+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.492843+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.493228+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.493640+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.494019+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.494472+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.494804+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.495213+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.495614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.495905+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.496256+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.496508+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.496819+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.497247+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.497667+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.498046+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.498430+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.498744+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.499121+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.499585+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038334 data_alloc: 251658240 data_used: 57057280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.499924+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.500270+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 13918208 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.500663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 13918208 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 161.123291016s of 161.148391724s, submitted: 3
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.501023+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.501474+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039474 data_alloc: 251658240 data_used: 57057280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.502006+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.502303+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.503742+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.504200+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.506691+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039474 data_alloc: 251658240 data_used: 57057280
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.507286+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.508208+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.508613+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.508961+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.509251+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039794 data_alloc: 251658240 data_used: 57065472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.509943+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.510362+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.510853+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164233216 unmapped: 13811712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.511469+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.815058708s of 15.871011734s, submitted: 3
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.511920+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.512668+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.513005+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.513317+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.513623+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.514032+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.514330+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.514714+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.515184+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.515640+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.515995+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.516494+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.516982+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.517502+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.517775+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.518218+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.518711+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.519133+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.519594+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.172403336s of 19.199481964s, submitted: 3
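[annotation] _kv_sync_thread is BlueStore's RocksDB commit loop; this utilization line says it was idle 19.172403336 s of a 19.199481964 s window while flushing 3 batches, i.e. ~99.86% idle — an OSD with almost no write traffic (the later sample, idle 10.21 s of 10.24 s with 16 submits, reads the same way). The percentage is plain arithmetic:

    idle, window, submitted = 19.172403336, 19.199481964, 3
    busy = window - idle
    print(f"idle {idle / window:.2%}, busy {busy * 1000:.1f} ms "
          f"over {submitted} commits (~{busy / submitted * 1000:.1f} ms each)")
    # -> idle 99.86%, busy 27.1 ms over 3 commits (~9.0 ms each)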
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.519898+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.520174+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043082 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.520599+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.521018+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.521203+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164503552 unmapped: 13541376 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.521614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.522191+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043082 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.522506+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.522769+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.523653+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.212740898s of 10.239569664s, submitted: 16
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.523954+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.524336+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.524662+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.525032+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.525496+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.525890+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.526250+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.526913+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.527322+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.527797+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.528244+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.528717+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.529078+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.529654+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.530060+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.530538+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.530933+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.531356+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.531855+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.532365+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.532897+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.533204+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.533627+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.534053+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.534638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.535014+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.535504+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.535926+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.536545+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.537060+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.537614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.537803+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.538285+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.538528+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.538852+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.539589+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.539865+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.541041+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.541602+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.542040+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.542990+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.543480+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.543910+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.544294+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.544630+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.545038+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.545548+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.545930+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.546323+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.546941+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.547455+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.547899+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.548312+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.548671+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.548983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.549619+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.550022+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.550472+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.551080+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.551457+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.551908+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.552309+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.552681+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.554269+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.554705+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.554916+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.555078+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.555284+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.555572+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.555885+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.556259+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.561068+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.561467+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.561971+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.562187+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.562623+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.563033+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.563484+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.564019+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.564642+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.565072+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.565452+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.565762+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.566159+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.566551+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.566929+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.567366+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.567674+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.568145+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.568636+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.568994+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.569520+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.569866+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.570076+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.570474+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.570702+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.571117+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.571578+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.572029+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.572615+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.572979+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.573552+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.574061+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.574701+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.575129+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.575599+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.575983+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.576573+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.576898+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.577574+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.577961+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.578489+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.578952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.579223+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 20:23:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265059625' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.579663+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.580079+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.580443+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.580762+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.581158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.581647+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.582173+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.582638+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.582986+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.583487+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.583837+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.584199+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.584859+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.585339+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.585801+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.586661+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.587014+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.587614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.588200+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.589166+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2993 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 565 writes, 1917 keys, 565 commit groups, 1.0 writes per commit group, ingest: 2.62 MB, 0.00 MB/s
                                            Interval WAL: 565 writes, 214 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.589701+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.589952+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.590180+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.590484+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.590894+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.591345+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.591585+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.591936+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.592207+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.592622+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.592976+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.593244+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.593598+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.593931+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.594259+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.594726+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.595044+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.595538+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.595924+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.596287+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.596746+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.597008+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.597256+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.597565+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.597892+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.598462+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.598810+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.599157+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.599614+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.600006+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.600836+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.601202+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.601492+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.601780+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.602145+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.602647+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.602951+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.603364+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.603839+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.604142+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.604543+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.605068+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.605700+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.606089+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.606637+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.607065+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.607343+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.607695+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.608101+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.608581+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.608873+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.609271+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:46.609669+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.610032+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.610305+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.610710+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.611006+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.611465+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.611755+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.612119+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.612565+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.612990+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164626432 unmapped: 13418496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.613330+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.613700+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.613969+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.614351+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.614941+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.544860840s of 200.551040649s, submitted: 1
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c506400 session 0x563e2ba69680
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfafc00 session 0x563e2a818f00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5d400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.615255+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164651008 unmapped: 13393920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5d400 session 0x563e28cccf00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.615615+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859518 data_alloc: 234881024 data_used: 48300032
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.615976+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.616360+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.616791+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.617294+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.617981+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.618246+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859518 data_alloc: 234881024 data_used: 48300032
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.618748+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.619106+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.619514+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.619890+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.379122734s of 11.708549500s, submitted: 52
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb3000 session 0x563e297905a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29720c00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.620158+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29720c00 session 0x563e2c5ecd20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533530 data_alloc: 218103808 data_used: 31834112
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.620647+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.621081+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.621533+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.621871+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.622536+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533530 data_alloc: 218103808 data_used: 31834112
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.622949+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.623571+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.624540+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.624770+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5d400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.619196892s of 10.809016228s, submitted: 43
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.625288+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533426 data_alloc: 218103808 data_used: 31834112
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 31645696 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.625878+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 141 ms_handle_reset con 0x563e2bd5d400 session 0x563e2b10b2c0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 37347328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f810f000/0x0/0x4ffc00000, data 0x3075354/0x314e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfafc00
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.626170+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 45785088 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 142 ms_handle_reset con 0x563e2bfafc00 session 0x563e2a56c1e0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.626662+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb3000
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 45768704 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f7c9c000/0x0/0x4ffc00000, data 0x34e6eba/0x35bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.626917+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 143 ms_handle_reset con 0x563e2bfb3000 session 0x563e297a85a0
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140730368 unmapped: 45711360 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.627330+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470363 data_alloc: 218103808 data_used: 24506368
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 45670400 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f890b000/0x0/0x4ffc00000, data 0x2878a9b/0x2952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.628256+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 45637632 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.628621+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 45637632 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506400
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 143 ms_handle_reset con 0x563e2c506400 session 0x563e2a240d20
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.629072+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.629660+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.630308+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473497 data_alloc: 218103808 data_used: 24510464
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8908000/0x0/0x4ffc00000, data 0x287a51a/0x2955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.630950+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.631484+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.383268356s of 12.413671494s, submitted: 149
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.631966+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.632703+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.633455+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.633981+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
[... 395 lines elided: the same once-per-second ceph-osd[207106] debug cycle repeats, flushed in a single burst at 20:23:35 (monclient: tick / _check_auth_tickets / _check_auth_rotating with the rotating-key expiry advancing one second per iteration, from 2025-10-02T20:21:40 through 2025-10-02T20:23:01, plus prioritycache tune_memory with unchanged target/heap/mem values; mapped shifts 140853248 -> 140836864 near the 20:22:21 iteration). The rocksdb commit_cache_size pair (0.285714 / 0.0555556) and bluestore.MempoolThread _resize_shards line recur every ~5 iterations with identical values, and the osd.1 145 heartbeat osd_stat line recurs every ~5 iterations with identical store_statfs values and peers [0,2]. ...]
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}'
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:02.673989+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 45342720 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:03.674309+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 45998080 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:35 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:35 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:23:35 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:23:35 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:04.674543+0000)
Oct 02 20:23:35 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 46211072 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:35 compute-0 ceph-osd[207106]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:23:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 20:23:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441036819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 20:23:36 compute-0 rsyslogd[187702]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Oct 02 20:23:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:36 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:23:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 20:23:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576637372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 20:23:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138664456' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1265059625' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2441036819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: pgmap v2371: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/576637372' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/138664456' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 20:23:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 20:23:36 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061734682' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 20:23:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260691345' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 20:23:37 compute-0 crontab[482424]: (root) LIST (root)
Oct 02 20:23:37 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 20:23:37 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902216017' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15659 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4061734682' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3260691345' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3902216017' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15661 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:37 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15663 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 nova_compute[355794]: 2025-10-02 20:23:38.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:38 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15665 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:38 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15671 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: from='client.15659 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: from='client.15661 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: from='client.15663 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: from='client.15667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: from='client.15665 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:38 compute-0 ceph-mon[191910]: pgmap v2372: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 20:23:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067310041' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15675 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15679 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 20:23:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827780350' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 20:23:39 compute-0 podman[482747]: 2025-10-02 20:23:39.695886512 +0000 UTC m=+0.111694564 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 02 20:23:39 compute-0 ceph-mon[191910]: from='client.15671 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4067310041' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mon[191910]: from='client.15675 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:39 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1827780350' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15681 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 nova_compute[355794]: 2025-10-02 20:23:40.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 20:23:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447397799' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 20:23:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69176868' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: from='client.15679 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: from='client.15681 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2447397799' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: pgmap v2373: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:40 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/69176868' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 6209536 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:45.213934+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 6209536 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:46.214286+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 6209536 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:47.214621+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:48.214997+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:49.215447+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:50.215786+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:51.215980+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:52.216218+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:53.216446+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:54.216759+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:55.217017+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:56.217357+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:57.217657+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 6201344 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:58.218221+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 6184960 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:50:59.218544+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 6184960 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:00.218724+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 6184960 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:01.218961+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:02.219326+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:03.219753+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:04.219956+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:05.220150+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:06.220490+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:07.220753+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6176768 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:08.221203+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 6168576 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:09.221589+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1651904 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 6168576 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:10.221868+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 6160384 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:11.222099+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 6160384 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:12.222431+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7a00000/0x0/0x4ffc00000, data 0x3fb44a8/0x407e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 6160384 heap: 130605056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:13.222745+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4191a6800 session 0x55b418af61e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4161eb400 session 0x55b418b58b40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a000 session 0x55b416c7f0e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a400 session 0x55b4195c3a40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.896717072s of 53.916957855s, submitted: 3
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a800 session 0x55b419679a40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 13565952 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4191a6c00 session 0x55b417ad0000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4191a6c00 session 0x55b41958cf00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:14.222956+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4161eb400 session 0x55b41958c3c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a000 session 0x55b41a20e5a0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7088000/0x0/0x4ffc00000, data 0x492b4b8/0x49f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1728934 data_alloc: 251658240 data_used: 34791424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124764160 unmapped: 13737984 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a400 session 0x55b41a20e3c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:15.223473+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124764160 unmapped: 13737984 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:16.223768+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124764160 unmapped: 13737984 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:17.224093+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124764160 unmapped: 13737984 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:18.224332+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a800 session 0x55b41958c3c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124788736 unmapped: 13713408 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:19.226571+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1733470 data_alloc: 251658240 data_used: 34795520
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 13672448 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:20.226785+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 13672448 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:21.227122+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 13672448 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:22.227597+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 11862016 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:23.228051+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 7028736 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:24.228277+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133054464 unmapped: 5447680 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:25.228735+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.274334908s of 11.510054588s, submitted: 41
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,1])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 5251072 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:26.228994+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 5185536 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:27.229422+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132849664 unmapped: 5652480 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:28.229910+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:29.230096+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:30.230469+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:31.230679+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:32.230942+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:33.231189+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:34.231625+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 5619712 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:35.231887+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:36.232295+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:37.232665+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:38.233115+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:39.233603+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:40.233909+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:41.234527+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 5603328 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:42.234954+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:43.235317+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:44.235534+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:45.235898+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:46.236267+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:47.236532+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:48.236742+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:49.237044+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 5595136 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796802 data_alloc: 251658240 data_used: 43581440
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:50.237296+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:51.237623+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:52.237929+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:53.238254+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.499050140s of 28.276956558s, submitted: 106
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:54.238492+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7083000/0x0/0x4ffc00000, data 0x492f51a/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1797458 data_alloc: 251658240 data_used: 43593728
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:55.238915+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132915200 unmapped: 5586944 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:56.239543+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133038080 unmapped: 5464064 heap: 138502144 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:57.239993+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 3047424 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:58.240363+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136994816 unmapped: 3489792 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:51:59.240840+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137216000 unmapped: 3268608 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e051a/0x57ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913032 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:00.241222+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 3227648 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:01.241658+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 3227648 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:02.242109+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 3227648 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:03.242560+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137256960 unmapped: 3227648 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:04.243129+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.345760345s of 10.730489731s, submitted: 104
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137306112 unmapped: 3178496 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913296 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:05.243634+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137314304 unmapped: 3170304 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:06.243840+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137314304 unmapped: 3170304 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:07.244223+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 3162112 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:08.244565+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 3162112 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:09.245002+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 3162112 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913296 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:10.245230+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 3162112 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:11.245567+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3153920 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:12.245823+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3153920 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:13.246066+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3153920 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:14.246258+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 3153920 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913296 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:15.246643+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:16.246977+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:17.247340+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:18.248055+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:19.248493+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913296 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:20.248953+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 3129344 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:21.250021+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137363456 unmapped: 3121152 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:22.252546+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137363456 unmapped: 3121152 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:23.252831+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:24.253162+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1913296 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:25.254084+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:26.254581+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:27.254937+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d0000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:28.255590+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 3112960 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:29.256017+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 3104768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.326778412s of 25.336004257s, submitted: 1
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:30.256531+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:31.256962+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:32.257582+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:33.257878+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:34.258518+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:35.258928+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:36.259294+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:37.259710+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 4128768 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:38.260082+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:39.260352+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:40.260843+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:41.261224+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:42.261734+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:43.262079+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:44.262534+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:45.262959+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:46.263478+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 4120576 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:47.263827+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:48.264321+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:49.264681+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:50.265166+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:51.265648+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:52.266078+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:53.266546+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 4112384 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:54.266789+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:55.267101+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:56.267705+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:57.268157+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:58.268600+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:52:59.268966+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:00.269823+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 4104192 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:01.270042+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:02.270747+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:03.271197+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:04.271597+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:05.271919+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:06.272149+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:07.272573+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:08.272856+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:09.273264+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 4096000 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:10.273714+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 4087808 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:11.274167+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 4087808 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:12.274665+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 4087808 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:13.275093+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 4087808 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:14.275588+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 4079616 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:15.276040+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 4079616 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:16.276366+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 4071424 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:17.276667+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 4071424 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:18.277068+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:19.277598+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:20.277928+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:21.278245+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:22.278550+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:23.279080+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:24.279419+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:25.280311+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:26.280848+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:27.281074+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:28.281328+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:29.281658+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:30.282174+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:31.282586+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:32.282992+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 4063232 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:33.283301+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:34.283573+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:35.283916+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:36.284513+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:37.284806+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:38.285227+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 4055040 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:39.285517+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 4046848 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:40.285952+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 4046848 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:41.286572+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 4046848 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:42.287040+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 4038656 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:43.287342+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 4038656 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:44.287695+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 4038656 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:45.288251+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:46.288631+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:47.288953+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:48.289263+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:49.289529+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:50.289717+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:51.289985+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:52.290294+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:53.290621+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:54.290927+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:55.291280+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 4030464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:56.291553+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 4022272 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:57.292050+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 4022272 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:58.292428+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:53:59.292861+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:00.293293+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:01.293584+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:02.294044+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:03.294458+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:04.294893+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:05.295132+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:06.295669+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 4014080 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:07.296132+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:08.296607+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:09.296891+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:10.297342+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:11.297975+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:12.298586+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:13.298894+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:14.299256+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:15.299645+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:16.299938+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:17.300323+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:18.300714+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:19.301072+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:20.301467+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:21.301855+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 4005888 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:22.302213+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 3997696 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:23.302591+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:24.302833+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:25.303232+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:26.303587+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:27.303965+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:28.304300+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:29.304695+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:30.305107+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 3989504 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:31.305495+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 3981312 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:32.305935+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 3981312 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:33.306423+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 3981312 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:34.306805+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 3948544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:35.307154+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 3948544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:36.307638+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 3948544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:37.307997+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 3948544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:38.308270+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 3948544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:39.308672+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:40.309044+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:41.309268+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:42.310048+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:43.310644+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:44.311143+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:45.311546+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:46.311936+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 3932160 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets getting new tickets!
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:47.312585+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _finish_auth 0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:47.314872+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:48.312962+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:49.313345+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:50.313703+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:51.313955+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:52.314661+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:53.314989+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 3899392 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:54.315518+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 3891200 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:55.315837+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 3883008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903952 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:56.316263+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 3883008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:57.316559+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 3883008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:58.317050+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 3883008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f62d1000/0x0/0x4ffc00000, data 0x56e151a/0x57ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:54:59.317568+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136617984 unmapped: 3866624 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 150.322708130s of 150.332305908s, submitted: 2
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4196a8400 session 0x55b416bc03c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b417c8ec00 session 0x55b41950a1e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b419675400 session 0x55b4196781e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:00.317895+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 3883008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1903232 data_alloc: 251658240 data_used: 43700224
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:01.318242+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b419674000 session 0x55b41961ab40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9a400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 9650176 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:02.318583+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a000 session 0x55b417aefe00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:03.318985+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:04.319546+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:05.319904+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:06.320136+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:07.320573+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:08.320954+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:09.321342+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:10.321660+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:11.322031+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:12.322526+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:13.322941+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:14.323362+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:15.323860+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:16.324218+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:17.324840+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 9625600 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:18.325314+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 10027008 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:19.325636+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 10010624 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:20.325975+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 10010624 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:21.326260+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:22.326637+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:23.327048+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:24.327514+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:25.327764+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:26.328134+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:27.328627+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:28.328981+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:29.329291+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:30.329620+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:31.329937+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:32.330544+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:33.331057+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:34.331497+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:35.331879+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:36.332248+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:37.332637+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:38.332899+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 10002432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:39.333220+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:40.333588+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:41.334040+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:42.334636+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:43.335014+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:44.335349+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:45.335711+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:46.336160+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:47.336574+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:48.336953+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:49.337136+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:50.337736+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:51.338035+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:52.338526+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:53.338753+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:54.339025+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:55.339439+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:56.339814+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:57.340136+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 9986048 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:58.340636+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:55:59.341091+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:00.341596+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:01.341989+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:02.342574+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:03.342996+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:04.343271+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:05.343511+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:06.343796+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:07.344119+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:08.344476+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:09.344870+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:10.345240+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:11.345540+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:12.345995+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:13.346356+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:14.346826+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:15.347296+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:16.347659+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 251658240 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:17.348097+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b419674400 session 0x55b41961ad20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c8ec00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 9969664 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:18.348598+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:19.348872+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:20.349102+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:21.349618+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:22.350080+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:23.350536+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:24.350944+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:25.351341+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:26.351636+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:27.351948+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:28.352322+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:29.352672+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:30.353052+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:31.353533+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:32.353975+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:33.354436+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:34.354836+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:35.355299+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:36.355687+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:37.356159+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:38.356566+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 9953280 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:39.356932+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:40.357745+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:41.358062+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:42.358881+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:43.359290+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:44.359747+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:45.360154+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:46.360620+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:47.360995+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:48.361210+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:49.361598+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:50.362043+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:51.362353+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:52.362858+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:53.363242+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:54.363510+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:55.363927+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:56.364351+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711883 data_alloc: 234881024 data_used: 34754560
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:57.364773+0000)
Oct 02 20:23:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 20:23:40 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:58.364971+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 9936896 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:56:59.365178+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 9920512 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f7043000/0x0/0x4ffc00000, data 0x46b8446/0x4781000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:00.365441+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 9920512 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.138450623s of 121.031127930s, submitted: 68
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9ac00 session 0x55b4195efc20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9b000 session 0x55b418b29c20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9b400 session 0x55b417cb0f00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:01.365757+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419675400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 9912320 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8339000/0x0/0x4ffc00000, data 0x367c446/0x3745000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,1])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521493 data_alloc: 218103808 data_used: 25468928
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:02.366085+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 123142144 unmapped: 17342464 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b419675400 session 0x55b418b58b40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:03.366334+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:04.366552+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:05.367268+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:06.367528+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509125 data_alloc: 218103808 data_used: 25251840
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:07.367921+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:08.368340+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:09.368738+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:10.368982+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:11.369273+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509125 data_alloc: 218103808 data_used: 25251840
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:12.369689+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:13.370186+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:14.370623+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:15.371005+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:16.371288+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509125 data_alloc: 218103808 data_used: 25251840
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:17.371726+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:18.372121+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:19.372535+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:20.372879+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:21.373226+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509125 data_alloc: 218103808 data_used: 25251840
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:22.373695+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:23.374082+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:24.374533+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b41ad9a800 session 0x55b416c7f0e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.408994675s of 24.010629654s, submitted: 74
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4161eb400 session 0x55b417ad12c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b4191a6c00 session 0x55b41a20ed20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:25.374827+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419675400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x36403c4/0x3706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 122691584 unmapped: 17793024 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:26.375216+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 ms_handle_reset con 0x55b419675400 session 0x55b416ea8000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa2000/0x0/0x4ffc00000, data 0x1f173b4/0x1fdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:27.375511+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:28.375894+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:29.376262+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:30.376609+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:31.377031+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:32.377348+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:33.377746+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:34.378068+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:35.378433+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:36.378819+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:37.379185+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:38.379606+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:39.379942+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:40.380300+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 24428544 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:41.380643+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:42.381034+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:43.381341+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:44.381648+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:45.382090+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:46.382547+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:47.382856+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:48.383262+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:49.383728+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:50.383929+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:51.384230+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
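The two High Pri Pool Ratio lines always appear as a pair immediately before a MempoolThread rebalance. The printed values are exact decimal expansions of small fractions, 2/7 and 1/18, which suggests they are recomputed from integer cache-allotment ratios on each pass; that reading is an inference from the values alone, not verified against the rocksdb/BlueStore code. Checking with Python's Fraction:

    from fractions import Fraction

    for ratio in (0.285714, 0.0555556):
        approx = Fraction(ratio).limit_denominator(100)
        print(f"{ratio} ~ {approx}")   # 2/7 and 1/18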
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:52.384747+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:53.385163+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:54.385693+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:55.386073+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:56.386543+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:57.387038+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:58.387557+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:57:59.387956+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:00.388358+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:01.388879+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 24420352 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:02.389231+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270981 data_alloc: 218103808 data_used: 16343040
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9ac00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.506340027s of 37.712886810s, submitted: 40
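The _kv_sync_thread utilization line is a duty-cycle report for BlueStore's key-value commit thread: over the last ~37.7 s window it was idle ~37.5 s and submitted 40 transaction batches. Worked out (Python), that is well under 1% busy at roughly one submit per second, consistent with an OSD seeing only background traffic:

    idle, window, submitted = 37.506340027, 37.712886810, 40
    busy = window - idle
    print(f"busy {busy:.3f} s of {window:.1f} s ({busy / window:.2%})")  # ~0.55%
    print(f"{submitted / window:.2f} submits/s, "
          f"{busy / submitted * 1e3:.1f} ms busy per submit")            # ~5.2 ms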
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 24403968 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:03.389721+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 24403968 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:04.390196+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 24387584 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:05.390593+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 24387584 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:06.390945+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 24371200 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
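_renew_subs re-sends the client's map subscriptions to its monitor, and the destination is printed in Ceph's entity_addr_t form, protocol:ip:port/nonce. Port 3300 is the standard msgr2 monitor port (6789 would be legacy msgr1), and the trailing /0 is the connection nonce. A small parser for that form (Python):

    import re

    addr = "v2:192.168.122.100:3300/0"
    proto, ip, port, nonce = re.fullmatch(
        r"(v\d):([\d.]+):(\d+)/(\d+)", addr).groups()
    print(proto, ip, port, nonce)   # msgr2, standard mon v2 port, nonce 0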
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:07.391581+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
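handle_osd_map is the OSD consuming a new OSDMap: the bracketed range is the epochs carried in this message, "i have 128" is its current epoch, and "src has [1,129]" is the sender's stored range; the very next heartbeat logs as "osd.0 129", confirming the map was applied. A hypothetical sketch of the catch-up rule these lines reflect, simplified from what the log shows rather than transcribed from Ceph (Python):

    # Hypothetical, simplified catch-up rule; not transcribed from Ceph.
    def apply_osd_maps(have: int, first: int, last: int) -> int:
        if last <= have:
            return have                   # nothing newer than what we hold
        if first > have + 1:
            raise RuntimeError("gap: must request older maps first")
        for epoch in range(max(first, have + 1), last + 1):
            have = epoch                  # decode and apply incremental map
        return have

    print(apply_osd_maps(128, 129, 129))  # -> 129, as the next heartbeat shows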
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f9aa7000/0x0/0x4ffc00000, data 0x1f13352/0x1fd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278516 data_alloc: 218103808 data_used: 16351232
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9ac00 session 0x55b4195ee000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:08.392043+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:09.392544+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:10.392843+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:11.393149+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:12.393613+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278292 data_alloc: 218103808 data_used: 16351232
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f9aa2000/0x0/0x4ffc00000, data 0x1f14ef2/0x1fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:13.394179+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:14.394551+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:15.394883+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9b000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9b000 session 0x55b4196081e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4161eb400 session 0x55b419608000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4191a6c00 session 0x55b418afcb40
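The "added challenge" / ms_handle_reset pairs above share a connection pointer: an inbound peer authenticates (the messenger issues it a cephx challenge) and the previous session on that connection is reset and replaced. Who the peers are is not visible here; the heartbeat partners osd.1 and osd.2 are the likely candidates. Pairing the events by pointer (Python), noting that on reset lines the first pointer is the con and the second the session:

    import re
    from collections import defaultdict

    lines = [
        "monclient: handle_auth_request added challenge on 0x55b41ad9b000",
        "osd.0 129 ms_handle_reset con 0x55b41ad9b000 session 0x55b4196081e0",
        "monclient: handle_auth_request added challenge on 0x55b4161eb400",
        "osd.0 129 ms_handle_reset con 0x55b4161eb400 session 0x55b419608000",
    ]
    by_con = defaultdict(list)
    for ln in lines:
        con = re.search(r"0x[0-9a-f]+", ln).group(0)   # first pointer = the con
        by_con[con].append("challenge" if "challenge" in ln else "reset")
    print(dict(by_con))   # each connection: ['challenge', 'reset']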
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:16.395155+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419675400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.177846909s of 14.303787231s, submitted: 13
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b419675400 session 0x55b418b58780
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:17.395461+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9ac00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277528 data_alloc: 218103808 data_used: 16355328
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9ac00 session 0x55b417c965a0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 24338432 heap: 140484608 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:18.395775+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9b400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9b400 session 0x55b419678d20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f9693000/0x0/0x4ffc00000, data 0x1f14ef2/0x1fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9b400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9b400 session 0x55b418b58b40
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4161eb400 session 0x55b41961a3c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4191a6c00 session 0x55b41961af00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 34054144 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419675400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b419675400 session 0x55b41961b2c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9ac00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9ac00 session 0x55b417aefe00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:19.396020+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9ac00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9ac00 session 0x55b416bc0960
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4161eb400 session 0x55b41a20e960
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 34447360 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4191a6c00 session 0x55b41a20e5a0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419675400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b419675400 session 0x55b4196081e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:20.396538+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 34447360 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:21.396928+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 34447360 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:22.397242+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346259 data_alloc: 218103808 data_used: 16355328
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 34447360 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:23.397582+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41ad9b400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41ad9b400 session 0x55b416ea8000
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 34447360 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:24.397835+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
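Between heartbeats the "data" field jumps from 0x1f14ef2 to 0x26dbf77 bytes stored, so this OSD absorbed roughly 8 MiB of new object data during the burst, and the available figure drops in step. The delta (Python):

    before, after = 0x1f14ef2, 0x26dbf77   # "data ... stored" from two heartbeats
    delta = after - before
    print(f"{delta} bytes, ~{delta / 2**20:.1f} MiB written")   # ~7.8 MiB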
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 34439168 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:25.398093+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 34439168 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:26.398350+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 34430976 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:27.398815+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356848 data_alloc: 218103808 data_used: 17625088
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 34136064 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:28.399153+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:29.399523+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:30.399770+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:31.400845+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:32.401491+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403568 data_alloc: 218103808 data_used: 24231936
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:33.402138+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:34.403320+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:35.403537+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:36.403944+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:37.404304+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403568 data_alloc: 218103808 data_used: 24231936
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:38.404688+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:39.405119+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:40.405606+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:41.409878+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:42.410349+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403568 data_alloc: 218103808 data_used: 24231936
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:43.411065+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:44.411460+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:45.411984+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 33144832 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.607160568s of 28.915983200s, submitted: 49
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4161eb400 session 0x55b419679c20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b4191a6c00 session 0x55b418a483c0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41965f400
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:46.412456+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 33128448 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f8eca000/0x0/0x4ffc00000, data 0x26dbf77/0x27a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:47.412858+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 35995648 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 ms_handle_reset con 0x55b41965f400 session 0x55b418b29680
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287040 data_alloc: 218103808 data_used: 16355328
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:48.413319+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35938304 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:49.413559+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35938304 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f9692000/0x0/0x4ffc00000, data 0x1f14ef2/0x1fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:50.413757+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 heartbeat osd_stat(store_statfs(0x4f9692000/0x0/0x4ffc00000, data 0x1f14ef2/0x1fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 35921920 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:51.414174+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115097600 unmapped: 35889152 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 130 ms_handle_reset con 0x55b4196a9c00 session 0x55b41a20f0e0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:52.414667+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291018 data_alloc: 218103808 data_used: 16359424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:53.415029+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:54.415367+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:55.415870+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 130 heartbeat osd_stat(store_statfs(0x4f9690000/0x0/0x4ffc00000, data 0x1f16aa0/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:56.416232+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 130 heartbeat osd_stat(store_statfs(0x4f9690000/0x0/0x4ffc00000, data 0x1f16aa0/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:57.416622+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291018 data_alloc: 218103808 data_used: 16359424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:58.417004+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:58:59.417366+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.768211365s of 14.420727730s, submitted: 93
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:00.417770+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 131 heartbeat osd_stat(store_statfs(0x4f9690000/0x0/0x4ffc00000, data 0x1f16aa0/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:01.423128+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:02.423624+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293992 data_alloc: 218103808 data_used: 16359424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:03.423997+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:04.424558+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 35872768 heap: 150986752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.425017+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
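Across this section the reported heap grows in two steps, 140484608 to 150986752 to 159383552 bytes, while mapped stays near 115-118 MB and unmapped grows: tcmalloc extended the heap under short bursts and the freed pages remain unreleased, which is why tune_memory's "old mem"/"new mem" budget never moves. The step sizes (Python):

    heaps = [140484608, 150986752, 159383552]   # heap: values seen in this section
    for a, b in zip(heaps, heaps[1:]):
        print(f"+{(b - a) / 2**20:.1f} MiB")    # +10.0 MiB, then +8.0 MiB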
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 131 heartbeat osd_stat(store_statfs(0x4f8e8e000/0x0/0x4ffc00000, data 0x2718503/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.425492+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 131 heartbeat osd_stat(store_statfs(0x4f8e8e000/0x0/0x4ffc00000, data 0x2718503/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.425670+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 131 ms_handle_reset con 0x55b417715800 session 0x55b41a20fe00
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347960 data_alloc: 218103808 data_used: 16359424
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.426035+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.426469+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 132 heartbeat osd_stat(store_statfs(0x4f8e8a000/0x0/0x4ffc00000, data 0x271a080/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.426754+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.427194+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.427672+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352134 data_alloc: 218103808 data_used: 16367616
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.428073+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 132 heartbeat osd_stat(store_statfs(0x4f8e8a000/0x0/0x4ffc00000, data 0x271a080/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.428564+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.428964+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.089331627s of 15.191314697s, submitted: 17
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 133 ms_handle_reset con 0x55b417715800 session 0x55b41cb0cd20
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.429368+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.429835+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301532 data_alloc: 218103808 data_used: 16367616
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.430239+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.430653+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9687000/0x0/0x4ffc00000, data 0x1f1bc51/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.431050+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.431521+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.431904+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301852 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.432439+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9687000/0x0/0x4ffc00000, data 0x1f1bc51/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.433102+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.433589+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.738893509s of 10.827541351s, submitted: 16
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.433901+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.434343+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.434762+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.435154+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.436016+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.437236+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.437615+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.438017+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.438491+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.438848+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.439674+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.440071+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.440527+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.440909+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.441489+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.442010+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.442549+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:40 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.442966+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.443269+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.443671+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.444037+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8068 writes, 31K keys, 8068 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8068 writes, 1863 syncs, 4.33 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1254 writes, 4156 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 3.24 MB, 0.01 MB/s
                                            Interval WAL: 1254 writes, 527 syncs, 2.38 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:40 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.444282+0000)
Oct 02 20:23:40 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:40 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.444698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.445311+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.445659+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.446559+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.447062+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.447603+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.447963+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.448319+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.448616+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.449199+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.449590+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.450451+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.450871+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.451168+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.451621+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.451994+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.452229+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.453967+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.454466+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.454984+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.455575+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.455922+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.456235+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.456809+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.457546+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.457896+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.458203+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.458652+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.458982+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.459546+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.459918+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.460707+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.460921+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.461307+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.461755+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.462114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.462447+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.462795+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.463066+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.463551+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.464100+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.464518+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.464896+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.465634+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.466111+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.466637+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.467011+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.467493+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.467784+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.468192+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.468624+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.469074+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.469346+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.469706+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.470125+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.470520+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.470880+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.471289+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.471669+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.471942+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.472291+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.472676+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.473145+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.473620+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.474097+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.474850+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.475107+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.475552+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.475963+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.477015+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.477681+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.478902+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.479296+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.479773+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.480583+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.481085+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.481962+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.482562+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.483077+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.483672+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.484178+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.484614+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.485098+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.485648+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.486248+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.486708+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.487226+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.487633+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.488059+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.488490+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.488830+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.489193+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.489668+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.490149+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.490807+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.491159+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.491645+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.492120+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.251411438s of 119.269851685s, submitted: 9
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114614272 unmapped: 44769280 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.492916+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114647040 unmapped: 44736512 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.493290+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 44703744 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.493625+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.494084+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.494639+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.494947+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.495569+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.495882+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.496331+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.496771+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.497175+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.497688+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.498090+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.498665+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.499104+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:41.499434+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:42.499928+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:43.500332+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:44.500835+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:45.501187+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:46.501610+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:47.502040+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:48.502788+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:49.503037+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:50.503366+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:51.503726+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:52.504111+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:53.504533+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:54.504898+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:55.505177+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:56.505515+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:57.505911+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:58.506683+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:59.508357+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:00.509525+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:01.509815+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:02.510670+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:03.511891+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:04.512303+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:05.512667+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:06.512993+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:07.513620+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:08.513934+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:09.514465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:10.514898+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:11.515353+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:12.515972+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:13.516297+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:14.516831+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:15.517205+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:16.517649+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:17.518114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:18.518640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:19.519137+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:20.519463+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:21.519882+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:22.520294+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:23.520694+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:24.521129+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:25.521548+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:26.521955+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:27.522348+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:28.522850+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:29.523213+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:30.523598+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:31.524053+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:32.524540+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:33.525312+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:34.525740+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:35.526266+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:36.526778+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:37.527161+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:38.527643+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:39.528183+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:40.528617+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:41.529005+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:42.529639+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:43.530020+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:44.530489+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:45.530816+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:46.531240+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:47.531617+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:48.531949+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:49.532310+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:50.532675+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:51.533062+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:52.533609+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.533943+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.534295+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.534649+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:56.535064+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:57.535445+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:58.535823+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:59.536445+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:00.536848+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:01.537076+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:02.537641+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:03.538091+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:04.538501+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:05.539535+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:06.539922+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:07.540245+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.540744+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.541117+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.541661+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:11.542166+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:12.542698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.543079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.543585+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.543991+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.544450+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.544823+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.545363+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.545806+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.546216+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.546611+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.547072+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.547528+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.547875+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.548189+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.548576+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.549101+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.549507+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.549960+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.550473+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.550887+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.551339+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.551685+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.552293+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.552680+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.553079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.553635+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.555087+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.555547+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.556118+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.556591+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.557008+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.558180+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.559269+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.559680+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.561003+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.561686+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.562810+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.563087+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.563528+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.563943+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.564505+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.564921+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.565289+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.565764+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.566219+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.566677+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.566955+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.567326+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.567627+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.568152+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.568652+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.569068+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.569527+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.569970+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.570279+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.570700+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.571136+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.571476+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.571883+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.572282+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
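handle_osd_map lines carry three ranges: the epochs delivered in this message, the OSD's current epoch, and the full span the sender holds. The sketch below pulls them apart to measure map lag; the regex simply mirrors the message text and is not a Ceph interface.

import re

# Sketch: measure how far the OSD trails the monitor's newest map.
MAP_RE = re.compile(
    r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]"
)

line = "osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]"
first, last, have, src_lo, src_hi = map(int, MAP_RE.search(line).groups())
print(f"carrying {last - first + 1} map(s); lag before apply: {src_hi - have} epoch(s)")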
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 165.190155029s of 165.937301636s, submitted: 106
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 44589056 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.572740+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9681000/0x0/0x4ffc00000, data 0x1f1f231/0x1fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 44556288 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.573136+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
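The monitor address in _send_mon_message uses Ceph's entity-address form, protocol:ip:port/nonce; 3300 is the default msgr2 (v2) monitor port, as opposed to 6789 for legacy v1. A splitting sketch, with the regex again being this annotation's own:

import re

# Sketch: split "v2:192.168.122.100:3300/0" into protocol, host, port, nonce.
ADDR_RE = re.compile(r"(v[12]):([\d.]+):(\d+)/(\d+)")

proto, host, port, nonce = ADDR_RE.search(
    "monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0"
).groups()
print(proto, host, int(port), int(nonce))   # v2 192.168.122.100 3300 0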
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 ms_handle_reset con 0x55b4161eb400 session 0x55b4195c2d20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.573639+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.574034+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.574480+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.574995+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.575329+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.575816+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.576070+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.576870+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.577704+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.578123+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.578562+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.579018+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.579629+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.579876+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.580291+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.580519+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.580900+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.581289+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.581864+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.582283+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 44531712 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.582623+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.583074+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.583522+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.583891+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.584244+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.584545+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.584894+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.585179+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.585631+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.585944+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.586272+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.586523+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.586858+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.587235+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.587580+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.587935+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.588158+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.588609+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.589049+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.589468+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.589880+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114892800 unmapped: 44490752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.590268+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.528549194s of 44.705825806s, submitted: 22
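Each _kv_sync_thread utilization line summarizes a measurement window for BlueStore's RocksDB sync thread: idle seconds, window length, and transactions submitted. Recomputing from the line above (numbers copied verbatim), the thread was idle about 99.6% of the window:

# Sketch: turn the _kv_sync_thread utilization line into an idle percentage.
idle, window, submitted = 44.528549194, 44.705825806, 22
busy = window - idle
print(f"idle {idle / window:.2%}; {submitted} txns in {busy:.3f}s busy "
      f"({submitted / busy:.0f} txn/s while active)")   # ~99.60% idle

The earlier window at this epoch (idle 165.19 s of 165.94 s, 106 submitted) works out the same way, so the KV sync thread is essentially quiescent throughout this capture.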
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 43466752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.590623+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b4191a6c00 session 0x55b41958c1e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41965f400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 43466752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b41965f400 session 0x55b419509680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.591030+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b4196a9c00 session 0x55b419678d20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 43622400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.591258+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 138 ms_handle_reset con 0x55b4196a9c00 session 0x55b419679860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.591721+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.592073+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324238 data_alloc: 218103808 data_used: 16392192
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.592564+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.592869+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.593353+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.593763+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 43565056 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.594098+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325268 data_alloc: 218103808 data_used: 16392192
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41cb0c000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b4195921e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b41cb0c1e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41965f400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41965f400 session 0x55b417accd20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.594470+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b416f3eb40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b419679680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.594837+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.595232+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.595659+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b4195085a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f25f82/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4196a9c00 session 0x55b4176ea960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.985056877s of 13.554063797s, submitted: 80
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b41c9754a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b41cb0c960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b418afd680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4195083c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b41c974f00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 46579712 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.595916+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4196a9c00 session 0x55b41c974780
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b41961b4a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516733 data_alloc: 218103808 data_used: 16392192
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f25f82/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b416ef4d20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4195ef2c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41958c780
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.596230+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.596730+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.597224+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e41000/0x0/0x4ffc00000, data 0x3757ff3/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.597570+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b418b294a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.597941+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516617 data_alloc: 218103808 data_used: 16392192
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4196083c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b41779c960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.598323+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41779de00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e41000/0x0/0x4ffc00000, data 0x3757ff3/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b418b4d680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416d42f00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.598629+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 47177728 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b4195c2960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b419678b40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.598839+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 47169536 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.599097+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 47161344 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.599324+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 47161344 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523859 data_alloc: 218103808 data_used: 16404480
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.599628+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 47153152 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.599894+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 42033152 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.600320+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 36945920 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.600530+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.600753+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.601001+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.601235+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.605324+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.605858+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.606023+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.606239+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.606507+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.606720+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.606957+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.607155+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.607331+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.607502+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.607695+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.608077+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.608651+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.608853+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.609112+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.609313+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.609522+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.609715+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.609929+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.610135+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.610428+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.610772+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.611084+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.761749268s of 41.352859497s, submitted: 93
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b418b4c5a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417accd20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.611466+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17400 session 0x55b41958d860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418b4cb40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 34406400 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416bb9a40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f76a9000/0x0/0x4ffc00000, data 0x3eed049/0x3fc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.611952+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 27148288 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.612162+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 27148288 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b416ef5e00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417c96f00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17800 session 0x55b4195090e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41958c000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.612496+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416a62780
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b419593a40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b418b4cf00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 24731648 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17800 session 0x55b4195921e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41a20f0e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ef1000/0x0/0x4ffc00000, data 0x56a4059/0x577d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.612687+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 21872640 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1977734 data_alloc: 234881024 data_used: 37883904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.612970+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141303808 unmapped: 21757952 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.613407+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 21716992 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.613640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e54000/0x0/0x4ffc00000, data 0x5741059/0x581a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 21716992 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e54000/0x0/0x4ffc00000, data 0x5741059/0x581a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b418b4d680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.613844+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141762560 unmapped: 21299200 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.614143+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1985883 data_alloc: 234881024 data_used: 38105088
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.614406+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41cb0c5a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b417cafc20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.614655+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418b17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418b17000 session 0x55b418af72c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.103260994s of 12.145826340s, submitted: 212
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418a6f0e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.614936+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 143540224 unmapped: 19521536 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.615102+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 16056320 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5de1000/0x0/0x4ffc00000, data 0x57b307c/0x588d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.615304+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147046400 unmapped: 16015360 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2051411 data_alloc: 251658240 data_used: 44470272
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.615543+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148275200 unmapped: 14786560 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5de1000/0x0/0x4ffc00000, data 0x57b307c/0x588d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.615788+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149921792 unmapped: 13139968 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.615964+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 156483584 unmapped: 6578176 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.616218+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b418afc5a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b41a3a81e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 5251072 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.616437+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b417acbe00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959003 data_alloc: 251658240 data_used: 44462080
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.616647+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.616977+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.617341+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.617727+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.932250023s of 12.194139481s, submitted: 54
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.617975+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.618346+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.618757+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.619123+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.619536+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.619925+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.620261+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.620560+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:23.620840+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.621127+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978276253s of 10.003231049s, submitted: 3
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.621415+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 12328960 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.621792+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 12328960 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.622127+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 12288000 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.622450+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150781952 unmapped: 12279808 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.622767+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150781952 unmapped: 12279808 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.623094+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 12271616 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959719 data_alloc: 251658240 data_used: 44470272
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.623466+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 12271616 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.623846+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.624139+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.624464+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6957000/0x0/0x4ffc00000, data 0x4c3f049/0x4d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.624753+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1960287 data_alloc: 251658240 data_used: 44482560
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.874515533s of 10.945261955s, submitted: 10
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b419609680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b418b58b40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.625080+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151461888 unmapped: 11599872 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418b4d4a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.625290+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 11640832 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.627638+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 11640832 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.627870+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150274048 unmapped: 12787712 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.628058+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 12779520 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2016787 data_alloc: 251658240 data_used: 45211648
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.628238+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 12779520 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.628515+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.628756+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.629044+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b416bb83c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b4195ef4a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.629239+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b417caef00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.629473+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.629773+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.630114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.630522+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.630879+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f71c3000/0x0/0x4ffc00000, data 0x43d4016/0x44aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.631286+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.631580+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.631782+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.632185+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f71c3000/0x0/0x4ffc00000, data 0x43d4016/0x44aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.632594+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.632863+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.567926407s of 21.020404816s, submitted: 99
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.633267+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.633665+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.633951+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41950a3c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f665a000/0x0/0x4ffc00000, data 0x4f3e016/0x5014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.634201+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1919852 data_alloc: 234881024 data_used: 38776832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.634458+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.634834+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41a3a92c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b418afdc20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41961b680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b419592960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.635277+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b418b290e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41958cb40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b418b583c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b41cb0d860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b41cb0c960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.635654+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b41961b680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e5c000/0x0/0x4ffc00000, data 0x573a088/0x5812000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b41961a960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.635881+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41961a5a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1996427 data_alloc: 234881024 data_used: 38735872
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b417acd680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.636061+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.636445+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.636699+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b417caeb40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b416f3e3c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.636926+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e31000/0x0/0x4ffc00000, data 0x5764098/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.122673988s of 12.545958519s, submitted: 72
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 144531456 unmapped: 35897344 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.637253+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41a20fa40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b41a20e3c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 144547840 unmapped: 35880960 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2020628 data_alloc: 234881024 data_used: 41746432
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.637450+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 146472960 unmapped: 33955840 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.637676+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 32260096 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2e000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.638346+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 32227328 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.638547+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 32227328 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.638745+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 32071680 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2082172 data_alloc: 251658240 data_used: 50143232
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.638935+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 31399936 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.639146+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 31006720 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.639367+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 27656192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.639604+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154820608 unmapped: 25608192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.639859+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154820608 unmapped: 25608192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.640163+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.759449005s of 11.826215744s, submitted: 11
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154836992 unmapped: 25591808 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.640568+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154836992 unmapped: 25591808 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.640761+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.640999+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.641235+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.641469+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.641698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.641922+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.642275+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.642563+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.642784+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.643086+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.643508+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.643794+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.154161453s of 13.161978722s, submitted: 1
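
Note: bluestore's kv sync thread periodically reports how much of the last window it spent idle and how many transactions it committed. All six reports in this section show a thread that is more than 92% idle. The arithmetic, with a rough busy-time-per-commit ratio (a ratio, not a latency measurement):

    samples = [  # (idle_s, window_s, submitted), copied from the six reports here
        (13.154161453, 13.161978722, 1),
        (10.382223129, 10.757692337, 94),
        (9.284941673, 10.033411026, 154),
        (15.170475960, 15.876229286, 80),
        (9.873231888, 10.174868584, 44),
        (10.741597176, 11.463397026, 94),
    ]
    for idle, window, n in samples:
        busy = window - idle
        print(f"busy {busy / window:6.2%}, {n:3d} commits, "
              f"~{busy / n * 1000:5.1f} ms busy per commit")
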
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5800 session 0x55b416ef5860
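
Note: a handle_auth_request line means an incoming connection presented cephx credentials and was issued a challenge; the matching ms_handle_reset means the peer then reset the connection and the OSD dropped the associated session object. Occasional pairs like these are normal connection churn from monitoring and peer probes. For triage the interesting signal is volume, e.g. whether one endpoint is reconnect-looping; a small counter over the journal text on stdin (script name and invocation are illustrative):

    import collections
    import re
    import sys

    # Usage (illustrative): journalctl -u <osd unit> | python3 count_resets.py
    resets = collections.Counter()
    for line in sys.stdin:
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            resets[m.group(1)] += 1
    print(resets.most_common(5))  # top connection pointers by reset count
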
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.644131+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.644321+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2202704 data_alloc: 251658240 data_used: 57307136
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.644607+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.644996+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.645306+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.645632+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.645923+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2202704 data_alloc: 251658240 data_used: 57307136
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.646465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.646716+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 22880256 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5c00 session 0x55b419508b40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.646923+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,7])
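
Note: most heartbeats in this capture carry an empty "op hist []"; this one shows [0,0,0,0,7]. The list appears to be a short histogram of recent op counts per interval (exact bucket semantics vary by version), so a non-empty entry like this records a small burst of seven ops on an otherwise idle OSD. Pulling it out:

    import re

    # statfs payload elided; only the trailing histogram matters here
    line = "osd.0 139 heartbeat osd_stat(..., peers [1,2] op hist [0,0,0,0,7])"
    hist = [int(x) for x in
            re.search(r"op hist \[([^\]]*)\]", line).group(1).split(",") if x]
    print(sum(hist), hist)  # -> 7 [0, 0, 0, 0, 7]
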
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 158302208 unmapped: 22126592 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.647114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 16359424 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.382223129s of 10.757692337s, submitted: 94
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b4176eb2c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b418a48960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.647443+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 27901952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2026518 data_alloc: 234881024 data_used: 41459712
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b418afc960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.647635+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 27885568 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.647822+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 27885568 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.648050+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 26714112 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ed7000/0x0/0x4ffc00000, data 0x56bd027/0x5794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.648515+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 26714112 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b4196794a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.650501+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 28983296 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1966314 data_alloc: 234881024 data_used: 41656320
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5800 session 0x55b418a6f680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.652249+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 28983296 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.652507+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 27951104 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f666e000/0x0/0x4ffc00000, data 0x4f29027/0x5000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5b5e000/0x0/0x4ffc00000, data 0x5a31027/0x5b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.652714+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154107904 unmapped: 26320896 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.652966+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154173440 unmapped: 26255360 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.284941673s of 10.033411026s, submitted: 154
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.653359+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152911872 unmapped: 27516928 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2061334 data_alloc: 234881024 data_used: 41906176
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.653870+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.654124+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ae3000/0x0/0x4ffc00000, data 0x5ab3027/0x5b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.654537+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.654841+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.655094+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2061526 data_alloc: 234881024 data_used: 41910272
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ae3000/0x0/0x4ffc00000, data 0x5ab3027/0x5b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.655528+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b418b29c20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.656175+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153067520 unmapped: 27361280 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6ad3000/0x0/0x4ffc00000, data 0x472cfb5/0x4802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b41950ab40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.656445+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.656692+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.657044+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859028 data_alloc: 234881024 data_used: 34693120
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.657325+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.657636+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6ad3000/0x0/0x4ffc00000, data 0x472cf92/0x4801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.657977+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147439616 unmapped: 32989184 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.658603+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147439616 unmapped: 32989184 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.659023+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.170475960s of 15.876229286s, submitted: 80
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b418b4d2c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417b532c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147447808 unmapped: 32980992 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858320 data_alloc: 234881024 data_used: 34693120
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.659875+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b41c9743c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7c6e000/0x0/0x4ffc00000, data 0x392bf92/0x3a00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.660207+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.660457+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.660711+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
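
Note: this is the only osdmap traffic in the section. The OSD receives epoch 140 as a single incremental (epochs [140,140]) while holding 139, from a source that has the full history [1,140]; from the next heartbeat onward the line prefix switches from "osd.0 139" to "osd.0 140", confirming the new map took effect. The bookkeeping, decoded:

    import re

    line = "osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]"
    first, last = map(int, re.search(r"epochs \[(\d+),(\d+)\]", line).groups())
    have = int(re.search(r"i have (\d+)", line).group(1))
    print(list(range(have + 1, last + 1)))  # -> [140]: exactly one map to apply
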
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.661074+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c6a000/0x0/0x4ffc00000, data 0x392db0f/0x3a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691332 data_alloc: 234881024 data_used: 26165248
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.661731+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.662134+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.662495+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.662789+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.663103+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.873231888s of 10.174868584s, submitted: 44
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416e47c20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b419679680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1690696 data_alloc: 234881024 data_used: 26165248
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c6a000/0x0/0x4ffc00000, data 0x392db0f/0x3a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b4196790e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419224400 session 0x55b41958d860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b41a20e000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.663433+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41cb0cf00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41961b860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138649600 unmapped: 41779200 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b418b285a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b416bb8960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b4191a6c00 session 0x55b417caef00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419225c00 session 0x55b41958c780
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.663730+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138649600 unmapped: 41779200 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.663960+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41961b680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8f10000/0x0/0x4ffc00000, data 0x220caff/0x22e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.664169+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.664610+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437130 data_alloc: 218103808 data_used: 14290944
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8f10000/0x0/0x4ffc00000, data 0x220caff/0x22e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.664821+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 54796288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41a20f2c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b416ef4f00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.665203+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b4176eba40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 54476800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b418a6f0e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.665473+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b4196794a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.665898+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8534000/0x0/0x4ffc00000, data 0x2c54b61/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419225c00 session 0x55b419679680
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8534000/0x0/0x4ffc00000, data 0x2c54b61/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419224400 session 0x55b4196790e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.666113+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528772 data_alloc: 218103808 data_used: 14290944
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.666355+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.741597176s of 11.463397026s, submitted: 94
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b418b294a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b419678960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8532000/0x0/0x4ffc00000, data 0x2c54b94/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.666630+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 55623680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416bc0960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.667035+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 55631872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.667544+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 55779328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.667777+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128868352 unmapped: 55762944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549376 data_alloc: 218103808 data_used: 19161088
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.668143+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.668602+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.668954+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.669308+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.669641+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.670059+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.670548+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.674822+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.675079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.675449+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.675761+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.676140+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.677063+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.677441+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.678024+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.678611+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.679006+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.679274+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.679586+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.680061+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.680480+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.680952+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.681263+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.681633+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.681988+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.682289+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.682492+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.682890+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.683199+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.683554+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.683836+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.684241+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.585948944s of 35.731628418s, submitted: 20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.684704+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139722752 unmapped: 44908544 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.684957+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7cc0000/0x0/0x4ffc00000, data 0x34c6b94/0x359e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 47013888 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.685503+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 47013888 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1669322 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.685957+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 46915584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.686715+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c21000/0x0/0x4ffc00000, data 0x3565b94/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.687108+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.687550+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.687842+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673848 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.688157+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.688656+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.689183+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.388254166s of 10.945773125s, submitted: 128
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c21000/0x0/0x4ffc00000, data 0x3565b94/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.689566+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.689953+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.690590+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.691033+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.691469+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.691704+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.692102+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.692607+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.693049+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.693544+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.693919+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.694295+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.694613+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.694963+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.695295+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.695753+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.696226+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.696479+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.696898+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.697301+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.697705+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.697939+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.698298+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.698742+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.028068542s of 24.047372818s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.699143+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.699557+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.699894+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2346 writes, 8456 keys, 2346 commit groups, 1.0 writes per commit group, ingest: 8.13 MB, 0.01 MB/s
                                            Interval WAL: 2346 writes, 966 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.700203+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.700778+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.701188+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.701571+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.701934+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.702474+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.702926+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: mgrc ms_handle_reset ms_handle_reset con 0x55b417714800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:23:41 compute-0 ceph-osd[206053]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: get_auth_request con 0x55b417c17000 auth_method 0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: mgrc handle_mgr_configure stats_period=5
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.703320+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.703685+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.704019+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.704330+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.704666+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.705036+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.705545+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41ad9a400 session 0x55b41cb0de00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.706004+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.706670+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.706908+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.707088+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.707579+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.707831+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.708339+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.708797+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.709215+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.709507+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.709961+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.710475+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.710804+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.711007+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.711241+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.711679+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.712034+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.712283+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.712584+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.712825+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.713821+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.714595+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.714817+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.715168+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.715606+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.715911+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.716230+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.716585+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.716833+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.717078+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.717548+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.717922+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.718178+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.718440+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.718644+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.718879+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.719449+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.720187+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.720939+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.721718+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.723122+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.724896+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.726640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.727873+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.728652+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.729328+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.730091+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.730859+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.731633+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.732352+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.733195+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.735213+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.736596+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.738118+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.739131+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.739851+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.740632+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.742197+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.743967+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.745194+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.746162+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.747057+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5c00 session 0x55b418afd0e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a000 session 0x55b41958d860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a400 session 0x55b41958c780
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41958cb40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.477409363s of 79.490234375s, submitted: 2
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.748441+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 43442176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.749886+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41958c5a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5c00 session 0x55b416ef4000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a000 session 0x55b41cb0cf00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660800
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660800 session 0x55b41a20e000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41a20f2c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.751689+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.752977+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.754744+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706622 data_alloc: 234881024 data_used: 23547904
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.755698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7877000/0x0/0x4ffc00000, data 0x390eba4/0x39e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.756265+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.756495+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7877000/0x0/0x4ffc00000, data 0x390eba4/0x39e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660c00 session 0x55b41961b860
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.756801+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419661000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.757123+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1710869 data_alloc: 234881024 data_used: 23556096
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.757345+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.757515+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.757721+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.758293+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c8ec00 session 0x55b418af63c0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419661c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.758702+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1729749 data_alloc: 234881024 data_used: 26247168
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.758903+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.759107+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.759365+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.759620+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.759873+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1729749 data_alloc: 234881024 data_used: 26247168
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.760100+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.415521622s of 20.502935410s, submitted: 19
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138575872 unmapped: 46055424 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.760354+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138608640 unmapped: 46022656 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.760852+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 46014464 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.761245+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138665984 unmapped: 45965312 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.761942+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138698752 unmapped: 45932544 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.762874+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.763318+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.763516+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.763752+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.764162+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.764572+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.768963+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.770095+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.770856+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.771859+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.773038+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.773494+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.773693+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.774125+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.774447+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.774643+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.774849+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.775046+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.649646759s of 22.576417923s, submitted: 132
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.775340+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f76a3000/0x0/0x4ffc00000, data 0x3ae2ba4/0x3bbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 44638208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.775607+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1751883 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.776060+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f769e000/0x0/0x4ffc00000, data 0x3ae6ba4/0x3bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.776310+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.776624+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.776894+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.777177+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.777671+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.778025+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.778465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.778834+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.779224+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.779630+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.779862+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.780200+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.780425+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.780710+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.780938+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.781194+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.781499+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.781661+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.782014+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.782287+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.782680+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.782912+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.783174+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.783496+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.783768+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.784193+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.784496+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.784725+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.785079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.785438+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.785734+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.785990+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.786230+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.786513+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.786731+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.787958+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.788308+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.788906+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.789143+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.789518+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.789760+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.790321+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.793916+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.794740+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.794987+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.795230+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.795599+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.795986+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.796333+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.796688+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.797032+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.797252+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.797469+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.797714+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.798062+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.798264+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.798545+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.798829+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.799125+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.799624+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.799933+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.800290+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.800628+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140083200 unmapped: 44548096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.800938+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140083200 unmapped: 44548096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.801283+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140091392 unmapped: 44539904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.801773+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140091392 unmapped: 44539904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.802153+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.802605+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.803022+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.803512+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.803854+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.804244+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.804483+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.804870+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.805296+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.805577+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.806004+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.806474+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.806868+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.807167+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
[... 316 near-identical lines elided: the monclient tick / _check_auth_tickets / _check_auth_rotating cycle repeats once per second (rotating-secret expiry advancing from 2025-10-02T20:13:09 to 2025-10-02T20:14:12), interleaved with the same rocksdb commit_cache_size pair, unchanged bluestore.MempoolThread _resize_shards and osd.0 heartbeat osd_stat reports, and prioritycache tune_memory lines whose mapped/unmapped counters drift by only a few pages ...]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b4196a9000 session 0x55b416bc0b40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.829126+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760663 data_alloc: 234881024 data_used: 26554368
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.829598+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
[... 22 lines elided: the same per-second cycle continues (expiry 2025-10-02T20:14:15 through 20:14:19), including one further rocksdb commit_cache_size pair and an unchanged _resize_shards report ...]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 153.187469482s of 153.408279419s, submitted: 21
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.831465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af4ba4/0x3bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
[... 71 lines elided: periodic monclient, prioritycache, rocksdb, _resize_shards, and osd.0 heartbeat lines repeat (expiry 2025-10-02T20:14:21 through 20:14:35); heartbeat osd_stat matches the line above, while mapped rises to 140165120 and meta_used drifts between 1760847 and 1761007 ...]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.862440109s of 15.872762680s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.836808+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.837186+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
[... 45 lines elided: the periodic cycle runs to the end of the excerpt (expiry 2025-10-02T20:14:38 through 2025-10-02T20:14:46); heartbeat osd_stat matches the preceding line, and _resize_shards now reports meta_used 1759827 with data_used 26558464 ...]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.840561+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.841348+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.841659+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.841914+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.842303+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.842780+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.843188+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.843542+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.933099747s of 18.943176270s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.843873+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.844156+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.844565+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.844879+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759959 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.845150+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.845607+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.845916+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.846313+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.846595+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759959 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.846967+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.221395493s of 10.239365578s, submitted: 2
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.847320+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.847667+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.848063+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.848488+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.848775+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.849025+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.849491+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.851223+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.851757+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.852188+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.852480+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.852685+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.852934+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.853128+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.853628+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.853895+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.854318+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.858244+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.858698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.859108+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.859590+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.859906+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.860189+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.860436+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.860654+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140206080 unmapped: 44425216 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.860844+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140206080 unmapped: 44425216 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.861293+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.861747+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.862161+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.863294+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.863695+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.864091+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.864650+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.864946+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.865448+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.868339+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.868823+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.869325+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.869767+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.870358+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.870864+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.871186+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.871440+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.871890+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.872461+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.872812+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.873221+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.873712+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.874193+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.874972+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.875647+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.876115+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.876775+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.877086+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.877323+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.877557+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.877853+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.878333+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.878638+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.878976+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.879275+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.879581+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.880083+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.880443+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.880706+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.881126+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.881528+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.881905+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.882494+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.882990+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.884072+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.884460+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.884714+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.884944+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.885140+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.885346+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.885528+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.885767+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.886015+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.886267+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.886640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.886878+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.887076+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.887356+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.887782+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.888133+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.888630+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.889020+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.889473+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.889862+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.890262+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.890579+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.891018+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.891538+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.891923+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.892352+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.892666+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.893023+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.893463+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.893676+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.893967+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.894217+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.894524+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.894796+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.154281616s of 104.162818909s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.895165+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.895655+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.895929+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.896503+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.896883+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.897255+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.897552+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.897988+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.898609+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.899647+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.899972+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.900556+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.900866+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.901630+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.901997+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.902583+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.902933+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.903285+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.903764+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.904806+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.905119+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.905667+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.906020+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.906563+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.907039+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.907612+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.908084+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.908640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.909113+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.909450+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.909949+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.910325+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.910786+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.911151+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.911628+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.912114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.912508+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.912963+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.913332+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.913717+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.914140+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.837165833s of 40.845638275s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.914542+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.914869+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.915298+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.915706+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.915938+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.916271+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.916579+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.916929+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.917296+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.917646+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.918005+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.918472+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.918917+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.919253+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.919701+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.920140+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.920694+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.920927+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.921254+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.921666+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.921990+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.922746+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.923121+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.923592+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.923915+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.924304+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.924671+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.925069+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.925481+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.925872+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.926121+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.926658+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.928022+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.929022+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.929363+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.929983+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.930336+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.930891+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.931320+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.931686+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.932139+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.932563+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.933030+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.933345+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.933710+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.934090+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.934484+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.935006+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.935436+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.936019+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.936264+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.936709+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.937278+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.937642+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.938036+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.938562+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.938839+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.939222+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.939595+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.939830+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.940225+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.940592+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.941079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.941540+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.941856+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.942097+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.942489+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.942985+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.943349+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.943760+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.944224+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.944514+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.944913+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.945493+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.946708+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.947126+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.947429+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.947873+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.948336+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.948846+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.949554+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.950025+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.950497+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.950698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.950993+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.951227+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.951587+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.952023+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.952476+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.952768+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.953671+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.954114+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.954646+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.954907+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.955193+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.955683+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.956108+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.956680+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.957124+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.957630+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 44244992 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.958002+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.958582+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.959061+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.959560+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.959962+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.960785+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.961098+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.961557+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.961811+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.962149+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.962455+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.962830+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.963425+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.963883+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.964290+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.964830+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.965131+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.965562+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.965982+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.966301+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.966748+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.967146+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.967470+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.967698+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.968159+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.968461+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.968790+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.969200+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.969613+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.970511+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.971099+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.971563+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.971988+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.972517+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.973149+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.973619+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.974192+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2997 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 347 writes, 721 keys, 347 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                            Interval WAL: 347 writes, 168 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.974646+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.975086+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.975670+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.976030+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.976648+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.977166+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.977676+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.977907+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.978243+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.978644+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.978919+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.979252+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.979498+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.979823+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.980163+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.980642+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.980998+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.981465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.981903+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.982259+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.982671+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.983087+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.983554+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.983955+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.984351+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.985073+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.985684+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.986490+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.987053+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.987720+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.988316+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.988852+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.989203+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.989646+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 44179456 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.989987+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 44179456 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.990578+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.990917+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.991147+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.991640+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.992093+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.992605+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.993000+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.993463+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.993901+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.994218+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.994728+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.995051+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.995542+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.995814+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.996094+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.996769+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.997128+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.997522+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.997734+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.998079+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.998638+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.999126+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.999603+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.999913+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.000215+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.000507+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.000845+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.001237+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.001603+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.001984+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.002490+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.002801+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.003194+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.003619+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.004018+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.004513+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.004859+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.005231+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 210.742996216s of 210.749771118s, submitted: 1
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b4195ee000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5400 session 0x55b419678d20
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.005794+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418afa000
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 50667520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.006465+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418afa000 session 0x55b419508960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.006865+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.007179+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.007590+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.007942+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.008332+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.008720+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.009095+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.009474+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.009808+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.010155+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660400 session 0x55b418b285a0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.586451530s of 11.778108597s, submitted: 32
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419661000 session 0x55b417cb0b40
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.010527+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416c7f0e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.010817+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.011043+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.011289+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.011667+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439084 data_alloc: 218103808 data_used: 11816960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.012459+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f9262000/0x0/0x4ffc00000, data 0x1f27aff/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.012789+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.013617+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.014604+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.015732+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439084 data_alloc: 218103808 data_used: 11816960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.016480+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.017241+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f9262000/0x0/0x4ffc00000, data 0x1f27aff/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.601349831s of 11.746925354s, submitted: 26
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 141 ms_handle_reset con 0x55b418873c00 session 0x55b417aee960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.017848+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660c00
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.018256+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 53731328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 142 ms_handle_reset con 0x55b419660c00 session 0x55b4195ee960
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.018511+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 53649408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452749 data_alloc: 218103808 data_used: 11833344
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 143 ms_handle_reset con 0x55b416bb4400 session 0x55b418af61e0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.019362+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 53567488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9256000/0x0/0x4ffc00000, data 0x1f2ce4a/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.019845+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.020721+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.021662+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.022110+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453725 data_alloc: 218103808 data_used: 11841536
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9255000/0x0/0x4ffc00000, data 0x1f2e8c9/0x2008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.022660+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.023357+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.023849+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9255000/0x0/0x4ffc00000, data 0x1f2e8c9/0x2008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.024156+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.024592+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.024971+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.299741745s of 14.475193024s, submitted: 177
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.025624+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:40.026285+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:41.026712+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:42.026950+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:43.027471+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:44.027906+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:45.028303+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:46.028739+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:47.029117+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:48.029676+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:49.030008+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:50.030548+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:51.030890+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:52.031270+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:53.031703+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:54.032055+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:55.032655+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:56.032952+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:57.033537+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:58.033923+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:59.034301+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:00.034644+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:01.035023+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:02.035349+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:03.035789+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:04.036113+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:05.036545+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:06.036745+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:07.037131+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:08.037626+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:09.038094+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:10.039089+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:11.039561+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:12.040001+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:13.040563+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:14.040780+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:15.041191+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:16.041749+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:17.042103+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:18.042920+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:19.043345+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:20.043657+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:21.044101+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:22.044531+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:23.044984+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:24.045475+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:25.045850+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:26.046125+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:27.046582+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:28.046973+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:29.047209+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:30.047513+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:31.047862+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:32.048095+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:33.048591+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:34.049109+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:35.049842+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:36.050245+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:37.050519+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:38.050953+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:39.051535+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:40.051997+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:41.052318+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:42.052702+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:43.053177+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:44.053550+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:45.053905+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:46.054327+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:47.054574+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:48.054790+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:49.055295+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:50.055657+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:51.056062+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:52.056533+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:53.056801+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:54.056987+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:55.057173+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:56.057536+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:57.057817+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:58.058034+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:59.058263+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:00.058540+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:01.058754+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:02.059027+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:03.059271+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:04.059533+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:05.059805+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:06.060017+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:07.060264+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 52510720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:08.060493+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}'
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:23:41 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:23:41 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 52494336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:09.060692+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 52469760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:23:41 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:10.061011+0000)
Oct 02 20:23:41 compute-0 ceph-osd[206053]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:23:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 20:23:41 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049234660' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 20:23:41 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15693 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:41 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 20:23:41 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 20:23:41 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3049234660' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 20:23:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 20:23:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802434029' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 20:23:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:42 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 20:23:42 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007248386' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 20:23:42 compute-0 ceph-mon[191910]: from='client.15693 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2802434029' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 20:23:42 compute-0 ceph-mon[191910]: pgmap v2374: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:42 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4007248386' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 20:23:43 compute-0 nova_compute[355794]: 2025-10-02 20:23:43.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 20:23:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058772001' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 20:23:43 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 20:23:43 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227015470' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 20:23:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4058772001' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 20:23:43 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/227015470' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 20:23:44 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15703 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:44 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 20:23:44 compute-0 systemd[1]: Started Hostname Service.
Oct 02 20:23:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 20:23:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/425869490' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 20:23:44 compute-0 ceph-mon[191910]: from='client.15703 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:44 compute-0 ceph-mon[191910]: pgmap v2375: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:44 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/425869490' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 20:23:44 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 20:23:44 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434817716' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 20:23:45 compute-0 nova_compute[355794]: 2025-10-02 20:23:45.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:45 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15709 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 20:23:45 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4090452447' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 20:23:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3434817716' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 20:23:45 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4090452447' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 20:23:46 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15713 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:46 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15715 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:46 compute-0 ceph-mon[191910]: from='client.15709 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:46 compute-0 ceph-mon[191910]: from='client.15713 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:46 compute-0 ceph-mon[191910]: pgmap v2376: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 20:23:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1844937497' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 20:23:47 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 20:23:47 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982764010' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 20:23:47 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15721 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:48 compute-0 nova_compute[355794]: 2025-10-02 20:23:48.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:48 compute-0 ceph-mon[191910]: from='client.15715 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1844937497' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 20:23:48 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3982764010' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15723 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:48 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:23:48 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 20:23:48 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572177995' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 20:23:49 compute-0 ceph-mon[191910]: from='client.15721 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:49 compute-0 ceph-mon[191910]: pgmap v2377: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:49 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2572177995' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 20:23:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 20:23:49 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38900240' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 20:23:49 compute-0 podman[483640]: 2025-10-02 20:23:49.695306031 +0000 UTC m=+0.119809354 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 02 20:23:49 compute-0 podman[483637]: 2025-10-02 20:23:49.734179062 +0000 UTC m=+0.142250538 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
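
The health_status=healthy fields in the two podman lines above come from per-container healthcheck timers, which periodically run the configured 'test' command (here /openstack/healthcheck ...) inside the container and record the result. The same check can be driven and read back by hand; a minimal sketch via the podman CLI (container name copied from the log; the inspect key is State.Health on podman 4.x and State.Healthcheck on older releases):

    import json
    import subprocess

    name = 'ceilometer_agent_compute'
    # Run the container's configured healthcheck once; exit status 0 means healthy.
    subprocess.run(['podman', 'healthcheck', 'run', name], check=True)
    # Read back the recorded status and failing streak.
    out = subprocess.run(['podman', 'inspect', name],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]['State']
    health = state.get('Health') or state.get('Healthcheck')  # key differs by version
    print(health['Status'], health['FailingStreak'])

A failing streak past the retry threshold is what would flip health_status to unhealthy in these lines.
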
Oct 02 20:23:49 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15729 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:50 compute-0 ceph-mon[191910]: from='client.15723 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:50 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/38900240' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 20:23:50 compute-0 nova_compute[355794]: 2025-10-02 20:23:50.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:50 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15731 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 20:23:50 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946357753' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:23:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:51 compute-0 ceph-mon[191910]: from='client.15729 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:51 compute-0 ceph-mon[191910]: pgmap v2378: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:51 compute-0 ceph-mon[191910]: from='client.15731 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:23:51 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2946357753' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:23:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 20:23:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847762674' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 20:23:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 20:23:51 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155505886' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:52 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2847762674' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 20:23:52 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2155505886' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:52 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15739 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 20:23:52 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544401976' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:23:53 compute-0 nova_compute[355794]: 2025-10-02 20:23:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 20:23:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3884801021' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 20:23:53 compute-0 ceph-mon[191910]: from='client.15739 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:53 compute-0 ceph-mon[191910]: pgmap v2379: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1544401976' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:23:53 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3884801021' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 20:23:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 20:23:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374611047' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:54 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 20:23:54 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873938339' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1374611047' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:54 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3873938339' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:54 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15749 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 02 20:23:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798561456' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 20:23:55 compute-0 nova_compute[355794]: 2025-10-02 20:23:55.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:55 compute-0 ovs-appctl[484726]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:23:55 compute-0 ovs-appctl[484732]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:23:55 compute-0 ceph-mon[191910]: pgmap v2380: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:55 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2798561456' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 20:23:55 compute-0 ovs-appctl[484745]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:23:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 02 20:23:55 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281339286' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:23:56 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15755 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:56 compute-0 ceph-mon[191910]: from='client.15749 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:56 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3281339286' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 02 20:23:56 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806214471' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:56 compute-0 podman[484933]: 2025-10-02 20:23:56.721549069 +0000 UTC m=+0.128606083 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:23:56 compute-0 podman[484936]: 2025-10-02 20:23:56.739244709 +0000 UTC m=+0.149241009 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Oct 02 20:23:56 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15759 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:57 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15761 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:57 compute-0 ceph-mon[191910]: from='client.15755 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:57 compute-0 ceph-mon[191910]: pgmap v2381: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:57 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3806214471' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 20:23:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 02 20:23:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990057507' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 nova_compute[355794]: 2025-10-02 20:23:58.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:23:58 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 20:23:58 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798900191' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:58 compute-0 ceph-mon[191910]: from='client.15759 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 ceph-mon[191910]: from='client.15761 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2990057507' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/798900191' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 podman[485330]: 2025-10-02 20:23:58.720993347 +0000 UTC m=+0.124169798 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, config_id=edpm, distribution-scope=public, io.openshift.expose-services=)
Oct 02 20:23:58 compute-0 podman[485326]: 2025-10-02 20:23:58.730937556 +0000 UTC m=+0.137397112 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:23:58 compute-0 podman[485328]: 2025-10-02 20:23:58.737155017 +0000 UTC m=+0.152847483 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 20:23:58 compute-0 podman[485349]: 2025-10-02 20:23:58.751441418 +0000 UTC m=+0.140924303 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
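
The node_exporter container above serves host metrics on port 9100 behind the TLS web config mounted from the telemetry secrets, with most collectors disabled and systemd collection restricted to the --collector.systemd.unit-include pattern. A minimal scrape sketch, assuming the exporter speaks HTTPS per that web config (verification is disabled here only to keep the sketch self-contained; in practice trust the telemetry CA bundle):

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # illustration only; pin the telemetry CA instead
    with urllib.request.urlopen('https://localhost:9100/metrics', context=ctx) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith('node_systemd_unit_state'):  # from --collector.systemd
                print(line)
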
Oct 02 20:23:58 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15767 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:58 compute-0 podman[485332]: 2025-10-02 20:23:58.784791415 +0000 UTC m=+0.176966420 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15769 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:23:59 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
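
Each effective_target_ratio/Pool pair above is one pg_autoscaler evaluation, and the logged numbers fit a simple product: pg target = capacity_ratio x bias x mon_target_pg_per_osd x OSD count, with the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs (so a factor of 300; the 64411926528 bytes in the ratio lines is the same 60 GiB the pgmap reports), then quantized to a power of two no smaller than the pool's pg_num_min (32 for most pools here, 16 for the cephfs metadata pool). A sketch that reproduces the 'vms' line; the rounding below is simplified, the real autoscaler rounds to the nearest power of two and only acts past a change threshold:

    # Hypothetical re-derivation of the pg_autoscaler numbers logged above.
    def pg_target(capacity_ratio, bias, target_pg_per_osd=100, num_osds=3, pg_num_min=32):
        raw = capacity_ratio * bias * target_pg_per_osd * num_osds
        power = 1
        while power < raw:          # round up to a power of two (simplified)
            power *= 2
        return max(power, pg_num_min)

    ratio = 0.0005513950275118838   # 'vms' capacity ratio from the log
    print(ratio * 1.0 * 100 * 3)    # 0.16541850825356513, as logged
    print(pg_target(ratio, 1.0))    # 32, matching "quantized to 32 (current 32)"
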
Oct 02 20:23:59 compute-0 ceph-mon[191910]: pgmap v2382: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:23:59 compute-0 ceph-mon[191910]: from='client.15767 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:59 compute-0 ceph-mon[191910]: from='client.15769 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:23:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 20:23:59 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900802904' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:23:59 compute-0 podman[157186]: time="2025-10-02T20:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:23:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:23:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9110 "" "Go-http-client/1.1"
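
The two GET lines above are clients of the libpod REST API on the podman socket (the same unix:///run/podman/podman.sock handed to podman_exporter earlier in this log). The containers/json call can be reproduced with nothing but the standard library; a minimal sketch, assuming read access to the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c['Names'] for c in containers])  # one name list per container
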
Oct 02 20:23:59 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 02 20:23:59 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267470380' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 20:24:00 compute-0 nova_compute[355794]: 2025-10-02 20:24:00.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:00 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15775 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:24:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1900802904' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:24:00 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2267470380' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 20:24:00 compute-0 ceph-mon[191910]: pgmap v2383: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:00 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15777 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:24:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 20:24:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092290261' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 20:24:01 compute-0 sudo[485955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:01 compute-0 sudo[485955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:01 compute-0 sudo[485955]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:01 compute-0 openstack_network_exporter[372736]: ERROR   20:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:24:01 compute-0 openstack_network_exporter[372736]: ERROR   20:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:24:01 compute-0 openstack_network_exporter[372736]: ERROR   20:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:24:01 compute-0 openstack_network_exporter[372736]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:24:01 compute-0 openstack_network_exporter[372736]: ERROR   20:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
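
The four openstack_network_exporter errors above are expected on this node: ovn-northd only runs on controller nodes, so no control socket exists here, and the dpif-netdev/pmd-* appctl commands apply only to the userspace (netdev/DPDK) datapath while this host runs the kernel datapath. The failing call can be reproduced by hand; a minimal sketch via subprocess:

    import subprocess

    # On a kernel-datapath host this prints the same
    # "please specify an existing datapath" complaint seen above.
    subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-perf-show'])
    # List the datapaths that actually exist:
    subprocess.run(['ovs-appctl', 'dpctl/dump-dps'])
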
Oct 02 20:24:01 compute-0 ceph-mon[191910]: from='client.15775 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:24:01 compute-0 ceph-mon[191910]: from='client.15777 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:24:01 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2092290261' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 20:24:01 compute-0 sudo[486012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:24:01 compute-0 sudo[486012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:01 compute-0 sudo[486012]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:01 compute-0 sudo[486063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:01 compute-0 sudo[486063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:01 compute-0 sudo[486063]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:01 compute-0 sudo[486133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:24:01 compute-0 sudo[486133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 20:24:01 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556108383' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 20:24:02 compute-0 sudo[486133]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3e0396b3-8d38-4fe0-875a-31cd04d21644 does not exist
Oct 02 20:24:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0d3cc3ee-86df-4bd8-9a3a-0c8be6206c87 does not exist
Oct 02 20:24:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 0ed57cea-0e0e-4926-9bec-5587db2a5f34 does not exist
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:24:02 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2556108383' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: pgmap v2384: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:24:02 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:24:02 compute-0 sudo[486359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:02 compute-0 sudo[486359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:02 compute-0 sudo[486359]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:02 compute-0 sudo[486414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:24:02 compute-0 sudo[486414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:02 compute-0 sudo[486414]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:02 compute-0 sudo[486469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:02 compute-0 sudo[486469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:02 compute-0 sudo[486469]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:02 compute-0 sudo[486524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:24:02 compute-0 sudo[486524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:03 compute-0 nova_compute[355794]: 2025-10-02 20:24:03.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.274333133 +0000 UTC m=+0.087257969 container create 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.225329889 +0000 UTC m=+0.038254745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:03 compute-0 systemd[1]: Started libpod-conmon-04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4.scope.
Oct 02 20:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.409359962 +0000 UTC m=+0.222284828 container init 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.419775273 +0000 UTC m=+0.232700109 container start 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.432545234 +0000 UTC m=+0.245470120 container attach 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:24:03 compute-0 amazing_jang[486731]: 167 167
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.439913586 +0000 UTC m=+0.252838422 container died 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:24:03 compute-0 systemd[1]: libpod-04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4.scope: Deactivated successfully.
Oct 02 20:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b73765b0b58b26d11470ef46d67f77d9b641abf3361df6ad008a4162d6200dd-merged.mount: Deactivated successfully.
Oct 02 20:24:03 compute-0 podman[486692]: 2025-10-02 20:24:03.5597561 +0000 UTC m=+0.372680946 container remove 04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 20:24:03 compute-0 systemd[1]: libpod-conmon-04b65f962de67541fd40c59a6a82b5a9bfcc48d9755b73b5975c104d760d08d4.scope: Deactivated successfully.
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:24:03
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control', '.mgr']
Oct 02 20:24:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
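
The balancer pass above ran in upmap mode with a 5% misplaced ceiling across all eleven pools and prepared 0/10 changes, meaning PG placement is already even and no upmap entries were injected this round. The same state can be read back with the regular CLI; a small sketch:

    import subprocess

    subprocess.run(['ceph', 'balancer', 'status'])  # mode, plans, last optimize result
    # "max misplaced 0.050000" in the log is this manager option:
    subprocess.run(['ceph', 'config', 'get', 'mgr', 'target_max_misplaced_ratio'])
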
Oct 02 20:24:03 compute-0 podman[486794]: 2025-10-02 20:24:03.797822787 +0000 UTC m=+0.076178061 container create dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 02 20:24:03 compute-0 podman[486794]: 2025-10-02 20:24:03.75909608 +0000 UTC m=+0.037451394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:03 compute-0 systemd[1]: Started libpod-conmon-dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947.scope.
Oct 02 20:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
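
The kernel warnings above fire once per overlay bind mount: these xfs filesystems were made without the bigtime feature, so inode timestamps are signed 32-bit seconds and stop at 0x7fffffff. A two-line check of that constant:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic 32-bit rollover instant.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
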
Oct 02 20:24:03 compute-0 podman[486794]: 2025-10-02 20:24:03.973015179 +0000 UTC m=+0.251370473 container init dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 20:24:03 compute-0 podman[486794]: 2025-10-02 20:24:03.983117542 +0000 UTC m=+0.261472816 container start dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:24:03 compute-0 podman[486794]: 2025-10-02 20:24:03.989636721 +0000 UTC m=+0.267991995 container attach dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2078717049
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:24:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:24:05 compute-0 nova_compute[355794]: 2025-10-02 20:24:05.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:05 compute-0 stoic_jones[486846]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:24:05 compute-0 stoic_jones[486846]: --> relative data size: 1.0
Oct 02 20:24:05 compute-0 stoic_jones[486846]: --> All data devices are unavailable
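The three stoic_jones lines above are the tail of a ceph-volume batch-style report run by cephadm (the exact subcommand is not shown in the log): it was handed 0 physical disks and 3 LVM data devices, and because all three LVs already carry OSDs it reports "All data devices are unavailable" and exits without creating anything, which is why the container dies and is removed just below. A minimal sketch of detecting this no-op condition, assuming the container's stdout has been captured from journalctl into a list of lines:

    # Minimal sketch: detect ceph-volume's "nothing to do" report in captured
    # journal output. `report` is assumed to hold the container's stdout lines
    # exactly as shown in the log above.
    report = [
        "--> passed data devices: 0 physical, 3 LVM",
        "--> relative data size: 1.0",
        "--> All data devices are unavailable",
    ]
    if any("All data devices are unavailable" in line for line in report):
        # Every candidate device is already consumed (here: three LVs that
        # already carry OSDs 0-2), so cephadm creates no new OSDs.
        print("no-op: all candidate data devices already provisioned")
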
Oct 02 20:24:05 compute-0 systemd[1]: libpod-dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947.scope: Deactivated successfully.
Oct 02 20:24:05 compute-0 podman[486794]: 2025-10-02 20:24:05.319261763 +0000 UTC m=+1.597617047 container died dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 20:24:05 compute-0 systemd[1]: libpod-dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947.scope: Consumed 1.199s CPU time.
Oct 02 20:24:05 compute-0 ceph-mon[191910]: pgmap v2385: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab06fb65959b71c52f94190d0c98995778545415d53efef146eaf5157e63050-merged.mount: Deactivated successfully.
Oct 02 20:24:05 compute-0 podman[486794]: 2025-10-02 20:24:05.51654305 +0000 UTC m=+1.794898334 container remove dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:24:05 compute-0 systemd[1]: libpod-conmon-dcfc41ba1132fe626bfcac09edc49fe54c1aa02b8b07fc049ef96ef7aa0fa947.scope: Deactivated successfully.
Oct 02 20:24:05 compute-0 sudo[486524]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:05 compute-0 sudo[487133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:05 compute-0 sudo[487133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:05 compute-0 sudo[487133]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:05 compute-0 sudo[487178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:24:05 compute-0 sudo[487178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:05 compute-0 sudo[487178]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
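The _set_new_cache_sizes line is the monitor rebalancing its memory-budgeted caches; interpreting the fields from their names (an assumption, not stated in the log), the budget is split between the incremental-osdmap, full-osdmap, and rocksdb (kv) caches. The byte counts read more easily in MiB:

    # Convert the mon cache figures logged above into MiB for readability.
    figures = {
        "cache_size": 1020054731,  # overall cache budget (~972.8 MiB)
        "inc_alloc":  348127232,   # incremental osdmap cache (332.0 MiB)
        "full_alloc": 348127232,   # full osdmap cache (332.0 MiB)
        "kv_alloc":   318767104,   # rocksdb (kv) cache (304.0 MiB)
    }
    for name, val in figures.items():
        print(f"{name}: {val / 2**20:.1f} MiB")
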
Oct 02 20:24:05 compute-0 sudo[487215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:05 compute-0 sudo[487215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:05 compute-0 sudo[487215]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:06 compute-0 sudo[487241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
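Here cephadm (the host-local copy dropped under /var/lib/ceph/<fsid>/) runs ceph-volume lvm list --format json inside a short-lived container from the pinned ceph image; the unruffled_swirles container that follows is that run, and its stdout is the JSON dump printed below. A hedged sketch of driving the same query from Python, with the paths, image, and fsid copied verbatim from the sudo line above, and assuming the wrapped command's stdout is the JSON document alone:

    import json
    import subprocess

    # Values copied from the sudo COMMAND= line above.
    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    lvm_report = json.loads(out)  # top-level keys are OSD ids, as shown below
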
Oct 02 20:24:06 compute-0 sudo[487241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:06 compute-0 ceph-mon[191910]: pgmap v2386: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:06 compute-0 podman[487304]: 2025-10-02 20:24:06.706938344 +0000 UTC m=+0.078702666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:06 compute-0 podman[487304]: 2025-10-02 20:24:06.832633771 +0000 UTC m=+0.204398033 container create 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 20:24:06 compute-0 systemd[1]: Started libpod-conmon-941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29.scope.
Oct 02 20:24:06 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:07 compute-0 podman[487304]: 2025-10-02 20:24:07.144117455 +0000 UTC m=+0.515881777 container init 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:24:07 compute-0 podman[487304]: 2025-10-02 20:24:07.165716687 +0000 UTC m=+0.537480959 container start 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 02 20:24:07 compute-0 laughing_mendeleev[487320]: 167 167
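The bare "167 167" printed by laughing_mendeleev is cephadm probing the uid/gid of the ceph user inside the image; 167 is the standard ceph uid and gid in these CentOS-based ceph containers. The container exists only to print that pair and exit, which is why it dies within milliseconds of attaching.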
Oct 02 20:24:07 compute-0 systemd[1]: libpod-941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29.scope: Deactivated successfully.
Oct 02 20:24:07 compute-0 podman[487304]: 2025-10-02 20:24:07.26360322 +0000 UTC m=+0.635367462 container attach 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 02 20:24:07 compute-0 podman[487304]: 2025-10-02 20:24:07.265450448 +0000 UTC m=+0.637214710 container died 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 20:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-013c4a426a5e0f4805ecc3b5eaa42434d167db9e97ed238a3f40a79e01920dac-merged.mount: Deactivated successfully.
Oct 02 20:24:07 compute-0 podman[487304]: 2025-10-02 20:24:07.613996766 +0000 UTC m=+0.985761028 container remove 941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 20:24:07 compute-0 systemd[1]: libpod-conmon-941b6fc5a2cc8903128f7dd9b28c14f7dfa2734dd01b4dedea9173ffe6309b29.scope: Deactivated successfully.
Oct 02 20:24:08 compute-0 podman[487345]: 2025-10-02 20:24:07.909000492 +0000 UTC m=+0.056409517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:08 compute-0 podman[487345]: 2025-10-02 20:24:08.039834902 +0000 UTC m=+0.187243897 container create 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:24:08 compute-0 nova_compute[355794]: 2025-10-02 20:24:08.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:08 compute-0 systemd[1]: Started libpod-conmon-358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2.scope.
Oct 02 20:24:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/863bceb455e6bdef534c58e20c593c62273e92583cfc4350f32bf9aaeca27ebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/863bceb455e6bdef534c58e20c593c62273e92583cfc4350f32bf9aaeca27ebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/863bceb455e6bdef534c58e20c593c62273e92583cfc4350f32bf9aaeca27ebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/863bceb455e6bdef534c58e20c593c62273e92583cfc4350f32bf9aaeca27ebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:08 compute-0 podman[487345]: 2025-10-02 20:24:08.486816748 +0000 UTC m=+0.634225803 container init 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:24:08 compute-0 podman[487345]: 2025-10-02 20:24:08.510202445 +0000 UTC m=+0.657611430 container start 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:24:08 compute-0 ceph-mon[191910]: pgmap v2387: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:08 compute-0 podman[487345]: 2025-10-02 20:24:08.537959357 +0000 UTC m=+0.685368352 container attach 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]: {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     "0": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "devices": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "/dev/loop3"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             ],
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_name": "ceph_lv0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_size": "21470642176",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "name": "ceph_lv0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "tags": {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.crush_device_class": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.encrypted": "0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_id": "0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.vdo": "0"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             },
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "vg_name": "ceph_vg0"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         }
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     ],
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     "1": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "devices": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "/dev/loop4"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             ],
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_name": "ceph_lv1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_size": "21470642176",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "name": "ceph_lv1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "tags": {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.crush_device_class": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.encrypted": "0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_id": "1",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.vdo": "0"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             },
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "vg_name": "ceph_vg1"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         }
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     ],
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     "2": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "devices": [
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "/dev/loop5"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             ],
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_name": "ceph_lv2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_size": "21470642176",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "name": "ceph_lv2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "tags": {
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.cluster_name": "ceph",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.crush_device_class": "",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.encrypted": "0",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osd_id": "2",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:                 "ceph.vdo": "0"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             },
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "type": "block",
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:             "vg_name": "ceph_vg2"
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:         }
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]:     ]
Oct 02 20:24:09 compute-0 unruffled_swirles[487361]: }
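The JSON above is the ceph-volume lvm list result: one top-level key per OSD id, each holding an entry for the backing logical volume, with the authoritative bindings kept in the ceph.* lv_tags (cluster fsid, OSD fsid, block device). A minimal sketch that flattens it into an OSD table, assuming the document has been parsed into lvm_report (e.g. as in the snippet further above):

    # Flatten the lvm list JSON into {osd_id: (lv_path, osd_fsid, device)}.
    osds = {}
    for osd_id, entries in lvm_report.items():
        for entry in entries:
            tags = entry["tags"]
            osds[int(osd_id)] = (entry["lv_path"],
                                 tags["ceph.osd_fsid"],
                                 entry["devices"][0])  # e.g. /dev/loop3
    # For the log above this yields OSDs 0-2 on ceph_vg0/1/2, each backed by a
    # single loop device, which suggests a test/dev deployment.
    for osd_id in sorted(osds):
        print(osd_id, *osds[osd_id])
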
Oct 02 20:24:09 compute-0 systemd[1]: libpod-358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2.scope: Deactivated successfully.
Oct 02 20:24:09 compute-0 podman[487345]: 2025-10-02 20:24:09.413544669 +0000 UTC m=+1.560953654 container died 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-863bceb455e6bdef534c58e20c593c62273e92583cfc4350f32bf9aaeca27ebe-merged.mount: Deactivated successfully.
Oct 02 20:24:09 compute-0 podman[487345]: 2025-10-02 20:24:09.616506564 +0000 UTC m=+1.763915549 container remove 358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:24:09 compute-0 systemd[1]: libpod-conmon-358de2323be101be9fe97301508e1de104de6c4b6afbd1fa4be4d17be0defaa2.scope: Deactivated successfully.
Oct 02 20:24:09 compute-0 sudo[487241]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:09 compute-0 sudo[487398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:09 compute-0 sudo[487398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:09 compute-0 sudo[487398]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:09 compute-0 sudo[487429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:24:09 compute-0 sudo[487429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:09 compute-0 sudo[487429]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:09 compute-0 podman[487422]: 2025-10-02 20:24:09.967005092 +0000 UTC m=+0.142419612 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
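The multipathd line above is a podman healthcheck event: podman periodically executes the configured test command ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd per the config_data in the same line) inside the container and records the result; health_status=healthy with health_failing_streak=0 means the probe has not been failing. The same probe can be triggered by hand with "podman healthcheck run multipathd".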
Oct 02 20:24:10 compute-0 sudo[487466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:10 compute-0 sudo[487466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:10 compute-0 sudo[487466]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:10 compute-0 sudo[487491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:24:10 compute-0 sudo[487491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:10 compute-0 nova_compute[355794]: 2025-10-02 20:24:10.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:10 compute-0 podman[487556]: 2025-10-02 20:24:10.77886286 +0000 UTC m=+0.045231637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:10 compute-0 ceph-mon[191910]: pgmap v2388: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:11 compute-0 podman[487556]: 2025-10-02 20:24:11.428639515 +0000 UTC m=+0.695008202 container create c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 20:24:12 compute-0 systemd[1]: Started libpod-conmon-c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24.scope.
Oct 02 20:24:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:12 compute-0 podman[487556]: 2025-10-02 20:24:12.223248325 +0000 UTC m=+1.489617102 container init c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 02 20:24:12 compute-0 podman[487556]: 2025-10-02 20:24:12.244004734 +0000 UTC m=+1.510373451 container start c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:24:12 compute-0 podman[487556]: 2025-10-02 20:24:12.251401796 +0000 UTC m=+1.517770483 container attach c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:24:12 compute-0 practical_jepsen[487595]: 167 167
Oct 02 20:24:12 compute-0 systemd[1]: libpod-c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24.scope: Deactivated successfully.
Oct 02 20:24:12 compute-0 conmon[487595]: conmon c1e5f21c7aeca4a16577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24.scope/container/memory.events
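The conmon warning about /sys/fs/cgroup/.../memory.events is almost certainly benign here: practical_jepsen exits immediately after printing the uid/gid pair above, so its cgroup can already be torn down by the time conmon tries to read the memory.events file.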
Oct 02 20:24:12 compute-0 podman[487556]: 2025-10-02 20:24:12.265345449 +0000 UTC m=+1.531714196 container died c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 02 20:24:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a303b11f4fd8e8d23e18a8cdb77ed628ae3edd2738e9523bd7d6afd170ddd18c-merged.mount: Deactivated successfully.
Oct 02 20:24:12 compute-0 podman[487556]: 2025-10-02 20:24:12.347648307 +0000 UTC m=+1.614016994 container remove c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:24:12 compute-0 systemd[1]: libpod-conmon-c1e5f21c7aeca4a16577b7c48058057b75f0b3c32743c5ab561a21c85a39ce24.scope: Deactivated successfully.
Oct 02 20:24:12 compute-0 podman[487633]: 2025-10-02 20:24:12.63330815 +0000 UTC m=+0.091469588 container create dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 20:24:12 compute-0 podman[487633]: 2025-10-02 20:24:12.592326205 +0000 UTC m=+0.050487673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:24:12 compute-0 systemd[1]: Started libpod-conmon-dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555.scope.
Oct 02 20:24:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8f254e294fc7070dc138a78b1b41c39fc6af22d51b7835cba792196c26a9ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8f254e294fc7070dc138a78b1b41c39fc6af22d51b7835cba792196c26a9ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8f254e294fc7070dc138a78b1b41c39fc6af22d51b7835cba792196c26a9ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8f254e294fc7070dc138a78b1b41c39fc6af22d51b7835cba792196c26a9ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:24:12 compute-0 podman[487633]: 2025-10-02 20:24:12.822738412 +0000 UTC m=+0.280899890 container init dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 20:24:12 compute-0 podman[487633]: 2025-10-02 20:24:12.849660472 +0000 UTC m=+0.307821910 container start dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:24:12 compute-0 podman[487633]: 2025-10-02 20:24:12.856329825 +0000 UTC m=+0.314491263 container attach dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:24:13 compute-0 nova_compute[355794]: 2025-10-02 20:24:13.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:13 compute-0 ceph-mon[191910]: pgmap v2389: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:24:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
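The pg_autoscaler arithmetic in the block above is reproducible from the logged numbers: each pool's "pg target" is its capacity ratio times its bias times the cluster's PG budget, and the budget works out to 300 for every line, which is consistent with the default mon_target_pg_per_osd=100 across the 3 OSDs (an inference from the numbers; the budget itself is not logged). The target is then quantized to a power of two and clamped by per-pool minimums before being compared with the current pg_num. A quick check:

    # Reproduce two of the pg_autoscaler lines above. PG_BUDGET = 300 is an
    # inference (mon_target_pg_per_osd=100 * 3 OSDs); it is not logged directly.
    PG_BUDGET = 300

    pools = {
        # name: (capacity ratio from the log, bias from the log)
        "vms":                (0.0005513950275118838, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
        # vms                -> 0.16541850825356514 (log: ...513, fp rounding)
        # cephfs.cephfs.meta -> 0.0006104707950771635 (matches the log exactly)
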
Oct 02 20:24:13 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]: {
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_id": 1,
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "type": "bluestore"
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     },
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_id": 2,
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "type": "bluestore"
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     },
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_id": 0,
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:         "type": "bluestore"
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]:     }
Oct 02 20:24:13 compute-0 hopeful_mayer[487657]: }
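The hopeful_mayer output is the ceph-volume raw list result requested in the sudo line above: the same three OSDs, discovered from the BlueStore labels on the block devices and keyed by osd_uuid rather than by OSD id; the uuids match the ceph.osd_fsid tags in the lvm listing earlier. A small cross-check, assuming both JSON documents are held in lvm_report and raw_report:

    # Cross-check: every OSD found by `lvm list` should appear in `raw list`
    # under its osd_fsid, with a matching osd_id and a bluestore type.
    for osd_id, entries in lvm_report.items():
        fsid = entries[0]["tags"]["ceph.osd_fsid"]
        raw = raw_report[fsid]
        assert raw["osd_id"] == int(osd_id)
        assert raw["type"] == "bluestore"
        print(f"osd.{osd_id}: {fsid} on {raw['device']}")
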
Oct 02 20:24:13 compute-0 systemd[1]: libpod-dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555.scope: Deactivated successfully.
Oct 02 20:24:13 compute-0 systemd[1]: libpod-dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555.scope: Consumed 1.000s CPU time.
Oct 02 20:24:13 compute-0 podman[487633]: 2025-10-02 20:24:13.895985103 +0000 UTC m=+1.354146541 container died dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 20:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d8f254e294fc7070dc138a78b1b41c39fc6af22d51b7835cba792196c26a9ae-merged.mount: Deactivated successfully.
Oct 02 20:24:14 compute-0 podman[487633]: 2025-10-02 20:24:14.263856322 +0000 UTC m=+1.722017740 container remove dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mayer, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:24:14 compute-0 systemd[1]: libpod-conmon-dda098a274e37df0d747e850204da649a501b6d8e2075cd1a219b4ef9d359555.scope: Deactivated successfully.
Oct 02 20:24:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:14 compute-0 sudo[487491]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:24:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:24:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 85d0ba43-682e-424e-b6ed-51978d8f5830 does not exist
Oct 02 20:24:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev b55e2289-4cea-44e8-89d1-b0739dd620dd does not exist
Oct 02 20:24:14 compute-0 sudo[488008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:24:14 compute-0 sudo[488008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:14 compute-0 sudo[488008]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:14 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 20:24:14 compute-0 sudo[488045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:24:14 compute-0 sudo[488045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:24:14 compute-0 sudo[488045]: pam_unix(sudo:session): session closed for user root
Oct 02 20:24:15 compute-0 nova_compute[355794]: 2025-10-02 20:24:15.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:15 compute-0 ceph-mon[191910]: pgmap v2390: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:24:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:16 compute-0 ceph-mon[191910]: pgmap v2391: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:17 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 20:24:17 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 20:24:18 compute-0 nova_compute[355794]: 2025-10-02 20:24:18.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:18 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 20:24:18 compute-0 systemd[1]: Started Hostname Service.
Oct 02 20:24:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:18 compute-0 ceph-mon[191910]: pgmap v2392: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:24:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401759649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:24:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:24:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401759649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:24:20 compute-0 nova_compute[355794]: 2025-10-02 20:24:20.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3401759649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:24:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3401759649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:24:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:20 compute-0 podman[488200]: 2025-10-02 20:24:20.734175363 +0000 UTC m=+0.146655962 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:24:20 compute-0 podman[488201]: 2025-10-02 20:24:20.738251779 +0000 UTC m=+0.148177881 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Oct 02 20:24:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:21 compute-0 ceph-mon[191910]: pgmap v2393: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:21 compute-0 nova_compute[355794]: 2025-10-02 20:24:21.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:21 compute-0 nova_compute[355794]: 2025-10-02 20:24:21.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:24:21 compute-0 nova_compute[355794]: 2025-10-02 20:24:21.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:24:22 compute-0 nova_compute[355794]: 2025-10-02 20:24:22.296 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:24:22 compute-0 nova_compute[355794]: 2025-10-02 20:24:22.297 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:24:22 compute-0 nova_compute[355794]: 2025-10-02 20:24:22.298 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:24:22 compute-0 nova_compute[355794]: 2025-10-02 20:24:22.298 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:24:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:23 compute-0 ceph-mon[191910]: pgmap v2394: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.866 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.892 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.893 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.894 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.894 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.895 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.896 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.897 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.898 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.926 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.927 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.928 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.928 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:24:23 compute-0 nova_compute[355794]: 2025-10-02 20:24:23.929 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:24:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:24:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392892536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:24:24 compute-0 nova_compute[355794]: 2025-10-02 20:24:24.426 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:24:24 compute-0 nova_compute[355794]: 2025-10-02 20:24:24.514 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:24:24 compute-0 nova_compute[355794]: 2025-10-02 20:24:24.516 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:24:24 compute-0 nova_compute[355794]: 2025-10-02 20:24:24.516 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.013 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.016 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.017 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.018 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.127 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.128 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.129 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.200 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:25 compute-0 ceph-mon[191910]: pgmap v2395: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/392892536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:24:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:24:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219228981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.733 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.747 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.773 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.776 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:24:25 compute-0 nova_compute[355794]: 2025-10-02 20:24:25.776 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:24:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2219228981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:24:26 compute-0 ceph-mon[191910]: pgmap v2396: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:27 compute-0 podman[488284]: 2025-10-02 20:24:27.196072585 +0000 UTC m=+0.120992495 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30)
Oct 02 20:24:27 compute-0 podman[488283]: 2025-10-02 20:24:27.223455537 +0000 UTC m=+0.148409358 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 20:24:27 compute-0 nova_compute[355794]: 2025-10-02 20:24:27.772 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:28 compute-0 nova_compute[355794]: 2025-10-02 20:24:28.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:28 compute-0 ceph-mon[191910]: pgmap v2397: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:28 compute-0 nova_compute[355794]: 2025-10-02 20:24:28.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:29 compute-0 nova_compute[355794]: 2025-10-02 20:24:29.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:29 compute-0 podman[488319]: 2025-10-02 20:24:29.670562349 +0000 UTC m=+0.100334208 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:24:29 compute-0 podman[488322]: 2025-10-02 20:24:29.688511526 +0000 UTC m=+0.101343655 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:24:29 compute-0 podman[488318]: 2025-10-02 20:24:29.692330435 +0000 UTC m=+0.118434589 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 20:24:29 compute-0 podman[488320]: 2025-10-02 20:24:29.712305744 +0000 UTC m=+0.118489340 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Oct 02 20:24:29 compute-0 podman[157186]: time="2025-10-02T20:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:24:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:24:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9108 "" "Go-http-client/1.1"
Oct 02 20:24:29 compute-0 podman[488321]: 2025-10-02 20:24:29.788282699 +0000 UTC m=+0.189178978 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 20:24:30 compute-0 nova_compute[355794]: 2025-10-02 20:24:30.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:31 compute-0 ceph-mon[191910]: pgmap v2398: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: ERROR   20:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: ERROR   20:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: ERROR   20:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: ERROR   20:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:24:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:24:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:24:32.345 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:24:32.346 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:24:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:24:32.347 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:24:32 compute-0 ceph-mon[191910]: pgmap v2399: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:33 compute-0 nova_compute[355794]: 2025-10-02 20:24:33.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:33 compute-0 nova_compute[355794]: 2025-10-02 20:24:33.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:24:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:24:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:35 compute-0 nova_compute[355794]: 2025-10-02 20:24:35.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:35 compute-0 ceph-mon[191910]: pgmap v2400: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:37 compute-0 ceph-mon[191910]: pgmap v2401: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:38 compute-0 nova_compute[355794]: 2025-10-02 20:24:38.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:38 compute-0 ceph-mon[191910]: pgmap v2402: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:40 compute-0 nova_compute[355794]: 2025-10-02 20:24:40.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:40 compute-0 podman[488422]: 2025-10-02 20:24:40.722692366 +0000 UTC m=+0.136756185 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:24:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.883323) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680883349, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 949, "num_deletes": 250, "total_data_size": 1141432, "memory_usage": 1165816, "flush_reason": "Manual Compaction"}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680894842, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 748402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48662, "largest_seqno": 49610, "table_properties": {"data_size": 744331, "index_size": 1595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11626, "raw_average_key_size": 21, "raw_value_size": 735365, "raw_average_value_size": 1359, "num_data_blocks": 71, "num_entries": 541, "num_filter_entries": 541, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436611, "oldest_key_time": 1759436611, "file_creation_time": 1759436680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 11570 microseconds, and 3877 cpu microseconds.
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.894888) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 748402 bytes OK
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.894910) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.898753) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.898778) EVENT_LOG_v1 {"time_micros": 1759436680898771, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.898798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1136651, prev total WAL file size 1136651, number of live WAL files 2.
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.901314) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(730KB)], [116(9179KB)]
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680901460, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10148390, "oldest_snapshot_seqno": -1}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 6344 keys, 7229936 bytes, temperature: kUnknown
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680968645, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 7229936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7192049, "index_size": 20945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 166135, "raw_average_key_size": 26, "raw_value_size": 7081716, "raw_average_value_size": 1116, "num_data_blocks": 826, "num_entries": 6344, "num_filter_entries": 6344, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.969286) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 7229936 bytes
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.974627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.1 rd, 106.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.0 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(23.2) write-amplify(9.7) OK, records in: 6827, records dropped: 483 output_compression: NoCompression
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.974648) EVENT_LOG_v1 {"time_micros": 1759436680974639, "job": 70, "event": "compaction_finished", "compaction_time_micros": 67622, "compaction_time_cpu_micros": 28239, "output_level": 6, "num_output_files": 1, "total_output_size": 7229936, "num_input_records": 6827, "num_output_records": 6344, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680974947, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436680979308, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.900064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.979500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.979504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.979506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.979507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:40 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:24:40.979508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:24:41 compute-0 ceph-mon[191910]: pgmap v2403: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:42 compute-0 ceph-mon[191910]: pgmap v2404: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
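The pgmap lines repeated throughout this window share a fixed shape, so a small scraper can watch for PGs leaving active+clean. A minimal sketch (the regex is fitted to the lines shown here and is an assumption, not a Ceph interface):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v2404: 321 pgs: 321 active+clean; 118 MiB data, "
            "313 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    if m:
        all_clean = m.group("states").strip() == f"{m.group('pgs')} active+clean"
        print("pgmap", m.group("version"), "all clean:", all_clean)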
Oct 02 20:24:43 compute-0 nova_compute[355794]: 2025-10-02 20:24:43.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:44 compute-0 ceph-mon[191910]: pgmap v2405: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:45 compute-0 nova_compute[355794]: 2025-10-02 20:24:45.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
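The _set_new_cache_sizes line shows the monitor's priority-cache tuner splitting its memory target among incremental osdmaps, full osdmaps, and the RocksDB KV cache. A quick arithmetic check on the numbers above (my reading of the fields, not an authoritative breakdown):

    cache_size = 1_020_054_731   # overall mon cache target, bytes
    inc_alloc  = 348_127_232     # incremental osdmap cache
    full_alloc = 348_127_232     # full osdmap cache
    kv_alloc   = 318_767_104     # RocksDB block cache

    total = inc_alloc + full_alloc + kv_alloc
    print(f"allocated {total:,} of {cache_size:,} bytes "
          f"({total / cache_size:.1%}), slack {cache_size - total:,}")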
Oct 02 20:24:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:46 compute-0 ceph-mon[191910]: pgmap v2406: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:48 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 20:24:48 compute-0 nova_compute[355794]: 2025-10-02 20:24:48.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:48 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 20:24:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:48 compute-0 ceph-mon[191910]: pgmap v2407: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:50 compute-0 nova_compute[355794]: 2025-10-02 20:24:50.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:50 compute-0 ceph-mon[191910]: pgmap v2408: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:51 compute-0 podman[488447]: 2025-10-02 20:24:51.669730351 +0000 UTC m=+0.092379561 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:24:51 compute-0 podman[488448]: 2025-10-02 20:24:51.695164852 +0000 UTC m=+0.113640004 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
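Each podman health_status event above packs the container name, health state, and failing streak into one long key=value record. Pulling those three fields out is enough to alert on unhealthy containers; a minimal sketch (the regex is fitted to these journal lines and is an assumption, not a podman API):

    import re

    HEALTH_RE = re.compile(
        r"container health_status .*?"
        r"name=(?P<name>[^,)]+).*?"
        r"health_status=(?P<status>[^,)]+).*?"
        r"health_failing_streak=(?P<streak>\d+)")

    def parse_health(line):
        """Return (name, status, failing_streak) or None for other lines."""
        m = HEALTH_RE.search(line)
        return (m.group("name"), m.group("status"), int(m.group("streak"))) if m else None

    line = ("container health_status 308e... (image=quay.io/..., "
            "name=podman_exporter, health_status=healthy, "
            "health_failing_streak=0, ...)")
    print(parse_health(line))  # ('podman_exporter', 'healthy', 0)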
Oct 02 20:24:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:52 compute-0 ceph-mon[191910]: pgmap v2409: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:53 compute-0 nova_compute[355794]: 2025-10-02 20:24:53.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:54 compute-0 ceph-mon[191910]: pgmap v2410: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:55 compute-0 nova_compute[355794]: 2025-10-02 20:24:55.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:24:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:56 compute-0 ceph-mon[191910]: pgmap v2411: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:57 compute-0 podman[488488]: 2025-10-02 20:24:57.700985002 +0000 UTC m=+0.118231413 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:24:57 compute-0 podman[488489]: 2025-10-02 20:24:57.700861339 +0000 UTC m=+0.118736756 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 20:24:58 compute-0 nova_compute[355794]: 2025-10-02 20:24:58.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:24:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:58 compute-0 ceph-mon[191910]: pgmap v2412: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:24:59 compute-0 podman[157186]: time="2025-10-02T20:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:24:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:24:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9120 "" "Go-http-client/1.1"
Oct 02 20:25:00 compute-0 nova_compute[355794]: 2025-10-02 20:25:00.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:00 compute-0 ceph-mon[191910]: pgmap v2413: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:00 compute-0 podman[488532]: 2025-10-02 20:25:00.516359864 +0000 UTC m=+0.096870748 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:25:00 compute-0 podman[488530]: 2025-10-02 20:25:00.531916528 +0000 UTC m=+0.114131717 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 02 20:25:00 compute-0 podman[488528]: 2025-10-02 20:25:00.553195241 +0000 UTC m=+0.147927415 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:25:00 compute-0 podman[488529]: 2025-10-02 20:25:00.557038001 +0000 UTC m=+0.147456133 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 20:25:00 compute-0 podman[488531]: 2025-10-02 20:25:00.596755203 +0000 UTC m=+0.176300102 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:01 compute-0 openstack_network_exporter[372736]: ERROR   20:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:25:01 compute-0 openstack_network_exporter[372736]: ERROR   20:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:25:01 compute-0 openstack_network_exporter[372736]: ERROR   20:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:25:01 compute-0 openstack_network_exporter[372736]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:25:01 compute-0 openstack_network_exporter[372736]: ERROR   20:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
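These exporter errors fit the node's role: ovn-northd does not run on a compute node, and the exporter evidently cannot reach the OVS db server's control socket from inside its container, so the collectors that dial those .ctl files fail. A quick way to check which control sockets actually exist on the host (the paths are assumptions based on common OVS/OVN packaging; containerized deployments may place them elsewhere):

    import glob

    PATTERNS = {
        "ovn-northd":     "/var/run/ovn/ovn-northd.*.ctl",
        "ovn-controller": "/var/run/ovn/ovn-controller.*.ctl",
        "ovsdb-server":   "/var/run/openvswitch/ovsdb-server.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        hits = glob.glob(pattern)
        print(daemon, "->", ", ".join(hits) if hits else "no control socket found")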
Oct 02 20:25:01 compute-0 sudo[479681]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:01 compute-0 sshd-session[479680]: Received disconnect from 192.168.122.10 port 51088:11: disconnected by user
Oct 02 20:25:01 compute-0 sshd-session[479680]: Disconnected from user zuul 192.168.122.10 port 51088
Oct 02 20:25:01 compute-0 sshd-session[479677]: pam_unix(sshd:session): session closed for user zuul
Oct 02 20:25:01 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Oct 02 20:25:01 compute-0 systemd[1]: session-65.scope: Consumed 3min 36.181s CPU time, 994.0M memory peak, read 510.6M from disk, written 403.8M to disk.
Oct 02 20:25:01 compute-0 systemd-logind[793]: Session 65 logged out. Waiting for processes to exit.
Oct 02 20:25:01 compute-0 systemd-logind[793]: Removed session 65.
Oct 02 20:25:01 compute-0 sshd-session[488627]: Accepted publickey for zuul from 192.168.122.10 port 33224 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 20:25:01 compute-0 systemd-logind[793]: New session 66 of user zuul.
Oct 02 20:25:01 compute-0 systemd[1]: Started Session 66 of User zuul.
Oct 02 20:25:01 compute-0 sshd-session[488627]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 20:25:02 compute-0 sudo[488631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-02-xftdliu.tar.xz
Oct 02 20:25:02 compute-0 sudo[488631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:25:02 compute-0 sudo[488631]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:02 compute-0 sshd-session[488630]: Received disconnect from 192.168.122.10 port 33224:11: disconnected by user
Oct 02 20:25:02 compute-0 sshd-session[488630]: Disconnected from user zuul 192.168.122.10 port 33224
Oct 02 20:25:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:02 compute-0 sshd-session[488627]: pam_unix(sshd:session): session closed for user zuul
Oct 02 20:25:02 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Oct 02 20:25:02 compute-0 systemd-logind[793]: Session 66 logged out. Waiting for processes to exit.
Oct 02 20:25:02 compute-0 systemd-logind[793]: Removed session 66.
Oct 02 20:25:02 compute-0 sshd-session[488656]: Accepted publickey for zuul from 192.168.122.10 port 33228 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 20:25:02 compute-0 systemd-logind[793]: New session 67 of user zuul.
Oct 02 20:25:02 compute-0 systemd[1]: Started Session 67 of User zuul.
Oct 02 20:25:02 compute-0 sshd-session[488656]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 20:25:02 compute-0 ceph-mon[191910]: pgmap v2414: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:02 compute-0 sudo[488660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 20:25:02 compute-0 sudo[488660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:25:02 compute-0 sudo[488660]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:02 compute-0 sshd-session[488659]: Received disconnect from 192.168.122.10 port 33228:11: disconnected by user
Oct 02 20:25:02 compute-0 sshd-session[488659]: Disconnected from user zuul 192.168.122.10 port 33228
Oct 02 20:25:02 compute-0 sshd-session[488656]: pam_unix(sshd:session): session closed for user zuul
Oct 02 20:25:02 compute-0 systemd[1]: session-67.scope: Deactivated successfully.
Oct 02 20:25:02 compute-0 systemd-logind[793]: Session 67 logged out. Waiting for processes to exit.
Oct 02 20:25:02 compute-0 systemd-logind[793]: Removed session 67.
Oct 02 20:25:03 compute-0 nova_compute[355794]: 2025-10-02 20:25:03.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:25:03
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', '.mgr', 'vms']
Oct 02 20:25:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
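For the balancer, "prepared 0/10 changes" means the upmap optimizer considered up to 10 candidate changes and found nothing to improve: with all 321 PGs active+clean the misplaced ratio is 0, well under the 0.05 ceiling it logs. A toy sketch of that gate (illustrative logic only, not Ceph's implementation):

    max_misplaced = 0.05          # "Mode upmap, max misplaced 0.050000"
    misplaced_pgs, total_pgs = 0, 321

    if misplaced_pgs / total_pgs <= max_misplaced:
        print("under threshold; optimizer may propose changes (here: 0/10)")
    else:
        print("too much data misplaced; balancer defers new changes")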
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.312 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.313 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
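The two lines above record a capacity decision: the [pollsters] source carries more pollsters than the single worker thread configured to run them, so polling is effectively serialized. A sketch of that check and the resulting serialized execution (names are illustrative, not ceilometer's configuration keys):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.device.read.requests", "disk.device.usage",
                 "disk.device.write.bytes", "disk.device.write.latency"]
    worker_threads = 1   # "[1] threads" in the log line above

    if len(pollsters) > worker_threads:
        print(f"{len(pollsters)} pollsters > {worker_threads} thread(s); "
              "expect the polling cycle to take longer")

    # With a single worker, tasks run one after another.
    with ThreadPoolExecutor(max_workers=worker_threads) as pool:
        for name in pollsters:
            pool.submit(print, "polling", name)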
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343346b020>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.323 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:25:04.324602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.399 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.400 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.400 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
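The per-device samples above (volumes 840, 173, 109 for disk.device.read.requests) come from libvirt block statistics for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77. The same counters can be read directly with the libvirt Python bindings; a minimal sketch (the 'vda' device target is an assumption about the guest's disk layout):

    import libvirt  # libvirt-python; read-only access suffices for stats

    UUID = "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77"  # from the discovery line above

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString(UUID)
    # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print(f"read reqs={rd_req}, write bytes={wr_bytes}")
    conn.close()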
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.402 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:25:04.401961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
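The coordination check above is a no-op on this node: no polling source requires it, so the hash rings stay [None]. When coordination is enabled, ceilometer partitions resources across agents with a tooz hash ring so that exactly one agent polls each instance. A toy illustration of that idea, not tooz's actual ring (a real ring places many replicated points per member so adding an agent only moves a small share of resources):

    import hashlib

    def ring_member(resource_id, members):
        # Deterministically map a resource to one agent; every agent
        # computes the same answer, so only that agent polls it.
        digest = hashlib.md5(resource_id.encode()).hexdigest()
        return sorted(members)[int(digest, 16) % len(members)]

    agents = ['compute-0', 'compute-1']   # hypothetical member list
    inst = 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77'
    print(ring_member(inst, agents))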
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.437 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.437 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.437 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.438 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:25:04.437244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.440 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.440 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.441 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.442 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
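Ceilometer's measurement tables describe disk.device.*.latency as cumulative nanosecond counters, which matches the large per-device totals above. If that holds, a rate between two successive polls can be derived as in this sketch; the previous reading and the 300 s interval are assumptions for illustration:

    def per_second_rate(prev, curr, interval_s):
        # Turn two readings of a cumulative counter into a rate,
        # guarding against a reset (e.g. after an instance reboot).
        delta = curr - prev
        if delta < 0:
            delta = curr
        return delta / interval_s

    # e.g. two polls 300 s apart:
    # per_second_rate(7285327854, 7290000000, 300) -> ~15574 ns of
    # write-latency time accumulated per wall-clock second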
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:25:04.440318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.444 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:25:04.444973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.484 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
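The power.state volume of 1 above indicates a running instance. The values follow the nova/libvirt power-state enumeration; the mapping below is reproduced from memory and should be treated as an assumption rather than an authoritative table:

    # Assumed nova.compute.power_state-style values.
    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATE[1])  # the sample above -> 'RUNNING'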
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.486 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.488 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.489 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.489 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:25:04.486129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:25:04.490216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
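The "Updated heartbeat for ..." lines are written by a separate worker (the pid-12 column, versus 14 for the pollsters), which is why they interleave slightly out of timestamp order with the polling lines. Since each carries an ISO timestamp, staleness per pollster can be checked from the journal text; the 600 s threshold and tz-naive timestamps below are assumptions:

    import re
    from datetime import datetime

    HB_RE = re.compile(r'Updated heartbeat for (?P<meter>[\w.]+) '
                       r'\((?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)\)')

    def stale_pollsters(lines, now, max_age_s=600):
        # Keep the newest heartbeat per meter, then report laggards.
        last = {}
        for line in lines:
            m = HB_RE.search(line)
            if m:
                last[m['meter']] = datetime.fromisoformat(m['ts'])
        return sorted(mtr for mtr, ts in last.items()
                      if (now - ts).total_seconds() > max_age_s)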
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.495 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.496 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:25:04.496767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceph-mon[191910]: pgmap v2415: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:25:04.498347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:25:04.499166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:25:04.500112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:25:04.501051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:25:04.502081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:25:04.503051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
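memory.usage is reported in MB, so the 48.8515625 above is about 49 MB (exactly 50024 KiB) of guest memory in use. Expressing that as a fraction of the flavor requires the flavor's RAM size, which does not appear in this log; the 512 MB below is purely hypothetical:

    used_mb = 48.8515625   # from the sample line above
    flavor_mb = 512        # hypothetical flavor RAM, not in the log
    print(f'{used_mb * 1024:.0f} KiB used '
          f'({100 * used_mb / flavor_mb:.1f}% of a {flavor_mb} MB flavor)')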
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:25:04.504385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:25:04.505284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:25:04.507084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:25:04.507950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:25:04.508918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:25:04.509818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
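The disk.device.capacity, .allocation and .usage volumes above are bytes: 1073741824 is exactly 1 GiB (two such devices), and 485376 bytes is 474 KiB, plausibly a config drive, though the log itself does not identify the device. A small formatter for eyeballing such values:

    def human(n):
        # Render a byte count in IEC units, e.g. for the samples above.
        for unit in ('B', 'KiB', 'MiB', 'GiB', 'TiB'):
            if n < 1024:
                return f'{n:g} {unit}'
            n /= 1024
        return f'{n:g} PiB'

    print(human(1073741824))  # -> 1 GiB
    print(human(485376))      # -> 474 KiB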
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:25:04.511174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 72240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
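The cpu meter is cumulative guest CPU time in nanoseconds, so the 72240000000 above means the instance has consumed 72.24 s of CPU since boot. Utilisation over an interval comes from the delta between two polls; the previous reading, the 300 s interval and vcpus=1 below are all assumptions for illustration:

    def cpu_util_pct(prev_ns, curr_ns, interval_s, vcpus=1):
        # 100 * CPU-seconds consumed / CPU-seconds available
        return 100.0 * (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    print(cpu_util_pct(72_210_000_000, 72_240_000_000, 300))  # -> 10.0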
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:25:04.512107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:25:04.512985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:25:04.513841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.514 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:25:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:25:04.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
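Each polling cycle closes with one "Finished processing pollster [...]" line per meter, so the set of active meters can be recovered from the journal itself; the burst above closes out 26 of them. A small parsing sketch over lines shaped like the ones above (the regex is an assumption about this journal format, not a ceilometer interface):

    import re
    from collections import Counter

    FINISHED = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    def count_pollsters(lines):
        """Tally completed pollsters per polling cycle from journal lines."""
        tally = Counter()
        for line in lines:
            m = FINISHED.search(line)
            if m:
                tally[m.group(1)] += 1
        return tally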
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:25:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
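The rbd_support module's two handlers (MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler) each reload their schedules for the vms, volumes, backups and images pools, which is why every pool appears twice above; the empty start_after= means no schedule entries were found. The same information is reachable from the rbd CLI; a sketch, assuming the reef-era subcommands (treat the exact flags as an assumption):

    import subprocess

    def mirror_schedules(pool):
        """List what MirrorSnapshotScheduleHandler would load for a pool."""
        return subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            capture_output=True, text=True, check=True).stdout

    for pool in ("vms", "volumes", "backups", "images"):
        print(pool, mirror_schedules(pool).strip() or "(no schedules)")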
Oct 02 20:25:05 compute-0 nova_compute[355794]: 2025-10-02 20:25:05.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:07 compute-0 ceph-mon[191910]: pgmap v2416: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:08 compute-0 nova_compute[355794]: 2025-10-02 20:25:08.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:08 compute-0 ceph-mon[191910]: pgmap v2417: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:10 compute-0 nova_compute[355794]: 2025-10-02 20:25:10.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:11 compute-0 ceph-mon[191910]: pgmap v2418: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:11 compute-0 podman[488686]: 2025-10-02 20:25:11.689847634 +0000 UTC m=+0.115699118 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
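The multipathd health_status event above is podman's periodic healthcheck firing: the configured test (/openstack/healthcheck) ran inside the container, came back healthy, and the failing streak stayed at zero. The same fields are visible in the container's inspect output; a minimal sketch, assuming podman's Docker-compatible State.Health schema:

    import json
    import subprocess

    def container_health(name):
        """Return (status, failing_streak) for a container with a healthcheck."""
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        health = json.loads(raw)[0]["State"]["Health"]   # absent if no healthcheck
        return health["Status"], health["FailingStreak"]

    print(container_health("multipathd"))   # e.g. ('healthy', 0)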
Oct 02 20:25:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:12 compute-0 ceph-mon[191910]: pgmap v2419: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:13 compute-0 nova_compute[355794]: 2025-10-02 20:25:13.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:25:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
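The pg_autoscaler arithmetic above is reproducible from the logged numbers: raw pg target = capacity ratio * bias * a cluster pg budget that works out to 300 here (e.g. 'vms': 0.0005513950275118838 * 1.0 * 300 = 0.16541850825356513, exactly the logged target), rounded up to a power of two and clamped at a per-pool floor, which is why near-zero targets still read "quantized to 32" (the '.mgr' pool floors at 1, and cephfs.cephfs.meta evidently at 16). A rough reimplementation under those inferred constants (the budget and floors are read off this log, not taken from the autoscaler's source):

    def pg_target(capacity_ratio, bias, pg_budget=300, floor=32):
        """Approximate pg_autoscaler: scale, round up to a power of two, clamp."""
        raw = capacity_ratio * bias * pg_budget
        pgs = 1
        while pgs < raw:          # smallest power of two >= raw
            pgs *= 2
        return max(pgs, floor)

    print(pg_target(0.0005513950275118838, 1.0))            # 32  ('vms')
    print(pg_target(7.185749983720779e-06, 1.0, floor=1))   # 1   ('.mgr')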
Oct 02 20:25:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:14 compute-0 sudo[488705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:14 compute-0 sudo[488705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:14 compute-0 sudo[488705]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:14 compute-0 sudo[488730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:25:14 compute-0 sudo[488730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:14 compute-0 sudo[488730]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:14 compute-0 sudo[488755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:14 compute-0 sudo[488755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:14 compute-0 sudo[488755]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:15 compute-0 sudo[488780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 20:25:15 compute-0 sudo[488780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:15 compute-0 nova_compute[355794]: 2025-10-02 20:25:15.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:15 compute-0 ceph-mon[191910]: pgmap v2420: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:15 compute-0 sudo[488780]: pam_unix(sudo:session): session closed for user root
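The sudo bursts above are cephadm's standard probe sequence from the mgr: /bin/true to confirm passwordless sudo, /bin/which python3 to locate an interpreter, then the copied cephadm binary itself (here check-host with an 895 s timeout). A local-process sketch of that three-step pattern (real cephadm drives it over SSH as the ceph-admin user; the function below is illustrative):

    import subprocess

    def probe_and_run(payload_args):
        """Mimic cephadm's probe: sudo true, find python3, run the payload."""
        subprocess.run(["sudo", "/bin/true"], check=True)        # sudo usable?
        py = subprocess.run(["sudo", "/bin/which", "python3"],
                            capture_output=True, text=True,
                            check=True).stdout.strip()
        return subprocess.run(["sudo", py, *payload_args], check=True)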
Oct 02 20:25:15 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:25:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:25:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:25:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:15 compute-0 sudo[488825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:15 compute-0 sudo[488825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:15 compute-0 sudo[488825]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:15 compute-0 sudo[488850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:25:15 compute-0 sudo[488850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:15 compute-0 sudo[488850]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:15 compute-0 sudo[488875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:15 compute-0 sudo[488875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:15 compute-0 sudo[488875]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:15 compute-0 sudo[488900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:25:15 compute-0 sudo[488900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:16 compute-0 ceph-mon[191910]: pgmap v2421: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:16 compute-0 sudo[488900]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 60ae1e55-64bb-4d75-9c1b-619946d084c5 does not exist
Oct 02 20:25:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c33aedf6-8a8c-4520-80e5-a80eb0a496d3 does not exist
Oct 02 20:25:16 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 9a00d49f-5ba7-4f0b-8ad6-46ef7edc86d2 does not exist
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:25:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:25:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
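Every command the mgr issues shows up twice in the mon log: once as handle_command and once on the audit channel with the dispatching entity. The same calls can be made from any authorized client with the ceph CLI; a short sketch of the two commands audited above (the subcommands appear verbatim in this log; the wrapper itself is illustrative):

    import subprocess

    def mon_cmd(*args):
        """Issue a ceph CLI command like the mgr-dispatched ones above."""
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf = mon_cmd("config", "generate-minimal-conf")
    admin_keyring = mon_cmd("auth", "get", "client.admin")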
Oct 02 20:25:16 compute-0 sudo[488955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:16 compute-0 sudo[488955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:16 compute-0 sudo[488955]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:16 compute-0 sudo[488980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:25:16 compute-0 sudo[488980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:16 compute-0 sudo[488980]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:17 compute-0 sudo[489005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:17 compute-0 sudo[489005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:17 compute-0 sudo[489005]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:17 compute-0 sudo[489030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:25:17 compute-0 sudo[489030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:25:17 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:25:17 compute-0 podman[489094]: 2025-10-02 20:25:17.853208148 +0000 UTC m=+0.099292021 container create e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 20:25:17 compute-0 podman[489094]: 2025-10-02 20:25:17.816904915 +0000 UTC m=+0.062988828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:17 compute-0 systemd[1]: Started libpod-conmon-e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1.scope.
Oct 02 20:25:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:18 compute-0 podman[489094]: 2025-10-02 20:25:18.030496305 +0000 UTC m=+0.276580228 container init e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:18 compute-0 podman[489094]: 2025-10-02 20:25:18.053969555 +0000 UTC m=+0.300053428 container start e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:25:18 compute-0 podman[489094]: 2025-10-02 20:25:18.062090046 +0000 UTC m=+0.308173969 container attach e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 20:25:18 compute-0 serene_keldysh[489111]: 167 167
Oct 02 20:25:18 compute-0 systemd[1]: libpod-e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1.scope: Deactivated successfully.
Oct 02 20:25:18 compute-0 podman[489094]: 2025-10-02 20:25:18.071551022 +0000 UTC m=+0.317634935 container died e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 20:25:18 compute-0 nova_compute[355794]: 2025-10-02 20:25:18.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-77f23a3f0703898b3761e86771cd4429c3cf4a89d0c35a4aef6091953e1d3448-merged.mount: Deactivated successfully.
Oct 02 20:25:18 compute-0 podman[489094]: 2025-10-02 20:25:18.170799311 +0000 UTC m=+0.416883184 container remove e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_keldysh, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:18 compute-0 systemd[1]: libpod-conmon-e44e2be2c1c0379e2331c19443c026c4ef5969b1db7aa00fa60a9c5e62a169b1.scope: Deactivated successfully.
Oct 02 20:25:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:18 compute-0 podman[489134]: 2025-10-02 20:25:18.416706342 +0000 UTC m=+0.071506750 container create 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:25:18 compute-0 podman[489134]: 2025-10-02 20:25:18.391560008 +0000 UTC m=+0.046360406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:18 compute-0 systemd[1]: Started libpod-conmon-0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4.scope.
Oct 02 20:25:18 compute-0 ceph-mon[191910]: pgmap v2422: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:18 compute-0 podman[489134]: 2025-10-02 20:25:18.57517676 +0000 UTC m=+0.229977158 container init 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 20:25:18 compute-0 podman[489134]: 2025-10-02 20:25:18.600689653 +0000 UTC m=+0.255490051 container start 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:25:18 compute-0 podman[489134]: 2025-10-02 20:25:18.608805544 +0000 UTC m=+0.263605922 container attach 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 20:25:19 compute-0 intelligent_banach[489150]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:25:19 compute-0 intelligent_banach[489150]: --> relative data size: 1.0
Oct 02 20:25:19 compute-0 intelligent_banach[489150]: --> All data devices are unavailable
Oct 02 20:25:20 compute-0 systemd[1]: libpod-0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4.scope: Deactivated successfully.
Oct 02 20:25:20 compute-0 podman[489134]: 2025-10-02 20:25:20.009726749 +0000 UTC m=+1.664527157 container died 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:25:20 compute-0 systemd[1]: libpod-0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4.scope: Consumed 1.338s CPU time.
Oct 02 20:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6562912c20f2a112f599bd108ca3a7b0e436e581ba2591e2611e2eb88cac5af6-merged.mount: Deactivated successfully.
Oct 02 20:25:20 compute-0 podman[489134]: 2025-10-02 20:25:20.12249929 +0000 UTC m=+1.777299668 container remove 0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:25:20 compute-0 systemd[1]: libpod-conmon-0a4ffc4b36fa9630462bbc0c0fbf815109ebac5722a94ea78e754f38d46fd9e4.scope: Deactivated successfully.
Oct 02 20:25:20 compute-0 sudo[489030]: pam_unix(sudo:session): session closed for user root
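The intelligent_banach container above was cephadm invoking ceph-volume lvm batch against the three pre-created LVs; its "All data devices are unavailable" output means ceph-volume filtered every candidate out (typically because they already carry OSDs), so the run was an idempotent no-op rather than a failure. A small classifier over that captured output (the string matching is an assumption about ceph-volume's chatter, not a stable interface):

    def classify_batch_output(lines):
        """Interpret ceph-volume 'lvm batch' output like the lines above."""
        if any("All data devices are unavailable" in ln for ln in lines):
            return "noop"        # candidates filtered out, e.g. already OSDs
        return "created"

    out = ["--> passed data devices: 0 physical, 3 LVM",
           "--> relative data size: 1.0",
           "--> All data devices are unavailable"]
    print(classify_batch_output(out))   # noop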
Oct 02 20:25:20 compute-0 nova_compute[355794]: 2025-10-02 20:25:20.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:20 compute-0 sudo[489191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:20 compute-0 sudo[489191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:20 compute-0 sudo[489191]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:20 compute-0 sudo[489216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:25:20 compute-0 sudo[489216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:20 compute-0 sudo[489216]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:25:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588049012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:25:20 compute-0 sudo[489241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:25:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588049012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:25:20 compute-0 sudo[489241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:20 compute-0 sudo[489241]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:20 compute-0 sudo[489266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:25:20 compute-0 sudo[489266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.284630799 +0000 UTC m=+0.099302142 container create fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.240329987 +0000 UTC m=+0.055001380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:21 compute-0 systemd[1]: Started libpod-conmon-fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1.scope.
Oct 02 20:25:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:21 compute-0 ceph-mon[191910]: pgmap v2423: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1588049012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:25:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1588049012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.440166891 +0000 UTC m=+0.254838284 container init fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.459864312 +0000 UTC m=+0.274535655 container start fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.467139621 +0000 UTC m=+0.281811104 container attach fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:25:21 compute-0 zealous_gagarin[489344]: 167 167
Oct 02 20:25:21 compute-0 systemd[1]: libpod-fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1.scope: Deactivated successfully.
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.47631202 +0000 UTC m=+0.290983393 container died fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-76ac4614e7ffb18810253189b822a7be81cba4c16cafdcd728dc5954f1d8b43d-merged.mount: Deactivated successfully.
Oct 02 20:25:21 compute-0 podman[489328]: 2025-10-02 20:25:21.56058619 +0000 UTC m=+0.375257543 container remove fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:21 compute-0 systemd[1]: libpod-conmon-fe12694a9c02b443f1768af1fa544ae734e921b78eb568a39b04c589e68248c1.scope: Deactivated successfully.
Oct 02 20:25:21 compute-0 podman[489367]: 2025-10-02 20:25:21.841480839 +0000 UTC m=+0.077460573 container create c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:25:21 compute-0 podman[489367]: 2025-10-02 20:25:21.806026818 +0000 UTC m=+0.042006622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:21 compute-0 systemd[1]: Started libpod-conmon-c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6.scope.
Oct 02 20:25:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6f0d37092e7a35893ada164e0678ca1d85184c5ca5e96123a30e9cd65c607f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6f0d37092e7a35893ada164e0678ca1d85184c5ca5e96123a30e9cd65c607f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6f0d37092e7a35893ada164e0678ca1d85184c5ca5e96123a30e9cd65c607f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6f0d37092e7a35893ada164e0678ca1d85184c5ca5e96123a30e9cd65c607f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:22 compute-0 podman[489367]: 2025-10-02 20:25:22.023055988 +0000 UTC m=+0.259035752 container init c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:25:22 compute-0 podman[489367]: 2025-10-02 20:25:22.046502837 +0000 UTC m=+0.282482561 container start c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:22 compute-0 podman[489367]: 2025-10-02 20:25:22.051778054 +0000 UTC m=+0.287757828 container attach c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:25:22 compute-0 podman[489381]: 2025-10-02 20:25:22.085889091 +0000 UTC m=+0.158014348 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:25:22 compute-0 podman[489384]: 2025-10-02 20:25:22.111679131 +0000 UTC m=+0.181334233 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
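
[annotation] The two health_status events above carry a config_data label describing how edpm_ansible launched each container. Note that the label is a Python dict literal (single quotes), not JSON, so json.loads() would reject it; a minimal sketch of reading it safely, assuming the label text has been captured (e.g. via podman inspect) — the variable names here are ours, not podman's:

    import ast

    # Abridged verbatim from the podman_exporter event above; single-quoted
    # keys make this a Python literal rather than JSON.
    label = ("{'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
             "'net': 'host', 'ports': ['9882:9882']}")

    # ast.literal_eval parses literals without executing arbitrary code.
    cfg = ast.literal_eval(label)
    assert cfg["net"] == "host"
    print(cfg["image"], cfg["ports"])
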
Oct 02 20:25:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:22 compute-0 ceph-mon[191910]: pgmap v2424: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.618 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.619 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.619 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.620 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:25:22 compute-0 nova_compute[355794]: 2025-10-02 20:25:22.620 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:25:22 compute-0 modest_booth[489396]: {
Oct 02 20:25:22 compute-0 modest_booth[489396]:     "0": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:         {
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "devices": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "/dev/loop3"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             ],
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_name": "ceph_lv0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_size": "21470642176",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "name": "ceph_lv0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "tags": {
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_name": "ceph",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.crush_device_class": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.encrypted": "0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_id": "0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.vdo": "0"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             },
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "vg_name": "ceph_vg0"
Oct 02 20:25:22 compute-0 modest_booth[489396]:         }
Oct 02 20:25:22 compute-0 modest_booth[489396]:     ],
Oct 02 20:25:22 compute-0 modest_booth[489396]:     "1": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:         {
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "devices": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "/dev/loop4"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             ],
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_name": "ceph_lv1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_size": "21470642176",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "name": "ceph_lv1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "tags": {
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_name": "ceph",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.crush_device_class": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.encrypted": "0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_id": "1",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.vdo": "0"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             },
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "vg_name": "ceph_vg1"
Oct 02 20:25:22 compute-0 modest_booth[489396]:         }
Oct 02 20:25:22 compute-0 modest_booth[489396]:     ],
Oct 02 20:25:22 compute-0 modest_booth[489396]:     "2": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:         {
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "devices": [
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "/dev/loop5"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             ],
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_name": "ceph_lv2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_size": "21470642176",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "name": "ceph_lv2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "tags": {
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.cluster_name": "ceph",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.crush_device_class": "",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.encrypted": "0",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osd_id": "2",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:                 "ceph.vdo": "0"
Oct 02 20:25:22 compute-0 modest_booth[489396]:             },
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "type": "block",
Oct 02 20:25:22 compute-0 modest_booth[489396]:             "vg_name": "ceph_vg2"
Oct 02 20:25:22 compute-0 modest_booth[489396]:         }
Oct 02 20:25:22 compute-0 modest_booth[489396]:     ]
Oct 02 20:25:22 compute-0 modest_booth[489396]: }
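
[annotation] The JSON block emitted by the modest_booth container above is `ceph-volume lvm list --format json` output: keys are OSD ids, each mapping to a list of LV records. A minimal sketch that summarizes it, assuming the payload text has been captured from the log; the helper name summarize_lvm_list is ours:

    import json

    def summarize_lvm_list(payload: str) -> None:
        # Keys are OSD ids as strings ("0", "1", "2"); sort them numerically.
        for osd_id, lvs in sorted(json.loads(payload).items(),
                                  key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                # lv_size is a decimal string of bytes in this payload.
                size_gib = int(lv["lv_size"]) / 2**30
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"fsid={tags['ceph.osd_fsid']} "
                      f"size={size_gib:.1f} GiB")

For the payload above this yields three 20.0 GiB OSDs, e.g. osd.0 on /dev/loop3 via /dev/ceph_vg0/ceph_lv0.
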
Oct 02 20:25:22 compute-0 systemd[1]: libpod-c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6.scope: Deactivated successfully.
Oct 02 20:25:22 compute-0 conmon[489396]: conmon c4b7049c135dab03da4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6.scope/container/memory.events
Oct 02 20:25:22 compute-0 podman[489367]: 2025-10-02 20:25:22.902467841 +0000 UTC m=+1.138447595 container died c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c6f0d37092e7a35893ada164e0678ca1d85184c5ca5e96123a30e9cd65c607f-merged.mount: Deactivated successfully.
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:25:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536889372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:25:23 compute-0 podman[489367]: 2025-10-02 20:25:23.155485356 +0000 UTC m=+1.391465090 container remove c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.187 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
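
[annotation] The round-trip just completed — nova_compute shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, the mon dispatches it for entity client.openstack, and the command returns 0 in 0.567s — is how nova sizes RBD-backed storage. A sketch of the same call, assuming the usual `ceph df` JSON shape (cluster-wide "stats" plus a "pools" list); this illustrates the query, it is not nova's own code:

    import json
    import subprocess

    def pool_usage(pool: str,
                   conf: str = "/etc/ceph/ceph.conf",
                   user: str = "openstack"):
        # Same command line as logged by oslo_concurrency.processutils above.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        df = json.loads(out)
        total = df["stats"]["total_bytes"]
        avail = df["stats"]["total_avail_bytes"]
        pools = {p["name"]: p["stats"] for p in df["pools"]}
        return total, avail, pools.get(pool, {})

The pgmap lines nearby ("60 GiB / 60 GiB avail") show what this query reports back for the cluster.
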
Oct 02 20:25:23 compute-0 systemd[1]: libpod-conmon-c4b7049c135dab03da4a8603028caaf93af45b599ebb377a537539cd37935bd6.scope: Deactivated successfully.
Oct 02 20:25:23 compute-0 sudo[489266]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.290 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.291 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.291 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:25:23 compute-0 sudo[489466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:23 compute-0 sudo[489466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:23 compute-0 sudo[489466]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:23 compute-0 sudo[489491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:25:23 compute-0 sudo[489491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:23 compute-0 sudo[489491]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:23 compute-0 sudo[489516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:23 compute-0 sudo[489516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:23 compute-0 sudo[489516]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:23 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3536889372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:25:23 compute-0 sudo[489541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:25:23 compute-0 sudo[489541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
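
[annotation] The sudo line above exposes cephadm's remote-exec pattern: the mgr's cephadm module (logged in from 192.168.122.100 as ceph-admin) runs a checksummed copy of the cephadm binary under /var/lib/ceph/<fsid>/, passing --image and --timeout, and that binary spawns the short-lived ceph containers seen dying above (modest_booth, upbeat_snyder, objective_hoover) to run ceph-volume. A sketch of the same call shape with paths and digests taken verbatim from the log — an illustration of the invocation, not the module's actual code:

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def raw_list():
        # Mirrors the sudo COMMAND= line above; runs ceph-volume inside a
        # one-shot container and returns its JSON on stdout.
        out = subprocess.run(
            ["sudo", "/bin/python3", CEPHADM,
             "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID, "--",
             "raw", "list", "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)
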
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.681 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.683 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.683 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.683 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.759 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.759 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.759 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:25:23 compute-0 nova_compute[355794]: 2025-10-02 20:25:23.801 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.167268638 +0000 UTC m=+0.116311053 container create cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.087855095 +0000 UTC m=+0.036897560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:25:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461361804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:25:24 compute-0 nova_compute[355794]: 2025-10-02 20:25:24.271 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:25:24 compute-0 nova_compute[355794]: 2025-10-02 20:25:24.284 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:25:24 compute-0 systemd[1]: Started libpod-conmon-cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957.scope.
Oct 02 20:25:24 compute-0 nova_compute[355794]: 2025-10-02 20:25:24.304 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
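
[annotation] The inventory record logged above is what placement schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. A worked check using exactly the numbers in the log:

    # Figures copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 — so the single instance
    # holding 1 VCPU / 512 MB / 2 GB (its allocations are logged above)
    # fits with ample headroom.
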
Oct 02 20:25:24 compute-0 nova_compute[355794]: 2025-10-02 20:25:24.308 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:25:24 compute-0 nova_compute[355794]: 2025-10-02 20:25:24.308 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:25:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:24 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.445486258 +0000 UTC m=+0.394528633 container init cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.46518582 +0000 UTC m=+0.414228195 container start cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:25:24 compute-0 upbeat_snyder[489643]: 167 167
Oct 02 20:25:24 compute-0 systemd[1]: libpod-cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957.scope: Deactivated successfully.
Oct 02 20:25:24 compute-0 conmon[489643]: conmon cf702d576e0258dc3a3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957.scope/container/memory.events
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.527956111 +0000 UTC m=+0.476998526 container attach cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.528810684 +0000 UTC m=+0.477853089 container died cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6113932fb71b6d0dc4cdaab2757a0be724492695621ef829b0fccee5d5e6de59-merged.mount: Deactivated successfully.
Oct 02 20:25:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/461361804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:25:24 compute-0 ceph-mon[191910]: pgmap v2425: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:24 compute-0 podman[489625]: 2025-10-02 20:25:24.684846558 +0000 UTC m=+0.633888973 container remove cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:25:24 compute-0 systemd[1]: libpod-conmon-cf702d576e0258dc3a3f4e87fe2618c37ccca890f6e84faf3557a302efeed957.scope: Deactivated successfully.
Oct 02 20:25:25 compute-0 podman[489669]: 2025-10-02 20:25:24.950020789 +0000 UTC m=+0.038357377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:25:25 compute-0 podman[489669]: 2025-10-02 20:25:25.111835264 +0000 UTC m=+0.200171802 container create a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 02 20:25:25 compute-0 systemd[1]: Started libpod-conmon-a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226.scope.
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:25 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaac42420eaf7f9f30ed9bfd8c2aa008febbda8e530986bf82a001a56158505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaac42420eaf7f9f30ed9bfd8c2aa008febbda8e530986bf82a001a56158505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaac42420eaf7f9f30ed9bfd8c2aa008febbda8e530986bf82a001a56158505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaac42420eaf7f9f30ed9bfd8c2aa008febbda8e530986bf82a001a56158505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.302 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.304 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.304 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.304 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:25:25 compute-0 podman[489669]: 2025-10-02 20:25:25.404093889 +0000 UTC m=+0.492430487 container init a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 20:25:25 compute-0 podman[489669]: 2025-10-02 20:25:25.424206172 +0000 UTC m=+0.512542730 container start a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:25:25 compute-0 podman[489669]: 2025-10-02 20:25:25.444339095 +0000 UTC m=+0.532675623 container attach a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 20:25:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.892 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.893 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.893 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:25:25 compute-0 nova_compute[355794]: 2025-10-02 20:25:25.894 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:25:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:26 compute-0 ceph-mon[191910]: pgmap v2426: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:26 compute-0 objective_hoover[489685]: {
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_id": 1,
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "type": "bluestore"
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     },
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_id": 2,
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "type": "bluestore"
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     },
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_id": 0,
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:25:26 compute-0 objective_hoover[489685]:         "type": "bluestore"
Oct 02 20:25:26 compute-0 objective_hoover[489685]:     }
Oct 02 20:25:26 compute-0 objective_hoover[489685]: }
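
[annotation] This second JSON block, from objective_hoover, is `ceph-volume raw list --format json`: the same three OSDs, but keyed by OSD fsid rather than OSD id. A sketch joining it with the earlier lvm list payload on ceph.osd_fsid to confirm the two views agree; join_views is our name for the helper, and both payloads are assumed captured from the log:

    import json

    def join_views(lvm_payload: str, raw_payload: str) -> None:
        raw = json.loads(raw_payload)          # keyed by OSD fsid
        lvm = json.loads(lvm_payload)          # keyed by OSD id (string)
        for osd_id, lvs in lvm.items():
            for lv in lvs:
                fsid = lv["tags"]["ceph.osd_fsid"]
                entry = raw.get(fsid, {})
                # raw list reports osd_id as an int, lvm list keys as strings.
                assert entry.get("osd_id") == int(osd_id)
                print(f"osd.{osd_id} fsid={fsid}: "
                      f"{lv['lv_path']} <-> {entry.get('device')}")

For osd.1, for instance, this pairs /dev/ceph_vg1/ceph_lv1 with its device-mapper alias /dev/mapper/ceph_vg1-ceph_lv1.
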
Oct 02 20:25:26 compute-0 systemd[1]: libpod-a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226.scope: Deactivated successfully.
Oct 02 20:25:26 compute-0 systemd[1]: libpod-a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226.scope: Consumed 1.295s CPU time.
Oct 02 20:25:26 compute-0 podman[489669]: 2025-10-02 20:25:26.73709152 +0000 UTC m=+1.825428088 container died a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceaac42420eaf7f9f30ed9bfd8c2aa008febbda8e530986bf82a001a56158505-merged.mount: Deactivated successfully.
Oct 02 20:25:27 compute-0 podman[489669]: 2025-10-02 20:25:27.413814425 +0000 UTC m=+2.502150963 container remove a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:27 compute-0 systemd[1]: libpod-conmon-a368c86189a00b2c4c56e8249ca5d11320fe17b84c1c38a778a0b211d80ef226.scope: Deactivated successfully.
Oct 02 20:25:27 compute-0 sudo[489541]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:25:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:25:27 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 86bf0c26-90d9-4ebd-92c8-e111813a7a4f does not exist
Oct 02 20:25:27 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 10eeb217-49b8-490a-8c8e-4ec7824bafba does not exist
Oct 02 20:25:27 compute-0 sudo[489729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:25:27 compute-0 sudo[489729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:27 compute-0 sudo[489729]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:27 compute-0 sudo[489754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:25:27 compute-0 sudo[489754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:25:27 compute-0 sudo[489754]: pam_unix(sudo:session): session closed for user root
Oct 02 20:25:28 compute-0 podman[489779]: 2025-10-02 20:25:28.026024644 +0000 UTC m=+0.118111580 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64)
Oct 02 20:25:28 compute-0 podman[489778]: 2025-10-02 20:25:28.060769227 +0000 UTC m=+0.153282114 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:28 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:25:28 compute-0 ceph-mon[191910]: pgmap v2427: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.588 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.609 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.609 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.610 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.610 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.611 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.611 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:28 compute-0 nova_compute[355794]: 2025-10-02 20:25:28.611 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:25:29 compute-0 nova_compute[355794]: 2025-10-02 20:25:29.577 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:29 compute-0 podman[157186]: time="2025-10-02T20:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:25:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:25:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Oct 02 20:25:30 compute-0 nova_compute[355794]: 2025-10-02 20:25:30.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:30 compute-0 ceph-mon[191910]: pgmap v2428: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:30 compute-0 nova_compute[355794]: 2025-10-02 20:25:30.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:25:30 compute-0 podman[489815]: 2025-10-02 20:25:30.70608706 +0000 UTC m=+0.127729950 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:25:30 compute-0 podman[489816]: 2025-10-02 20:25:30.735985677 +0000 UTC m=+0.145185704 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6)
Oct 02 20:25:30 compute-0 podman[489859]: 2025-10-02 20:25:30.834034585 +0000 UTC m=+0.087018312 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:25:30 compute-0 podman[489860]: 2025-10-02 20:25:30.854592819 +0000 UTC m=+0.110859812 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true)
Oct 02 20:25:30 compute-0 podman[489861]: 2025-10-02 20:25:30.882590067 +0000 UTC m=+0.132899105 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 20:25:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:31 compute-0 openstack_network_exporter[372736]: ERROR   20:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:25:31 compute-0 openstack_network_exporter[372736]: ERROR   20:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:25:31 compute-0 openstack_network_exporter[372736]: ERROR   20:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:25:31 compute-0 openstack_network_exporter[372736]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:25:31 compute-0 openstack_network_exporter[372736]: ERROR   20:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:25:32.346 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:25:32.348 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:25:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:25:32.349 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:25:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:32 compute-0 ceph-mon[191910]: pgmap v2429: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:33 compute-0 nova_compute[355794]: 2025-10-02 20:25:33.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:25:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:25:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:34 compute-0 ceph-mon[191910]: pgmap v2430: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:35 compute-0 nova_compute[355794]: 2025-10-02 20:25:35.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:36 compute-0 ceph-mon[191910]: pgmap v2431: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:38 compute-0 nova_compute[355794]: 2025-10-02 20:25:38.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:38 compute-0 ceph-mon[191910]: pgmap v2432: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:40 compute-0 nova_compute[355794]: 2025-10-02 20:25:40.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:41 compute-0 ceph-mon[191910]: pgmap v2433: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:42 compute-0 ceph-mon[191910]: pgmap v2434: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:42 compute-0 podman[489922]: 2025-10-02 20:25:42.720599205 +0000 UTC m=+0.140666217 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Oct 02 20:25:43 compute-0 nova_compute[355794]: 2025-10-02 20:25:43.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:45 compute-0 nova_compute[355794]: 2025-10-02 20:25:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:45 compute-0 ceph-mon[191910]: pgmap v2435: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:46 compute-0 ceph-mon[191910]: pgmap v2436: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:48 compute-0 nova_compute[355794]: 2025-10-02 20:25:48.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:49 compute-0 ceph-mon[191910]: pgmap v2437: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:50 compute-0 nova_compute[355794]: 2025-10-02 20:25:50.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:50 compute-0 ceph-mon[191910]: pgmap v2438: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:52 compute-0 podman[489942]: 2025-10-02 20:25:52.729538772 +0000 UTC m=+0.132273139 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:25:52 compute-0 podman[489943]: 2025-10-02 20:25:52.755998499 +0000 UTC m=+0.159239149 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 20:25:53 compute-0 nova_compute[355794]: 2025-10-02 20:25:53.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:53 compute-0 ceph-mon[191910]: pgmap v2439: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:54 compute-0 ceph-mon[191910]: pgmap v2440: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:55 compute-0 nova_compute[355794]: 2025-10-02 20:25:55.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:25:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:57 compute-0 ceph-mon[191910]: pgmap v2441: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:58 compute-0 nova_compute[355794]: 2025-10-02 20:25:58.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:25:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:58 compute-0 ceph-mon[191910]: pgmap v2442: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:25:58 compute-0 podman[489985]: 2025-10-02 20:25:58.718188877 +0000 UTC m=+0.134069885 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Oct 02 20:25:58 compute-0 podman[489984]: 2025-10-02 20:25:58.731723168 +0000 UTC m=+0.147490303 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct 02 20:25:59 compute-0 podman[157186]: time="2025-10-02T20:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:25:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:25:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9120 "" "Go-http-client/1.1"
Oct 02 20:26:00 compute-0 nova_compute[355794]: 2025-10-02 20:26:00.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:01 compute-0 openstack_network_exporter[372736]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:26:01 compute-0 openstack_network_exporter[372736]: ERROR   20:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:26:01 compute-0 openstack_network_exporter[372736]: ERROR   20:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:26:01 compute-0 openstack_network_exporter[372736]: ERROR   20:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:26:01 compute-0 openstack_network_exporter[372736]: ERROR   20:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:26:01 compute-0 ceph-mon[191910]: pgmap v2443: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:01 compute-0 podman[490024]: 2025-10-02 20:26:01.706578554 +0000 UTC m=+0.113353856 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 20:26:01 compute-0 podman[490027]: 2025-10-02 20:26:01.717547719 +0000 UTC m=+0.115086391 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:26:01 compute-0 podman[490023]: 2025-10-02 20:26:01.726340798 +0000 UTC m=+0.137468253 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:26:01 compute-0 podman[490025]: 2025-10-02 20:26:01.741238305 +0000 UTC m=+0.138950012 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter)
Oct 02 20:26:01 compute-0 podman[490026]: 2025-10-02 20:26:01.783716169 +0000 UTC m=+0.176963160 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 20:26:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:02 compute-0 ceph-mon[191910]: pgmap v2444: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:03 compute-0 nova_compute[355794]: 2025-10-02 20:26:03.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:26:03
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', '.mgr']
Oct 02 20:26:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:04 compute-0 ceph-mon[191910]: pgmap v2445: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:26:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:26:05 compute-0 nova_compute[355794]: 2025-10-02 20:26:05.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:06 compute-0 ceph-mon[191910]: pgmap v2446: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:08 compute-0 nova_compute[355794]: 2025-10-02 20:26:08.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:08 compute-0 ceph-mon[191910]: pgmap v2447: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:10 compute-0 nova_compute[355794]: 2025-10-02 20:26:10.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:10 compute-0 ceph-mon[191910]: pgmap v2448: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:12 compute-0 ceph-mon[191910]: pgmap v2449: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:13 compute-0 nova_compute[355794]: 2025-10-02 20:26:13.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:26:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
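The pg_autoscaler pass above follows a simple rule: each pool's PG target is its share of raw capacity, times its bias, times a cluster-wide PG budget, with the result then quantized to a power of two subject to per-pool minimums. Assuming this cluster has 3 OSDs and the default mon_target_pg_per_osd of 100 (neither is stated in these lines), the budget is 300 PGs, which reproduces the printed targets; a minimal sketch:

    # Sketch of the pg_autoscaler arithmetic visible in the lines above.
    # Assumes 3 OSDs and the default mon_target_pg_per_osd = 100; the
    # power-of-two quantization and per-pool minimums (e.g. the metadata
    # pool landing on 16) are left out.
    PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * OSD count (assumed)

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    # Pool 'vms': compare with the logged "pg target 0.16541850825356513"
    print(pg_target(0.0005513950275118838, 1.0))
    # Pool 'cephfs.cephfs.meta': compare with "pg target 0.0006104707950771635"
    print(pg_target(5.087256625643029e-07, 4.0))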
Oct 02 20:26:13 compute-0 podman[490120]: 2025-10-02 20:26:13.705273559 +0000 UTC m=+0.125685867 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 20:26:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:14 compute-0 ceph-mon[191910]: pgmap v2450: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:15 compute-0 nova_compute[355794]: 2025-10-02 20:26:15.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:16 compute-0 ceph-mon[191910]: pgmap v2451: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:18 compute-0 nova_compute[355794]: 2025-10-02 20:26:18.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:18 compute-0 ceph-mon[191910]: pgmap v2452: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:26:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791510272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:26:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:26:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791510272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:26:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3791510272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:26:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3791510272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:26:20 compute-0 nova_compute[355794]: 2025-10-02 20:26:20.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:21 compute-0 ceph-mon[191910]: pgmap v2453: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:22 compute-0 ceph-mon[191910]: pgmap v2454: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.626 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.627 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.627 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.628 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:26:23 compute-0 nova_compute[355794]: 2025-10-02 20:26:23.628 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:26:23 compute-0 podman[490140]: 2025-10-02 20:26:23.695851589 +0000 UTC m=+0.112222167 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:26:23 compute-0 podman[490141]: 2025-10-02 20:26:23.709326659 +0000 UTC m=+0.126362294 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:26:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:26:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517662213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.162 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
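The resource tracker shells out to `ceph df --format=json` (the command and its 0.533s runtime are logged above), and each call surfaces in the mon audit channel as client.openstack. A hedged sketch of reading the same figures by hand, assuming only that `ceph df`'s JSON carries a top-level "stats" object with total/avail byte counters (true of current Ceph releases) and that the client.openstack keyring is readable, as it is for nova on this host:

    # Re-runs the command the resource tracker logs above and picks out the
    # cluster-wide capacity figures; requires a reachable cluster.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("total: %.1f GiB, avail: %.1f GiB"
          % (stats["total_bytes"] / 2**30, stats["total_avail_bytes"] / 2**30))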
Oct 02 20:26:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3517662213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.287 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.288 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.289 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:26:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.903 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.905 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.905 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:26:24 compute-0 nova_compute[355794]: 2025-10-02 20:26:24.906 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.078 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.079 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.079 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.113 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.134 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.135 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
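The inventory pushed to placement above bounds schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this provider will admit allocations up to 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked out with the numbers copied from the log:

    # Capacity placement allows for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0:
    # (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      (8,    0,   4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB":   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, round((total - reserved) * ratio, 1))
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2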
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.171 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:26:25 compute-0 ceph-mon[191910]: pgmap v2455: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.214 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.280 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:26:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945020521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.825 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.837 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.874 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.877 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:26:25 compute-0 nova_compute[355794]: 2025-10-02 20:26:25.878 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:26:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3945020521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:26:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:26 compute-0 nova_compute[355794]: 2025-10-02 20:26:26.874 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:26 compute-0 nova_compute[355794]: 2025-10-02 20:26:26.875 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:26 compute-0 nova_compute[355794]: 2025-10-02 20:26:26.875 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:26:26 compute-0 nova_compute[355794]: 2025-10-02 20:26:26.876 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:26:27 compute-0 ceph-mon[191910]: pgmap v2456: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:27 compute-0 nova_compute[355794]: 2025-10-02 20:26:27.379 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:26:27 compute-0 nova_compute[355794]: 2025-10-02 20:26:27.379 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:26:27 compute-0 nova_compute[355794]: 2025-10-02 20:26:27.380 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:26:27 compute-0 nova_compute[355794]: 2025-10-02 20:26:27.380 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:26:28 compute-0 sudo[490227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:28 compute-0 sudo[490227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:28 compute-0 sudo[490227]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:28 compute-0 nova_compute[355794]: 2025-10-02 20:26:28.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:28 compute-0 sudo[490252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:26:28 compute-0 sudo[490252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:28 compute-0 sudo[490252]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:28 compute-0 sudo[490277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:28 compute-0 sudo[490277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:28 compute-0 sudo[490277]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:28 compute-0 sudo[490302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:26:28 compute-0 ceph-mon[191910]: pgmap v2457: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:28 compute-0 sudo[490302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:29 compute-0 sudo[490302]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev a30d8abe-8861-4325-b193-16627dce1c01 does not exist
Oct 02 20:26:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev d420046c-a167-4ba8-a725-144d01d10064 does not exist
Oct 02 20:26:29 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 044358ac-7dcc-49d8-a194-a10ada9116ac does not exist
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:26:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:26:29 compute-0 sudo[490357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:29 compute-0 sudo[490357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:29 compute-0 sudo[490357]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:29 compute-0 sudo[490389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:26:29 compute-0 sudo[490389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:26:29 compute-0 sudo[490389]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:26:29 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:26:29 compute-0 podman[490381]: 2025-10-02 20:26:29.534131015 +0000 UTC m=+0.109429934 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi)
Oct 02 20:26:29 compute-0 podman[490382]: 2025-10-02 20:26:29.552311288 +0000 UTC m=+0.135313878 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-type=git, release-0.7.12=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 20:26:29 compute-0 sudo[490448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:29 compute-0 sudo[490448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:29 compute-0 sudo[490448]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:29 compute-0 sudo[490474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:26:29 compute-0 podman[157186]: time="2025-10-02T20:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:26:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:26:29 compute-0 sudo[490474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9120 "" "Go-http-client/1.1"
Oct 02 20:26:30 compute-0 nova_compute[355794]: 2025-10-02 20:26:30.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.404479493 +0000 UTC m=+0.099206339 container create 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.358088777 +0000 UTC m=+0.052815663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:30 compute-0 systemd[1]: Started libpod-conmon-21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a.scope.
Oct 02 20:26:30 compute-0 ceph-mon[191910]: pgmap v2458: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:30 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.592318694 +0000 UTC m=+0.287045580 container init 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.612769866 +0000 UTC m=+0.307496712 container start 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.622247782 +0000 UTC m=+0.316974688 container attach 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:26:30 compute-0 wizardly_agnesi[490555]: 167 167
Oct 02 20:26:30 compute-0 systemd[1]: libpod-21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a.scope: Deactivated successfully.
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.625866406 +0000 UTC m=+0.320593242 container died 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 02 20:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-383bfac3d5414eece7a17d927062d5b527aa08368dde27755fb6c917a9c65fac-merged.mount: Deactivated successfully.
Oct 02 20:26:30 compute-0 podman[490538]: 2025-10-02 20:26:30.719439058 +0000 UTC m=+0.414165894 container remove 21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:26:30 compute-0 systemd[1]: libpod-conmon-21832a6cccb625afd15faed59580052d82383a483ce8f9e045f8fcb8a4e3b83a.scope: Deactivated successfully.
Oct 02 20:26:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:31 compute-0 podman[490578]: 2025-10-02 20:26:31.037942345 +0000 UTC m=+0.103076540 container create b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 20:26:31 compute-0 podman[490578]: 2025-10-02 20:26:31.001288762 +0000 UTC m=+0.066423037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:31 compute-0 systemd[1]: Started libpod-conmon-b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1.scope.
Oct 02 20:26:31 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:31 compute-0 podman[490578]: 2025-10-02 20:26:31.225060107 +0000 UTC m=+0.290194362 container init b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:26:31 compute-0 podman[490578]: 2025-10-02 20:26:31.250291233 +0000 UTC m=+0.315425458 container start b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:26:31 compute-0 podman[490578]: 2025-10-02 20:26:31.25748713 +0000 UTC m=+0.322621365 container attach b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.391 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.415 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.415 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
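The network_info payload logged at 20:26:31.391 is plain JSON once the bracketed portion is extracted; a small self-contained sketch (the payload below is abridged to the fields it touches) that recovers the fixed and floating addresses shown there:

    import json

    # Abridged copy of the network_info blob from the log line above.
    network_info = json.loads("""
    [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.37",
        "floating_ips": [{"address": "192.168.122.205"}]}]}]}}]
    """)

    ip = network_info[0]["network"]["subnets"][0]["ips"][0]
    print(ip["address"], "->", ip["floating_ips"][0]["address"])
    # prints: 192.168.0.37 -> 192.168.122.205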
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.415 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.416 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.416 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.416 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:31 compute-0 openstack_network_exporter[372736]: ERROR   20:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.416 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:31 compute-0 nova_compute[355794]: 2025-10-02 20:26:31.417 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:26:31 compute-0 openstack_network_exporter[372736]: ERROR   20:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:26:31 compute-0 openstack_network_exporter[372736]: ERROR   20:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:26:31 compute-0 openstack_network_exporter[372736]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:26:31 compute-0 openstack_network_exporter[372736]: ERROR   20:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
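The exporter errors above are its appctl probes failing: OVS/OVN daemons expose control sockets named <daemon>.<pid>.ctl in their run directories, and ovn-northd never creates one on a compute node (it runs on the control plane), while the dpif-netdev calls fail because this host uses the kernel ("system") datapath rather than the userspace one those commands query. A minimal sketch of the socket-discovery step, assuming the default run directories (the container mounts /var/lib/openvswitch/ovn as /run/ovn):

#!/usr/bin/env python3
# Reproduce the exporter's control-socket discovery to show why
# "no control socket files found" is logged for ovn-northd here.
# The directories are the usual defaults, an assumption on this host.
import glob
import os

SOCKET_DIRS = {
    "ovn-northd": "/run/ovn",            # absent on compute nodes
    "ovsdb-server": "/run/openvswitch",
}

def find_ctl(daemon, run_dir):
    # Daemons create <name>.<pid>.ctl control sockets in their run dir.
    return glob.glob(os.path.join(run_dir, f"{daemon}.*.ctl"))

for daemon, run_dir in SOCKET_DIRS.items():
    socks = find_ctl(daemon, run_dir)
    if socks:
        print(f"{daemon}: control socket(s) {socks}")
    else:
        print(f"{daemon}: no control socket files found")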
Oct 02 20:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:26:32.347 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:26:32.349 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:26:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:26:32.349 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:26:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:32 compute-0 ceph-mon[191910]: pgmap v2459: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:32 compute-0 happy_mclaren[490595]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:26:32 compute-0 happy_mclaren[490595]: --> relative data size: 1.0
Oct 02 20:26:32 compute-0 happy_mclaren[490595]: --> All data devices are unavailable
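The happy_mclaren lines are cephadm's ceph-volume pass over the default_drive_group spec: three LVM data devices match the filters, but every one is already consumed by an existing OSD, so the run deploys nothing. An illustrative Python sketch of that decision; the inventory dicts are invented for the example, not read from this host:

#!/usr/bin/env python3
# Count candidate data devices the way the ceph-volume output above
# reports them, and bail out when none is still available.
devices = [
    {"path": "/dev/ceph_vg0/ceph_lv0", "lvm": True, "available": False},
    {"path": "/dev/ceph_vg1/ceph_lv1", "lvm": True, "available": False},
    {"path": "/dev/ceph_vg2/ceph_lv2", "lvm": True, "available": False},
]

physical = sum(1 for d in devices if not d["lvm"])
lvm = sum(1 for d in devices if d["lvm"])
print(f"--> passed data devices: {physical} physical, {lvm} LVM")

if not any(d["available"] for d in devices):
    # LVs already tagged for Ceph (see the lvm list output further down)
    # are reported unavailable, so no new OSDs are created.
    print("--> All data devices are unavailable")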
Oct 02 20:26:32 compute-0 nova_compute[355794]: 2025-10-02 20:26:32.581 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:32 compute-0 systemd[1]: libpod-b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1.scope: Deactivated successfully.
Oct 02 20:26:32 compute-0 podman[490578]: 2025-10-02 20:26:32.585866919 +0000 UTC m=+1.651001114 container died b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:32 compute-0 systemd[1]: libpod-b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1.scope: Consumed 1.264s CPU time.
Oct 02 20:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-78aeefa15733cb221b03a27ae4a505856a4237f3b45ae5162718f44a4f629912-merged.mount: Deactivated successfully.
Oct 02 20:26:32 compute-0 podman[490578]: 2025-10-02 20:26:32.661730171 +0000 UTC m=+1.726864356 container remove b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:32 compute-0 systemd[1]: libpod-conmon-b70e80a15f40e12bb7bb00ca43207230ddbd403a01baacee8484c4c8dbc7efb1.scope: Deactivated successfully.
Oct 02 20:26:32 compute-0 sudo[490474]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:32 compute-0 podman[490625]: 2025-10-02 20:26:32.733517356 +0000 UTC m=+0.143659934 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 20:26:32 compute-0 podman[490626]: 2025-10-02 20:26:32.746211706 +0000 UTC m=+0.156395365 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm)
Oct 02 20:26:32 compute-0 podman[490624]: 2025-10-02 20:26:32.748837554 +0000 UTC m=+0.158968292 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 20:26:32 compute-0 podman[490628]: 2025-10-02 20:26:32.751716329 +0000 UTC m=+0.141995931 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:26:32 compute-0 sudo[490718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:32 compute-0 sudo[490718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:32 compute-0 sudo[490718]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:32 compute-0 podman[490627]: 2025-10-02 20:26:32.78486067 +0000 UTC m=+0.165572633 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
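Each health_status event above is podman running the container's configured check (the healthcheck entry in config_data, a script mounted at /openstack/healthcheck) and recording the verdict plus the failing streak. The same checks can be driven by hand; a sketch using podman healthcheck run, assuming root on the host and the container names taken from the events:

#!/usr/bin/env python3
# Trigger the same healthchecks podman logs above, one container at a time.
import subprocess

CONTAINERS = ["iscsid", "openstack_network_exporter",
              "ovn_metadata_agent", "node_exporter", "ovn_controller"]

for name in CONTAINERS:
    # `podman healthcheck run` executes the container's configured test
    # and exits 0 when the check passes.
    result = subprocess.run(["podman", "healthcheck", "run", name])
    status = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(f"{name}: {status}")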
Oct 02 20:26:32 compute-0 sudo[490761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:26:32 compute-0 sudo[490761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:32 compute-0 sudo[490761]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:32 compute-0 sudo[490786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:32 compute-0 sudo[490786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:32 compute-0 sudo[490786]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:33 compute-0 sudo[490811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:26:33 compute-0 sudo[490811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:33 compute-0 nova_compute[355794]: 2025-10-02 20:26:33.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:33 compute-0 nova_compute[355794]: 2025-10-02 20:26:33.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.635331281 +0000 UTC m=+0.094759843 container create e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.603483844 +0000 UTC m=+0.062912456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:26:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:26:33 compute-0 systemd[1]: Started libpod-conmon-e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53.scope.
Oct 02 20:26:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.791174051 +0000 UTC m=+0.250602643 container init e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.808637055 +0000 UTC m=+0.268065607 container start e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.815174825 +0000 UTC m=+0.274603437 container attach e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:26:33 compute-0 heuristic_bartik[490891]: 167 167
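The short-lived ceph containers that print two numbers (167 167 here) are cephadm inferring the uid/gid of the ceph user inside the image before it runs ceph-volume as that identity. A hedged sketch of the same probe; the stat target path is an assumption for illustration, not taken from the log:

#!/usr/bin/env python3
# Run the image with a stat entrypoint and read back "uid gid", as the
# heuristic_bartik/fervent_liskov output above suggests cephadm does.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],   # path assumed for the example
    capture_output=True, text=True, check=True,
).stdout
uid, gid = (int(x) for x in out.split())
print(f"ceph uid={uid} gid={gid}")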
Oct 02 20:26:33 compute-0 systemd[1]: libpod-e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53.scope: Deactivated successfully.
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.820887563 +0000 UTC m=+0.280316125 container died e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-369b20aba2902520ce71b126492d3c6ebb80107ca2e0104f744f5f9f186ebd29-merged.mount: Deactivated successfully.
Oct 02 20:26:33 compute-0 podman[490876]: 2025-10-02 20:26:33.898974283 +0000 UTC m=+0.358402815 container remove e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:33 compute-0 systemd[1]: libpod-conmon-e479c1f502694983e428966f5c423ade245f441cff602ac2e4832000ad028a53.scope: Deactivated successfully.
Oct 02 20:26:34 compute-0 podman[490916]: 2025-10-02 20:26:34.224194204 +0000 UTC m=+0.097971007 container create d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:26:34 compute-0 podman[490916]: 2025-10-02 20:26:34.187649504 +0000 UTC m=+0.061426337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:34 compute-0 systemd[1]: Started libpod-conmon-d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93.scope.
Oct 02 20:26:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40123d7eceacbdca31016f9bc3156ec54c47b7d94892000db6aa6a0a096b587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40123d7eceacbdca31016f9bc3156ec54c47b7d94892000db6aa6a0a096b587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40123d7eceacbdca31016f9bc3156ec54c47b7d94892000db6aa6a0a096b587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40123d7eceacbdca31016f9bc3156ec54c47b7d94892000db6aa6a0a096b587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:34 compute-0 podman[490916]: 2025-10-02 20:26:34.376008609 +0000 UTC m=+0.249785472 container init d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:26:34 compute-0 podman[490916]: 2025-10-02 20:26:34.400413764 +0000 UTC m=+0.274190537 container start d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:34 compute-0 podman[490916]: 2025-10-02 20:26:34.406256015 +0000 UTC m=+0.280032878 container attach d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:26:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:34 compute-0 ceph-mon[191910]: pgmap v2460: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:35 compute-0 zen_raman[490932]: {
Oct 02 20:26:35 compute-0 zen_raman[490932]:     "0": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:         {
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "devices": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "/dev/loop3"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             ],
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_name": "ceph_lv0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_size": "21470642176",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "name": "ceph_lv0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "tags": {
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_name": "ceph",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.crush_device_class": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.encrypted": "0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_id": "0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.vdo": "0"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             },
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "vg_name": "ceph_vg0"
Oct 02 20:26:35 compute-0 zen_raman[490932]:         }
Oct 02 20:26:35 compute-0 zen_raman[490932]:     ],
Oct 02 20:26:35 compute-0 zen_raman[490932]:     "1": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:         {
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "devices": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "/dev/loop4"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             ],
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_name": "ceph_lv1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_size": "21470642176",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "name": "ceph_lv1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "tags": {
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_name": "ceph",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.crush_device_class": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.encrypted": "0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_id": "1",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.vdo": "0"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             },
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "vg_name": "ceph_vg1"
Oct 02 20:26:35 compute-0 zen_raman[490932]:         }
Oct 02 20:26:35 compute-0 zen_raman[490932]:     ],
Oct 02 20:26:35 compute-0 zen_raman[490932]:     "2": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:         {
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "devices": [
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "/dev/loop5"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             ],
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_name": "ceph_lv2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_size": "21470642176",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "name": "ceph_lv2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "tags": {
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.cluster_name": "ceph",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.crush_device_class": "",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.encrypted": "0",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osd_id": "2",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:                 "ceph.vdo": "0"
Oct 02 20:26:35 compute-0 zen_raman[490932]:             },
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "type": "block",
Oct 02 20:26:35 compute-0 zen_raman[490932]:             "vg_name": "ceph_vg2"
Oct 02 20:26:35 compute-0 zen_raman[490932]:         }
Oct 02 20:26:35 compute-0 zen_raman[490932]:     ]
Oct 02 20:26:35 compute-0 zen_raman[490932]: }
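The zen_raman payload above is the JSON report from ceph-volume lvm list: a map of OSD id to the logical volumes backing it, with the ceph.* lv_tags recording cluster fsid, osd_fsid, and role. A short sketch reducing it to an osd-to-device summary; the filename is a hypothetical capture of the stdout shown above:

#!/usr/bin/env python3
# Summarize `ceph-volume lvm list --format json` as osd -> device lines.
import json

with open("lvm_list.json") as f:     # hypothetical capture of the output
    report = json.load(f)

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"backing={','.join(lv['devices'])})")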
Oct 02 20:26:35 compute-0 systemd[1]: libpod-d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93.scope: Deactivated successfully.
Oct 02 20:26:35 compute-0 podman[490916]: 2025-10-02 20:26:35.194364716 +0000 UTC m=+1.068141519 container died d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 20:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40123d7eceacbdca31016f9bc3156ec54c47b7d94892000db6aa6a0a096b587-merged.mount: Deactivated successfully.
Oct 02 20:26:35 compute-0 podman[490916]: 2025-10-02 20:26:35.291188352 +0000 UTC m=+1.164965115 container remove d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 20:26:35 compute-0 systemd[1]: libpod-conmon-d0f174bd8a19a6467ef51388c64dcbc5dc2ff259c46a12485d83932383b40e93.scope: Deactivated successfully.
Oct 02 20:26:35 compute-0 nova_compute[355794]: 2025-10-02 20:26:35.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:35 compute-0 sudo[490811]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:35 compute-0 sudo[490955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:35 compute-0 sudo[490955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:35 compute-0 sudo[490955]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:35 compute-0 sudo[490980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:26:35 compute-0 sudo[490980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:35 compute-0 sudo[490980]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:35 compute-0 sudo[491005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:35 compute-0 sudo[491005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:35 compute-0 sudo[491005]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:35 compute-0 sudo[491030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:26:35 compute-0 sudo[491030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.404767279 +0000 UTC m=+0.100675927 container create 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:26:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.369292797 +0000 UTC m=+0.065201495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:36 compute-0 systemd[1]: Started libpod-conmon-85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d.scope.
Oct 02 20:26:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:36 compute-0 ceph-mon[191910]: pgmap v2461: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.538154826 +0000 UTC m=+0.234063444 container init 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.557076777 +0000 UTC m=+0.252985415 container start 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.562864368 +0000 UTC m=+0.258773086 container attach 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:26:36 compute-0 fervent_liskov[491109]: 167 167
Oct 02 20:26:36 compute-0 systemd[1]: libpod-85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d.scope: Deactivated successfully.
Oct 02 20:26:36 compute-0 conmon[491109]: conmon 85763397bbcd4a01f2cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d.scope/container/memory.events
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.569774967 +0000 UTC m=+0.265683605 container died 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 20:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98d10e092075b9368fab1c012ccbd18ea2b8a0cf9f6d1b748ded421b4197269-merged.mount: Deactivated successfully.
Oct 02 20:26:36 compute-0 podman[491093]: 2025-10-02 20:26:36.658654987 +0000 UTC m=+0.354563615 container remove 85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:26:36 compute-0 systemd[1]: libpod-conmon-85763397bbcd4a01f2cf33c43aa1ba3d6fe990d56bcf92f0f6decb35e422ee3d.scope: Deactivated successfully.
Oct 02 20:26:36 compute-0 podman[491133]: 2025-10-02 20:26:36.866748655 +0000 UTC m=+0.059797375 container create c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 20:26:36 compute-0 podman[491133]: 2025-10-02 20:26:36.844797794 +0000 UTC m=+0.037846474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:26:36 compute-0 systemd[1]: Started libpod-conmon-c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37.scope.
Oct 02 20:26:36 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b64ee7ca8e07402a26a4e937e0cc8beaee6cd894f138babe9bce46a421ecb5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b64ee7ca8e07402a26a4e937e0cc8beaee6cd894f138babe9bce46a421ecb5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b64ee7ca8e07402a26a4e937e0cc8beaee6cd894f138babe9bce46a421ecb5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b64ee7ca8e07402a26a4e937e0cc8beaee6cd894f138babe9bce46a421ecb5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:26:37 compute-0 podman[491133]: 2025-10-02 20:26:37.045989172 +0000 UTC m=+0.239037932 container init c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 02 20:26:37 compute-0 podman[491133]: 2025-10-02 20:26:37.074584026 +0000 UTC m=+0.267632706 container start c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:26:37 compute-0 podman[491133]: 2025-10-02 20:26:37.078512678 +0000 UTC m=+0.271561398 container attach c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:26:38 compute-0 frosty_fermat[491149]: {
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_id": 1,
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "type": "bluestore"
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     },
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_id": 2,
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "type": "bluestore"
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     },
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_id": 0,
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:         "type": "bluestore"
Oct 02 20:26:38 compute-0 frosty_fermat[491149]:     }
Oct 02 20:26:38 compute-0 frosty_fermat[491149]: }
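The JSON block above is a ceph-volume style inventory emitted by the one-shot frosty_fermat container: each top-level key is an OSD UUID, mapped to the cluster fsid, the LVM device backing the OSD, its numeric id, and the objectstore type (bluestore throughout). A minimal parsing sketch, assuming the JSON has been captured to a file (the file name is hypothetical):

    import json

    # Load a captured ceph-volume inventory dump (path is illustrative).
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # One line per OSD, ordered by OSD id.
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} "
              f"(type={osd['type']}, fsid={osd['ceph_fsid']})")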
Oct 02 20:26:38 compute-0 nova_compute[355794]: 2025-10-02 20:26:38.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:38 compute-0 systemd[1]: libpod-c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37.scope: Deactivated successfully.
Oct 02 20:26:38 compute-0 podman[491133]: 2025-10-02 20:26:38.20069 +0000 UTC m=+1.393738680 container died c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:26:38 compute-0 systemd[1]: libpod-c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37.scope: Consumed 1.127s CPU time.
Oct 02 20:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b64ee7ca8e07402a26a4e937e0cc8beaee6cd894f138babe9bce46a421ecb5e-merged.mount: Deactivated successfully.
Oct 02 20:26:38 compute-0 podman[491133]: 2025-10-02 20:26:38.290284168 +0000 UTC m=+1.483332848 container remove c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:26:38 compute-0 systemd[1]: libpod-conmon-c5a3b996877bd433eb6fa47dd442f03647c59215c7f4f39710a8abd64619ac37.scope: Deactivated successfully.
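The init, start, attach, died, remove sequence for container c5a3b996... is the normal lifecycle of a short-lived podman run; here it is cephadm launching a throwaway ceph container to gather the device inventory printed above. Past lifecycle events can be replayed from podman's event log; a sketch, assuming the events backend retains history (journald does by default):

    import subprocess

    # Replay recent lifecycle events for one container (an ID prefix works).
    cid = "c5a3b996877b"
    subprocess.run(
        ["podman", "events",
         "--filter", f"container={cid}",
         "--since", "15m",
         "--stream=false"],
        check=True,
    )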
Oct 02 20:26:38 compute-0 sudo[491030]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:26:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:38 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:26:38 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 215064a1-2835-4603-a27e-43d60e79ef6c does not exist
Oct 02 20:26:38 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c8932a33-5115-4f50-9e4c-71488c08f46d does not exist
Oct 02 20:26:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:38 compute-0 sudo[491193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:26:38 compute-0 sudo[491193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:38 compute-0 sudo[491193]: pam_unix(sudo:session): session closed for user root
Oct 02 20:26:38 compute-0 sudo[491218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:26:38 compute-0 sudo[491218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:26:38 compute-0 sudo[491218]: pam_unix(sudo:session): session closed for user root
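These sudo/pam_unix triplets (command record, session opened, session closed) are cephadm's SSH user running its host checks: /bin/true as a pure sudo probe, then a listing of /etc/sysctl.d. When auditing this kind of activity, the COMMAND= field carries the substance; a small extraction sketch (the regex is illustrative, not a complete pam_unix grammar):

    import re

    line = ("Oct 02 20:26:38 compute-0 sudo[491218]: ceph-admin : "
            "PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d")

    # Pull out who ran what, as which user, from a sudo audit record.
    m = re.search(r"sudo\[\d+\]: (\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)$",
                  line)
    if m:
        caller, pwd, target, cmd = m.groups()
        print(f"{caller} -> {target}: {cmd} (cwd {pwd})")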
Oct 02 20:26:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:26:39 compute-0 ceph-mon[191910]: pgmap v2462: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
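The recurring pgmap lines (logged by ceph-mgr and echoed through the mon's cluster log) are the cluster heartbeat: all 321 placement groups active+clean, 118 MiB of logical data, 313 MiB of raw space used out of 60 GiB. The same summary is available on demand; a sketch using the ceph CLI's JSON output (assumes a reachable cluster and keyring):

    import json
    import subprocess

    # Fetch the PG summary that the pgmap log lines are derived from.
    out = subprocess.run(
        ["ceph", "pg", "stat", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))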
Oct 02 20:26:40 compute-0 nova_compute[355794]: 2025-10-02 20:26:40.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:40 compute-0 ceph-mon[191910]: pgmap v2463: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
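_set_new_cache_sizes is the monitor's periodic cache autotuner: every few seconds it repartitions its memory budget (about 0.95 GiB here) between incremental osdmaps (inc_alloc), full osdmaps (full_alloc), and the RocksDB cache (kv_alloc), so the repetition of this line is expected and harmless. The budget it works from can be inspected directly; a sketch (mon_memory_target is the upstream option name, and tying it to this exact cache_size value is an assumption):

    import subprocess

    # Read the monitor memory target that drives the cache_size in the log.
    subprocess.run(
        ["ceph", "config", "get", "mon", "mon_memory_target"],
        check=True,
    )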
Oct 02 20:26:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:42 compute-0 ceph-mon[191910]: pgmap v2464: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:43 compute-0 nova_compute[355794]: 2025-10-02 20:26:43.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:44 compute-0 ceph-mon[191910]: pgmap v2465: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:44 compute-0 podman[491243]: 2025-10-02 20:26:44.771581274 +0000 UTC m=+0.186532678 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
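health_status events like this one (and the similar ones below for ceilometer_agent_compute, podman_exporter, kepler, and the rest) are podman's healthcheck timers firing; health_status=healthy with health_failing_streak=0 means the configured /openstack/healthcheck probe passed. A check can also be run on demand; a sketch:

    import subprocess

    name = "multipathd"  # any container shown in the health_status events

    # Trigger the container's configured healthcheck once; the exit code is
    # non-zero when the probe fails, so don't raise on it here.
    subprocess.run(["podman", "healthcheck", "run", name], check=False)

    # Read back the recorded health state.
    subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        check=True,
    )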
Oct 02 20:26:45 compute-0 nova_compute[355794]: 2025-10-02 20:26:45.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:46 compute-0 ceph-mon[191910]: pgmap v2466: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:48 compute-0 nova_compute[355794]: 2025-10-02 20:26:48.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:48 compute-0 ceph-mon[191910]: pgmap v2467: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:50 compute-0 nova_compute[355794]: 2025-10-02 20:26:50.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:50 compute-0 ceph-mon[191910]: pgmap v2468: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:52 compute-0 ceph-mon[191910]: pgmap v2469: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:53 compute-0 nova_compute[355794]: 2025-10-02 20:26:53.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:54 compute-0 rsyslogd[187702]: imjournal: 17368 messages lost due to rate-limiting (20000 allowed within 600 seconds)
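This rsyslog warning means 17368 journal messages were dropped before reaching syslog because the imjournal budget (20000 messages per 600 seconds, the values shown in the message) was exhausted, so any syslog-derived copy of this log has a gap around this point; the journal itself is unaffected. If the drops matter, the budget can be raised where imjournal is loaded; a sketch with illustrative values (legacy directive syntax):

    # In /etc/rsyslog.conf, next to the imjournal module load:
    $imjournalRatelimitInterval 600
    $imjournalRatelimitBurst 50000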
Oct 02 20:26:54 compute-0 ceph-mon[191910]: pgmap v2470: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:54 compute-0 podman[491265]: 2025-10-02 20:26:54.697646327 +0000 UTC m=+0.103486920 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 20:26:54 compute-0 podman[491264]: 2025-10-02 20:26:54.702328949 +0000 UTC m=+0.117693519 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:26:55 compute-0 nova_compute[355794]: 2025-10-02 20:26:55.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:26:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:56 compute-0 ceph-mon[191910]: pgmap v2471: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:58 compute-0 nova_compute[355794]: 2025-10-02 20:26:58.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:26:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:58 compute-0 ceph-mon[191910]: pgmap v2472: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:26:59 compute-0 podman[157186]: time="2025-10-02T20:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:26:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:26:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9112 "" "Go-http-client/1.1"
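These access-log lines come from the podman system service answering libpod REST calls over its unix socket; the Go-http-client user agent is the prometheus-podman-exporter container polling container lists and stats. The same endpoint can be queried by hand; a sketch using the third-party requests-unixsocket package (assumed installed; socket path as used on this host):

    import requests_unixsocket  # pip install requests-unixsocket

    session = requests_unixsocket.Session()
    # Percent-encode the socket path, then append the libpod path from the log.
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
        "/v4.9.3/libpod/containers/json?all=true"
    )
    resp.raise_for_status()
    for ctr in resp.json():
        print(ctr["Names"], ctr["State"])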
Oct 02 20:27:00 compute-0 nova_compute[355794]: 2025-10-02 20:27:00.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:00 compute-0 ceph-mon[191910]: pgmap v2473: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:00 compute-0 podman[491303]: 2025-10-02 20:27:00.736022113 +0000 UTC m=+0.156534819 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:27:00 compute-0 podman[491304]: 2025-10-02 20:27:00.76053723 +0000 UTC m=+0.175072750 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, name=ubi9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 20:27:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:01 compute-0 openstack_network_exporter[372736]: ERROR   20:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:27:01 compute-0 openstack_network_exporter[372736]: ERROR   20:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:27:01 compute-0 openstack_network_exporter[372736]: ERROR   20:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:27:01 compute-0 openstack_network_exporter[372736]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:27:01 compute-0 openstack_network_exporter[372736]: ERROR   20:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
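These exporter errors are expected on a compute node: openstack_network_exporter probes ovn-northd and ovsdb-server through their appctl control sockets, but an EDPM compute runs only ovn-controller and ovs-vswitchd, so no ovn-northd socket exists, and the PMD queries fail because there is no userspace (DPDK) datapath to report on. Which control sockets actually exist is easy to check; a sketch over the conventional OVS/OVN run directories:

    import glob

    # Control sockets the exporter probes live under the OVS/OVN run dirs.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        for sock in sorted(glob.glob(pattern)):
            print(sock)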
Oct 02 20:27:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:02 compute-0 ceph-mon[191910]: pgmap v2474: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:03 compute-0 nova_compute[355794]: 2025-10-02 20:27:03.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:03 compute-0 podman[491341]: 2025-10-02 20:27:03.71396346 +0000 UTC m=+0.128661364 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 20:27:03 compute-0 podman[491354]: 2025-10-02 20:27:03.720073649 +0000 UTC m=+0.106511239 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:27:03 compute-0 podman[491343]: 2025-10-02 20:27:03.728410806 +0000 UTC m=+0.134082326 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6)
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:27:03
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', 'images', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root']
Oct 02 20:27:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
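Each balancer pass logs this five-line block: a plan named auto_<timestamp>, mode upmap with a 5% misplaced ceiling, the pool list, then "prepared 0/10 changes", meaning no upmap adjustments were needed this pass (10 being the per-pass cap). Balancer state can be queried the same way; a sketch:

    import json
    import subprocess

    # Ask the mgr balancer module for its current mode and activity.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))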
Oct 02 20:27:03 compute-0 podman[491342]: 2025-10-02 20:27:03.759183165 +0000 UTC m=+0.165714127 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 20:27:03 compute-0 podman[491344]: 2025-10-02 20:27:03.781359032 +0000 UTC m=+0.166773845 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.313 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.314 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f3433493530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.326 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
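The discovery step shows what every compute pollster works from: one dict per local instance carrying the Nova ID, flavor, image, and the libvirt domain name (OS-EXT-SRV-ATTR:instance_name), which becomes the resource metadata attached to each sample. A sketch of lifting the commonly used fields (dict abbreviated from the log line above):

    # Abbreviated copy of the discovery dict from the log line above.
    instance = {
        "id": "d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
        "status": "active",
    }

    # The libvirt domain name is what the disk pollsters query stats for.
    domain = instance["OS-EXT-SRV-ATTR:instance_name"]
    print(f"{instance['name']} ({instance['id']}) -> libvirt domain {domain}")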
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.327 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:27:04.327719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.395 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.397 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.397 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:27:04.399208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.435 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.436 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.438 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.439 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.440 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:27:04.438339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.441 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.442 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.442 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:27:04.441852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.443 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.444 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:27:04.445192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.485 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.488 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:27:04.488041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.489 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.489 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:27:04.491909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.498 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:27:04.500692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.502 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:27:04.503010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:27:04.505135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:27:04.507507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.508 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.509 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:27:04.509664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:27:04.512462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.513 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.514 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:27:04.514848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.517 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.518 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:27:04.518276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.520 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.520 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.521 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.522 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:27:04.520650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.524 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:27:04.525648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.525 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.526 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.529 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.529 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceph-mon[191910]: pgmap v2475: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:27:04.529125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:27:04.532790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.536 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:27:04.536888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.536 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.537 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.538 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:27:04.542106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.542 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.544 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:27:04.544717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:27:04.546022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 74190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:27:04.547288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.548 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:27:04.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:27:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:27:05 compute-0 nova_compute[355794]: 2025-10-02 20:27:05.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:06 compute-0 ceph-mon[191910]: pgmap v2476: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:08 compute-0 nova_compute[355794]: 2025-10-02 20:27:08.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.498640) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828498684, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1413, "num_deletes": 251, "total_data_size": 2234033, "memory_usage": 2269744, "flush_reason": "Manual Compaction"}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828517821, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2190443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49611, "largest_seqno": 51023, "table_properties": {"data_size": 2183811, "index_size": 3831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13692, "raw_average_key_size": 19, "raw_value_size": 2170546, "raw_average_value_size": 3145, "num_data_blocks": 172, "num_entries": 690, "num_filter_entries": 690, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436681, "oldest_key_time": 1759436681, "file_creation_time": 1759436828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 19275 microseconds, and 11832 cpu microseconds.
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.517911) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2190443 bytes OK
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.517940) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.521600) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.521621) EVENT_LOG_v1 {"time_micros": 1759436828521614, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.521644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2227803, prev total WAL file size 2227803, number of live WAL files 2.
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.523865) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2139KB)], [119(7060KB)]
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828523955, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9420379, "oldest_snapshot_seqno": -1}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: pgmap v2477: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6520 keys, 7659533 bytes, temperature: kUnknown
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828594760, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7659533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7620042, "index_size": 22101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170400, "raw_average_key_size": 26, "raw_value_size": 7506123, "raw_average_value_size": 1151, "num_data_blocks": 871, "num_entries": 6520, "num_filter_entries": 6520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.595184) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7659533 bytes
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.604311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.7 rd, 107.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 7034, records dropped: 514 output_compression: NoCompression
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.604342) EVENT_LOG_v1 {"time_micros": 1759436828604327, "job": 72, "event": "compaction_finished", "compaction_time_micros": 70973, "compaction_time_cpu_micros": 39465, "output_level": 6, "num_output_files": 1, "total_output_size": 7659533, "num_input_records": 7034, "num_output_records": 6520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828605636, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436828610042, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.523557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.610773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.610783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.610786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.610789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:08 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:27:08.610792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:27:10 compute-0 nova_compute[355794]: 2025-10-02 20:27:10.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:10 compute-0 ceph-mon[191910]: pgmap v2478: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:12 compute-0 ceph-mon[191910]: pgmap v2479: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:13 compute-0 nova_compute[355794]: 2025-10-02 20:27:13.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:27:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:27:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:14 compute-0 ceph-mon[191910]: pgmap v2480: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:15 compute-0 nova_compute[355794]: 2025-10-02 20:27:15.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:15 compute-0 podman[491445]: 2025-10-02 20:27:15.721196646 +0000 UTC m=+0.143041918 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 20:27:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:16 compute-0 ceph-mon[191910]: pgmap v2481: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:18 compute-0 nova_compute[355794]: 2025-10-02 20:27:18.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:18 compute-0 ceph-mon[191910]: pgmap v2482: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:27:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1045107596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:27:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:27:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1045107596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:27:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1045107596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:27:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1045107596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:27:20 compute-0 nova_compute[355794]: 2025-10-02 20:27:20.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:21 compute-0 ceph-mon[191910]: pgmap v2483: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:22 compute-0 ceph-mon[191910]: pgmap v2484: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:23 compute-0 nova_compute[355794]: 2025-10-02 20:27:23.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:24 compute-0 ceph-mon[191910]: pgmap v2485: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.611 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.612 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.612 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.612 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:27:24 compute-0 nova_compute[355794]: 2025-10-02 20:27:24.612 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:27:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:27:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374303401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.103 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.181 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.182 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.182 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2374303401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:27:25 compute-0 podman[491488]: 2025-10-02 20:27:25.682035223 +0000 UTC m=+0.103053658 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.681 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.682 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.682 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.682 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:27:25 compute-0 podman[491487]: 2025-10-02 20:27:25.689519568 +0000 UTC m=+0.111498798 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:27:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.931 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.932 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:27:25 compute-0 nova_compute[355794]: 2025-10-02 20:27:25.932 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.097 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:27:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:27:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997435193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:27:26 compute-0 ceph-mon[191910]: pgmap v2486: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.589 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.597 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.623 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.626 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.627 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:27:26 compute-0 nova_compute[355794]: 2025-10-02 20:27:26.628 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2997435193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:27:27 compute-0 nova_compute[355794]: 2025-10-02 20:27:27.641 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:27 compute-0 nova_compute[355794]: 2025-10-02 20:27:27.642 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:27 compute-0 nova_compute[355794]: 2025-10-02 20:27:27.642 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:27:27 compute-0 nova_compute[355794]: 2025-10-02 20:27:27.643 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:27:28 compute-0 nova_compute[355794]: 2025-10-02 20:27:28.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:28 compute-0 nova_compute[355794]: 2025-10-02 20:27:28.440 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:27:28 compute-0 nova_compute[355794]: 2025-10-02 20:27:28.441 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:27:28 compute-0 nova_compute[355794]: 2025-10-02 20:27:28.441 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:27:28 compute-0 nova_compute[355794]: 2025-10-02 20:27:28.442 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:27:28 compute-0 ceph-mon[191910]: pgmap v2487: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:29 compute-0 podman[157186]: time="2025-10-02T20:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:27:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:27:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
Oct 02 20:27:30 compute-0 nova_compute[355794]: 2025-10-02 20:27:30.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:30 compute-0 ceph-mon[191910]: pgmap v2488: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.186 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.205 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.206 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.206 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.207 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.207 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.208 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.208 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:31 compute-0 nova_compute[355794]: 2025-10-02 20:27:31.208 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:27:31 compute-0 openstack_network_exporter[372736]: ERROR   20:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:27:31 compute-0 openstack_network_exporter[372736]: ERROR   20:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:27:31 compute-0 openstack_network_exporter[372736]: ERROR   20:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:27:31 compute-0 openstack_network_exporter[372736]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:27:31 compute-0 openstack_network_exporter[372736]: ERROR   20:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:27:31 compute-0 podman[491553]: 2025-10-02 20:27:31.726344394 +0000 UTC m=+0.141266942 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 20:27:31 compute-0 podman[491554]: 2025-10-02 20:27:31.726327154 +0000 UTC m=+0.134150907 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Oct 02 20:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:27:32.348 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:27:32.349 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:27:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:27:32.350 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:27:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:32 compute-0 ceph-mon[191910]: pgmap v2489: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:33 compute-0 nova_compute[355794]: 2025-10-02 20:27:33.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:33 compute-0 nova_compute[355794]: 2025-10-02 20:27:33.576 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:27:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:27:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:34 compute-0 ceph-mon[191910]: pgmap v2490: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:34 compute-0 podman[491592]: 2025-10-02 20:27:34.698796628 +0000 UTC m=+0.110614285 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:27:34 compute-0 podman[491594]: 2025-10-02 20:27:34.701848367 +0000 UTC m=+0.095727178 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Oct 02 20:27:34 compute-0 podman[491602]: 2025-10-02 20:27:34.702739211 +0000 UTC m=+0.088026989 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:27:34 compute-0 podman[491593]: 2025-10-02 20:27:34.717851373 +0000 UTC m=+0.122336320 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:27:34 compute-0 podman[491595]: 2025-10-02 20:27:34.742231237 +0000 UTC m=+0.130791010 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 20:27:35 compute-0 nova_compute[355794]: 2025-10-02 20:27:35.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:36 compute-0 ceph-mon[191910]: pgmap v2491: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:38 compute-0 nova_compute[355794]: 2025-10-02 20:27:38.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:38 compute-0 ceph-mon[191910]: pgmap v2492: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:38 compute-0 sudo[491693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:38 compute-0 sudo[491693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:38 compute-0 sudo[491693]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:38 compute-0 sudo[491718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:27:38 compute-0 sudo[491718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:38 compute-0 sudo[491718]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:38 compute-0 sudo[491743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:38 compute-0 sudo[491743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:38 compute-0 sudo[491743]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:39 compute-0 sudo[491768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:27:39 compute-0 sudo[491768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:39 compute-0 sudo[491768]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:27:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 8ddd001f-fd8a-41ef-aeec-48d3a51544df does not exist
Oct 02 20:27:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 46117ca3-55ce-4b13-a674-8a498e7abcdf does not exist
Oct 02 20:27:39 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 5b15c543-7b9e-413e-93a4-a0c2da190ec0 does not exist
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:27:39 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:27:39 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:27:40 compute-0 sudo[491824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:40 compute-0 sudo[491824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:40 compute-0 sudo[491824]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:40 compute-0 sudo[491849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:27:40 compute-0 sudo[491849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:40 compute-0 sudo[491849]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:40 compute-0 sudo[491874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:40 compute-0 sudo[491874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:40 compute-0 sudo[491874]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:40 compute-0 nova_compute[355794]: 2025-10-02 20:27:40.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:40 compute-0 sudo[491899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:27:40 compute-0 sudo[491899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:40 compute-0 ceph-mon[191910]: pgmap v2493: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.068485694 +0000 UTC m=+0.070131114 container create e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.039519561 +0000 UTC m=+0.041165001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:41 compute-0 systemd[1]: Started libpod-conmon-e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac.scope.
Oct 02 20:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.210046993 +0000 UTC m=+0.211692433 container init e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.221621403 +0000 UTC m=+0.223266843 container start e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.227207608 +0000 UTC m=+0.228853038 container attach e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:27:41 compute-0 sleepy_feistel[491977]: 167 167
Oct 02 20:27:41 compute-0 systemd[1]: libpod-e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac.scope: Deactivated successfully.
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.233576824 +0000 UTC m=+0.235222264 container died e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 20:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-500141c55d1d5eb96b5115c581ebd0760770d19d331bf966fd198f738b6cb46f-merged.mount: Deactivated successfully.
Oct 02 20:27:41 compute-0 podman[491962]: 2025-10-02 20:27:41.307787783 +0000 UTC m=+0.309433213 container remove e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_feistel, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:27:41 compute-0 systemd[1]: libpod-conmon-e11780ca632fea69cfe1ee9a091f200fb79982037a7fee3c9ccefd62a0a446ac.scope: Deactivated successfully.
Oct 02 20:27:41 compute-0 podman[492000]: 2025-10-02 20:27:41.570487509 +0000 UTC m=+0.066619882 container create 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 02 20:27:41 compute-0 podman[492000]: 2025-10-02 20:27:41.554487894 +0000 UTC m=+0.050620297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:41 compute-0 systemd[1]: Started libpod-conmon-77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5.scope.
Oct 02 20:27:41 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:41 compute-0 podman[492000]: 2025-10-02 20:27:41.74600092 +0000 UTC m=+0.242133373 container init 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:27:41 compute-0 podman[492000]: 2025-10-02 20:27:41.780087456 +0000 UTC m=+0.276219869 container start 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:27:41 compute-0 podman[492000]: 2025-10-02 20:27:41.787321214 +0000 UTC m=+0.283453907 container attach 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:27:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:42 compute-0 ceph-mon[191910]: pgmap v2494: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:43 compute-0 nifty_varahamihira[492015]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:27:43 compute-0 nifty_varahamihira[492015]: --> relative data size: 1.0
Oct 02 20:27:43 compute-0 nifty_varahamihira[492015]: --> All data devices are unavailable
Oct 02 20:27:43 compute-0 systemd[1]: libpod-77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5.scope: Deactivated successfully.
Oct 02 20:27:43 compute-0 systemd[1]: libpod-77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5.scope: Consumed 1.208s CPU time.
Oct 02 20:27:43 compute-0 podman[492000]: 2025-10-02 20:27:43.034482814 +0000 UTC m=+1.530615277 container died 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbcf1d8608a16d2d6ee368343e27526373f2e5084a501e65d5ff64179f07f5b5-merged.mount: Deactivated successfully.
Oct 02 20:27:43 compute-0 podman[492000]: 2025-10-02 20:27:43.149553194 +0000 UTC m=+1.645685577 container remove 77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_varahamihira, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:27:43 compute-0 systemd[1]: libpod-conmon-77fa6748155c01f3905776ce6c5c4a90316ef9e15326b2a92450d095140db5c5.scope: Deactivated successfully.
Oct 02 20:27:43 compute-0 sudo[491899]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:43 compute-0 nova_compute[355794]: 2025-10-02 20:27:43.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:43 compute-0 sudo[492056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:43 compute-0 sudo[492056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:43 compute-0 sudo[492056]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:43 compute-0 sudo[492081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:27:43 compute-0 sudo[492081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:43 compute-0 sudo[492081]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:43 compute-0 sudo[492106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:43 compute-0 sudo[492106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:43 compute-0 sudo[492106]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:43 compute-0 sudo[492131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:27:43 compute-0 sudo[492131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.327207167 +0000 UTC m=+0.072906416 container create 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:27:44 compute-0 systemd[1]: Started libpod-conmon-018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f.scope.
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.306243742 +0000 UTC m=+0.051943021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.462349078 +0000 UTC m=+0.208048327 container init 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.480533841 +0000 UTC m=+0.226233080 container start 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.48817454 +0000 UTC m=+0.233873869 container attach 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:27:44 compute-0 wonderful_joliot[492212]: 167 167
Oct 02 20:27:44 compute-0 systemd[1]: libpod-018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f.scope: Deactivated successfully.
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.495988373 +0000 UTC m=+0.241687672 container died 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2499c10ce983ef64c32f5c4dcaca742576b626128ae2ade957c9f9d670e1c5b2-merged.mount: Deactivated successfully.
Oct 02 20:27:44 compute-0 ceph-mon[191910]: pgmap v2495: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:44 compute-0 nova_compute[355794]: 2025-10-02 20:27:44.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:44 compute-0 nova_compute[355794]: 2025-10-02 20:27:44.577 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:27:44 compute-0 podman[492196]: 2025-10-02 20:27:44.581858654 +0000 UTC m=+0.327557903 container remove 018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:27:44 compute-0 nova_compute[355794]: 2025-10-02 20:27:44.594 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:27:44 compute-0 systemd[1]: libpod-conmon-018e32dc0a49930c8da50289d3232b57b9fffd161562fadd44fcf89e2c716b5f.scope: Deactivated successfully.
Oct 02 20:27:44 compute-0 podman[492235]: 2025-10-02 20:27:44.883256716 +0000 UTC m=+0.084484256 container create 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 20:27:44 compute-0 podman[492235]: 2025-10-02 20:27:44.854262813 +0000 UTC m=+0.055490333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:44 compute-0 systemd[1]: Started libpod-conmon-605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd.scope.
Oct 02 20:27:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ae5c5be9c99bf7df4f43e6a6435441825dc000ee0609ccc0df994f297ddd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ae5c5be9c99bf7df4f43e6a6435441825dc000ee0609ccc0df994f297ddd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ae5c5be9c99bf7df4f43e6a6435441825dc000ee0609ccc0df994f297ddd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ae5c5be9c99bf7df4f43e6a6435441825dc000ee0609ccc0df994f297ddd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:45 compute-0 podman[492235]: 2025-10-02 20:27:45.065677927 +0000 UTC m=+0.266905507 container init 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 20:27:45 compute-0 podman[492235]: 2025-10-02 20:27:45.093988163 +0000 UTC m=+0.295215693 container start 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:27:45 compute-0 podman[492235]: 2025-10-02 20:27:45.09965868 +0000 UTC m=+0.300886210 container attach 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 02 20:27:45 compute-0 nova_compute[355794]: 2025-10-02 20:27:45.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:45 compute-0 nice_kepler[492251]: {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     "0": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "devices": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "/dev/loop3"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             ],
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_name": "ceph_lv0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_size": "21470642176",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "name": "ceph_lv0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "tags": {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_name": "ceph",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.crush_device_class": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.encrypted": "0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_id": "0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.vdo": "0"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             },
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "vg_name": "ceph_vg0"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         }
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     ],
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     "1": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "devices": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "/dev/loop4"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             ],
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_name": "ceph_lv1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_size": "21470642176",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "name": "ceph_lv1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "tags": {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_name": "ceph",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.crush_device_class": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.encrypted": "0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_id": "1",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.vdo": "0"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             },
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "vg_name": "ceph_vg1"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         }
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     ],
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     "2": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "devices": [
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "/dev/loop5"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             ],
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_name": "ceph_lv2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_size": "21470642176",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "name": "ceph_lv2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "tags": {
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.cluster_name": "ceph",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.crush_device_class": "",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.encrypted": "0",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osd_id": "2",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:                 "ceph.vdo": "0"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             },
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "type": "block",
Oct 02 20:27:45 compute-0 nice_kepler[492251]:             "vg_name": "ceph_vg2"
Oct 02 20:27:45 compute-0 nice_kepler[492251]:         }
Oct 02 20:27:45 compute-0 nice_kepler[492251]:     ]
Oct 02 20:27:45 compute-0 nice_kepler[492251]: }
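[note] The JSON block above is the output of `ceph-volume lvm list --format json`, keyed by OSD id; each entry carries the LVM tags cephadm uses to rebind an OSD to its logical volume. A minimal parsing sketch (the helper name and the idea of reading the payload from a string are assumptions, not part of the log):

    import json

    def osd_devices(payload: str) -> dict:
        """Map each OSD id to (lv_path, backing devices, osd_fsid)."""
        data = json.loads(payload)
        out = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[int(osd_id)] = (lv["lv_path"],
                                    lv.get("devices", []),
                                    tags.get("ceph.osd_fsid", ""))
        return out

    # With the data above: {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'],
    #                          'dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48'), ...}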
Oct 02 20:27:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:45 compute-0 systemd[1]: libpod-605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd.scope: Deactivated successfully.
Oct 02 20:27:45 compute-0 podman[492235]: 2025-10-02 20:27:45.968021096 +0000 UTC m=+1.169248636 container died 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 20:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ae5c5be9c99bf7df4f43e6a6435441825dc000ee0609ccc0df994f297ddd24-merged.mount: Deactivated successfully.
Oct 02 20:27:46 compute-0 podman[492235]: 2025-10-02 20:27:46.071565637 +0000 UTC m=+1.272793137 container remove 605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:27:46 compute-0 systemd[1]: libpod-conmon-605858f49fc36d8c60367d0392c57bd7f643b43325c36df7fc4bcd071430a5dd.scope: Deactivated successfully.
Oct 02 20:27:46 compute-0 sudo[492131]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:46 compute-0 podman[492261]: 2025-10-02 20:27:46.13518775 +0000 UTC m=+0.133592992 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:27:46 compute-0 sudo[492289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:46 compute-0 sudo[492289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:46 compute-0 sudo[492289]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:46 compute-0 sudo[492315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:27:46 compute-0 sudo[492315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:46 compute-0 sudo[492315]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:46 compute-0 sudo[492340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:46 compute-0 sudo[492340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:46 compute-0 sudo[492340]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:46 compute-0 ceph-mon[191910]: pgmap v2496: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:46 compute-0 sudo[492365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:27:46 compute-0 sudo[492365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
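[note] The sudo line above shows how the mgr's cephadm module shells out: it stages a cephadm script under /var/lib/ceph/<fsid>/ and runs `ceph-volume ... raw list` inside a short-lived container (the `bold_khayyam` / `gracious_knuth` containers that follow). A hedged reproduction of the same call, flags taken from the log, `--timeout` omitted:

    import subprocess

    subprocess.run(
        ["cephadm",
         "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "ceph-volume", "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
         "--", "raw", "list", "--format", "json"],
        check=True)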
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.127908097 +0000 UTC m=+0.079030954 container create 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.094799687 +0000 UTC m=+0.045922624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:47 compute-0 systemd[1]: Started libpod-conmon-0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953.scope.
Oct 02 20:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.279676571 +0000 UTC m=+0.230799458 container init 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.299432324 +0000 UTC m=+0.250555171 container start 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.304771833 +0000 UTC m=+0.255894700 container attach 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:27:47 compute-0 bold_khayyam[492443]: 167 167
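[note] The bare "167 167" from bold_khayyam is most likely cephadm's uid/gid probe (167 is the ceph user and group inside the image), run before it decides file ownership; this is an inference from cephadm's usual behavior, not stated in the log. The equivalent check inside the container would be:

    import subprocess
    # Hedged: cephadm stats a ceph-owned path in the image to learn the ceph uid/gid.
    subprocess.run(["stat", "-c", "%u %g", "/var/lib/ceph"], check=True)  # prints: 167 167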
Oct 02 20:27:47 compute-0 systemd[1]: libpod-0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953.scope: Deactivated successfully.
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.313324585 +0000 UTC m=+0.264447432 container died 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3f480cc529d439f15d5935f13b20b5dd5b81894bdbcdf5fdaabf5dfb592e4e-merged.mount: Deactivated successfully.
Oct 02 20:27:47 compute-0 podman[492428]: 2025-10-02 20:27:47.366021105 +0000 UTC m=+0.317143962 container remove 0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 20:27:47 compute-0 systemd[1]: libpod-conmon-0df256234a4e11d52c2b15ee99680c8b2d9146ef03064f91559118a63716c953.scope: Deactivated successfully.
Oct 02 20:27:47 compute-0 podman[492467]: 2025-10-02 20:27:47.614265006 +0000 UTC m=+0.083516792 container create 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 02 20:27:47 compute-0 podman[492467]: 2025-10-02 20:27:47.580543679 +0000 UTC m=+0.049795505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:27:47 compute-0 systemd[1]: Started libpod-conmon-48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca.scope.
Oct 02 20:27:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa05873936f2d1fef5b0a1727dc323d96551165232b38ed66435d74abc5dffb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa05873936f2d1fef5b0a1727dc323d96551165232b38ed66435d74abc5dffb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa05873936f2d1fef5b0a1727dc323d96551165232b38ed66435d74abc5dffb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa05873936f2d1fef5b0a1727dc323d96551165232b38ed66435d74abc5dffb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:27:47 compute-0 podman[492467]: 2025-10-02 20:27:47.788679648 +0000 UTC m=+0.257931514 container init 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:27:47 compute-0 podman[492467]: 2025-10-02 20:27:47.82107069 +0000 UTC m=+0.290322506 container start 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 20:27:47 compute-0 podman[492467]: 2025-10-02 20:27:47.830665459 +0000 UTC m=+0.299917335 container attach 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:27:48 compute-0 nova_compute[355794]: 2025-10-02 20:27:48.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:48 compute-0 ceph-mon[191910]: pgmap v2497: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:49 compute-0 gracious_knuth[492483]: {
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_id": 1,
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "type": "bluestore"
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     },
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_id": 2,
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "type": "bluestore"
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     },
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_id": 0,
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:         "type": "bluestore"
Oct 02 20:27:49 compute-0 gracious_knuth[492483]:     }
Oct 02 20:27:49 compute-0 gracious_knuth[492483]: }
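[note] This second JSON block is `ceph-volume raw list --format json`: the same three OSDs, now keyed by osd_uuid and reported through their device-mapper paths. A cross-check sketch (function name assumed) confirming the two listings describe the same OSDs:

    import json

    def listings_agree(lvm_payload: str, raw_payload: str) -> bool:
        """Every osd_uuid from `raw list` should appear as ceph.osd_fsid in `lvm list`."""
        lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                     for lvs in json.loads(lvm_payload).values() for lv in lvs}
        return set(json.loads(raw_payload)) <= lvm_fsids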
Oct 02 20:27:49 compute-0 systemd[1]: libpod-48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca.scope: Deactivated successfully.
Oct 02 20:27:49 compute-0 systemd[1]: libpod-48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca.scope: Consumed 1.276s CPU time.
Oct 02 20:27:49 compute-0 podman[492516]: 2025-10-02 20:27:49.170639711 +0000 UTC m=+0.046378856 container died 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:27:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa05873936f2d1fef5b0a1727dc323d96551165232b38ed66435d74abc5dffb-merged.mount: Deactivated successfully.
Oct 02 20:27:49 compute-0 podman[492516]: 2025-10-02 20:27:49.379428967 +0000 UTC m=+0.255168142 container remove 48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 02 20:27:49 compute-0 systemd[1]: libpod-conmon-48ff3fcf4a247844c3b2f4c93a1a6a7183d88db2f5c4909b90a329e0b13595ca.scope: Deactivated successfully.
Oct 02 20:27:49 compute-0 sudo[492365]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:27:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:27:49 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:27:49 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
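[note] The two mon_command entries are the cephadm mgr module persisting the freshly gathered inventory into the monitor's config-key store under mgr/cephadm/host.compute-0*. A hedged way to read one back (that the stored value is JSON is an assumption about cephadm's internal format):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True).stdout
    devices = json.loads(out)  # assumption: cephadm stores this key as JSON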
Oct 02 20:27:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 98b9d5ad-1535-4927-a59e-08fadea6c2a1 does not exist
Oct 02 20:27:49 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c2030dbf-aed9-4984-b138-3138dff8510f does not exist
Oct 02 20:27:49 compute-0 sudo[492532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:27:49 compute-0 sudo[492532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:49 compute-0 sudo[492532]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:49 compute-0 sudo[492557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:27:49 compute-0 sudo[492557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:27:49 compute-0 sudo[492557]: pam_unix(sudo:session): session closed for user root
Oct 02 20:27:50 compute-0 nova_compute[355794]: 2025-10-02 20:27:50.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:27:50 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:27:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:51 compute-0 nova_compute[355794]: 2025-10-02 20:27:51.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:27:51 compute-0 nova_compute[355794]: 2025-10-02 20:27:51.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:27:51 compute-0 ceph-mon[191910]: pgmap v2498: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:52 compute-0 ceph-mon[191910]: pgmap v2499: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:53 compute-0 nova_compute[355794]: 2025-10-02 20:27:53.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:54 compute-0 ceph-mon[191910]: pgmap v2500: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:55 compute-0 nova_compute[355794]: 2025-10-02 20:27:55.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:27:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:56 compute-0 ceph-mon[191910]: pgmap v2501: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:56 compute-0 podman[492583]: 2025-10-02 20:27:56.663228187 +0000 UTC m=+0.088548312 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 20:27:56 compute-0 podman[492582]: 2025-10-02 20:27:56.680108496 +0000 UTC m=+0.102016012 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:27:58 compute-0 nova_compute[355794]: 2025-10-02 20:27:58.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:27:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:58 compute-0 ceph-mon[191910]: pgmap v2502: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:27:59 compute-0 podman[157186]: time="2025-10-02T20:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:27:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:27:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9110 "" "Go-http-client/1.1"
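[note] The two GETs above are the libpod REST API being polled over the podman socket (the podman_exporter container mounts /run/podman/podman.sock for exactly this). The same query via the Python standard library, assuming the default socket path:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix socket (path assumed: /run/podman/podman.sock)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers))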
Oct 02 20:28:00 compute-0 nova_compute[355794]: 2025-10-02 20:28:00.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:00 compute-0 ceph-mon[191910]: pgmap v2503: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:01 compute-0 openstack_network_exporter[372736]: ERROR   20:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:28:01 compute-0 openstack_network_exporter[372736]: ERROR   20:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:28:01 compute-0 openstack_network_exporter[372736]: ERROR   20:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:28:01 compute-0 openstack_network_exporter[372736]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:28:01 compute-0 openstack_network_exporter[372736]: ERROR   20:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
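[note] These exporter errors repeat every scrape: on a compute node only ovn-controller and ovs-vswitchd run, so there are no control sockets for ovn-northd or the OVS DB server, and no userspace (netdev) datapath for the dpif-netdev/pmd-* appctl calls. A quick check for the sockets appctl would need (paths are the usual defaults, an assumption here):

    import glob
    # Present on this node: ovs-vswitchd / ovn-controller control sockets.
    print(glob.glob("/var/run/openvswitch/*.ctl"))
    # Absent on a compute-only node, hence the errors above:
    print(glob.glob("/run/ovn/ovn-northd.*.ctl"))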
Oct 02 20:28:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:02 compute-0 ceph-mon[191910]: pgmap v2504: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:02 compute-0 podman[492623]: 2025-10-02 20:28:02.653142394 +0000 UTC m=+0.084382194 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:28:02 compute-0 podman[492624]: 2025-10-02 20:28:02.705672809 +0000 UTC m=+0.123395957 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 20:28:03 compute-0 nova_compute[355794]: 2025-10-02 20:28:03.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:28:03
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'backups']
Oct 02 20:28:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
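[note] This balancer pass is a no-op: in upmap mode it walked all 11 pools and prepared 0 of a possible 10 changes, consistent with 321/321 PGs active+clean. Its state can be queried directly (a hedged example; JSON output for mgr commands is the usual behavior):

    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("active"), status.get("mode"))  # expected here: True upmap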
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:04 compute-0 ceph-mon[191910]: pgmap v2505: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:28:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
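[note] The rbd_support module reloads its trash-purge and mirror-snapshot schedules per pool (vms, volumes, backups, images), once per handler, which is why each pool appears twice above. The loaded schedules can be listed with the rbd CLI (hedged; flag spelling per current rbd releases):

    import subprocess
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"], check=True)
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--recursive"], check=True)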
Oct 02 20:28:05 compute-0 nova_compute[355794]: 2025-10-02 20:28:05.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:05 compute-0 podman[492663]: 2025-10-02 20:28:05.696554382 +0000 UTC m=+0.111110829 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 20:28:05 compute-0 podman[492665]: 2025-10-02 20:28:05.735242047 +0000 UTC m=+0.135689477 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 02 20:28:05 compute-0 podman[492667]: 2025-10-02 20:28:05.735563335 +0000 UTC m=+0.129841655 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:28:05 compute-0 podman[492664]: 2025-10-02 20:28:05.749961259 +0000 UTC m=+0.155089511 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:28:05 compute-0 podman[492666]: 2025-10-02 20:28:05.774019685 +0000 UTC m=+0.166786796 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:28:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:06 compute-0 ceph-mon[191910]: pgmap v2506: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:08 compute-0 nova_compute[355794]: 2025-10-02 20:28:08.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:08 compute-0 ceph-mon[191910]: pgmap v2507: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:10 compute-0 nova_compute[355794]: 2025-10-02 20:28:10.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:10 compute-0 ceph-mon[191910]: pgmap v2508: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:12 compute-0 ceph-mon[191910]: pgmap v2509: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:13 compute-0 nova_compute[355794]: 2025-10-02 20:28:13.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:28:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:28:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:28:14 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1361 writes, 6226 keys, 1361 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                            Interval WAL: 1361 writes, 1361 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     86.5      0.74              0.31        36    0.021       0      0       0.0       0.0
                                              L6      1/0    7.30 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.2    127.2    104.9      2.54              1.19        35    0.072    193K    19K       0.0       0.0
                                             Sum      1/0    7.30 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.2     98.5    100.8      3.27              1.50        71    0.046    193K    19K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5     90.0     91.9      0.53              0.22        10    0.053     33K   2554       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    127.2    104.9      2.54              1.19        35    0.072    193K    19K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     86.9      0.73              0.31        35    0.021       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.062, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.32 GB read, 0.07 MB/s read, 3.3 seconds
                                            Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x557df35531f0#2 capacity: 304.00 MB usage: 41.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000495 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2798,39.63 MB,13.0372%) FilterBlock(72,541.11 KB,0.173825%) IndexBlock(72,886.39 KB,0.284742%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Oct 02 20:28:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:14 compute-0 ceph-mon[191910]: pgmap v2510: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:15 compute-0 nova_compute[355794]: 2025-10-02 20:28:15.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:16 compute-0 ceph-mon[191910]: pgmap v2511: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:16 compute-0 podman[492766]: 2025-10-02 20:28:16.689293045 +0000 UTC m=+0.111545440 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:28:18 compute-0 nova_compute[355794]: 2025-10-02 20:28:18.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:18 compute-0 ceph-mon[191910]: pgmap v2512: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:28:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3675942417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:28:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:28:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3675942417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:28:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3675942417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:28:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3675942417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:28:20 compute-0 nova_compute[355794]: 2025-10-02 20:28:20.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:21 compute-0 ceph-mon[191910]: pgmap v2513: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:22 compute-0 ceph-mon[191910]: pgmap v2514: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:23 compute-0 nova_compute[355794]: 2025-10-02 20:28:23.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:24 compute-0 ceph-mon[191910]: pgmap v2515: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.612 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.612 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.613 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.940 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.941 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.941 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:28:25 compute-0 nova_compute[355794]: 2025-10-02 20:28:25.942 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:28:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:26 compute-0 ceph-mon[191910]: pgmap v2516: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:27 compute-0 podman[492785]: 2025-10-02 20:28:27.723661079 +0000 UTC m=+0.139617339 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:28:27 compute-0 podman[492786]: 2025-10-02 20:28:27.734554223 +0000 UTC m=+0.143529811 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.760 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.779 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.779 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.780 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.781 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.781 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.781 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.807 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.808 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.809 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.809 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:28:27 compute-0 nova_compute[355794]: 2025-10-02 20:28:27.810 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:28:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618443802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.304 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:28:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2618443802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.437 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.437 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.438 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:28:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.950 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.952 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3639MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.952 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:28:28 compute-0 nova_compute[355794]: 2025-10-02 20:28:28.952 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.040 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.041 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.041 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.077 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:28:29 compute-0 ceph-mon[191910]: pgmap v2517: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:28:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117051184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.571 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.581 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.597 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.599 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:28:29 compute-0 nova_compute[355794]: 2025-10-02 20:28:29.599 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:28:29 compute-0 podman[157186]: time="2025-10-02T20:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:28:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:28:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9126 "" "Go-http-client/1.1"
Oct 02 20:28:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2117051184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:28:30 compute-0 nova_compute[355794]: 2025-10-02 20:28:30.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:31 compute-0 ceph-mon[191910]: pgmap v2518: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:31 compute-0 nova_compute[355794]: 2025-10-02 20:28:31.395 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:31 compute-0 nova_compute[355794]: 2025-10-02 20:28:31.396 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:31 compute-0 openstack_network_exporter[372736]: ERROR   20:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:28:31 compute-0 openstack_network_exporter[372736]: ERROR   20:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:28:31 compute-0 openstack_network_exporter[372736]: ERROR   20:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:28:31 compute-0 openstack_network_exporter[372736]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:28:31 compute-0 openstack_network_exporter[372736]: ERROR   20:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:28:31 compute-0 nova_compute[355794]: 2025-10-02 20:28:31.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:31 compute-0 nova_compute[355794]: 2025-10-02 20:28:31.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:28:32.349 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:28:32.350 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:28:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:28:32.351 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:28:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:32 compute-0 ceph-mon[191910]: pgmap v2519: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:33 compute-0 nova_compute[355794]: 2025-10-02 20:28:33.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:33 compute-0 nova_compute[355794]: 2025-10-02 20:28:33.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:33 compute-0 podman[492870]: 2025-10-02 20:28:33.693360341 +0000 UTC m=+0.109667200 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64)
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:33 compute-0 podman[492869]: 2025-10-02 20:28:33.706219525 +0000 UTC m=+0.137483583 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:28:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:28:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:34 compute-0 ceph-mon[191910]: pgmap v2520: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:35 compute-0 nova_compute[355794]: 2025-10-02 20:28:35.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:36 compute-0 nova_compute[355794]: 2025-10-02 20:28:36.569 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:28:36 compute-0 ceph-mon[191910]: pgmap v2521: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:36 compute-0 podman[492916]: 2025-10-02 20:28:36.67136728 +0000 UTC m=+0.085111113 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:28:36 compute-0 podman[492907]: 2025-10-02 20:28:36.67522139 +0000 UTC m=+0.107700700 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:28:36 compute-0 podman[492909]: 2025-10-02 20:28:36.678439383 +0000 UTC m=+0.096733095 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 20:28:36 compute-0 podman[492908]: 2025-10-02 20:28:36.706366369 +0000 UTC m=+0.135518193 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 20:28:36 compute-0 podman[492910]: 2025-10-02 20:28:36.758916505 +0000 UTC m=+0.167670439 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
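
The burst of health_status events above covers the EDPM-managed containers on this host (node_exporter, ovn_metadata_agent, openstack_network_exporter, iscsid, ovn_controller), all healthy with health_failing_streak=0. A minimal sketch that lists the same set, filtering on the managed_by=edpm_ansible label present in each event:

    # Minimal sketch: show the EDPM-managed containers and their status.
    podman ps --filter label=managed_by=edpm_ansible \
              --format '{{.Names}}\t{{.Status}}'
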
Oct 02 20:28:38 compute-0 nova_compute[355794]: 2025-10-02 20:28:38.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:38 compute-0 ceph-mon[191910]: pgmap v2522: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:40 compute-0 nova_compute[355794]: 2025-10-02 20:28:40.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:40 compute-0 ceph-mon[191910]: pgmap v2523: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:40 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:42 compute-0 ceph-mon[191910]: pgmap v2524: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:43 compute-0 nova_compute[355794]: 2025-10-02 20:28:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:44 compute-0 ceph-mon[191910]: pgmap v2525: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:45 compute-0 nova_compute[355794]: 2025-10-02 20:28:45.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:45 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:46 compute-0 ceph-mon[191910]: pgmap v2526: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
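
The paired ceph-mgr/ceph-mon pgmap lines are the cluster's ~2-second heartbeat: all 321 PGs active+clean over 60 GiB of raw capacity, and the recurring _set_new_cache_sizes lines are the mon autotuning its ~1 GB cache, not an error. A minimal sketch of pulling the same summary on demand, assuming an admin keyring is available:

    # Minimal sketch: the CLI view matching the pgmap lines above.
    ceph -s        # overall health plus mon/mgr/osd state
    ceph pg stat   # one-line PG summary
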
Oct 02 20:28:47 compute-0 podman[493006]: 2025-10-02 20:28:47.710603931 +0000 UTC m=+0.131763955 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 02 20:28:48 compute-0 nova_compute[355794]: 2025-10-02 20:28:48.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:48 compute-0 ceph-mon[191910]: pgmap v2527: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:49 compute-0 sudo[493026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:49 compute-0 sudo[493026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:49 compute-0 sudo[493026]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:49 compute-0 sudo[493051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:28:49 compute-0 sudo[493051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:50 compute-0 sudo[493051]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:50 compute-0 sudo[493076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:50 compute-0 sudo[493076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:50 compute-0 sudo[493076]: pam_unix(sudo:session): session closed for user root
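
The repeated /bin/true and /bin/which python3 sudo pairs above are cephadm's pre-flight probe over its SSH connection: confirm passwordless sudo works, then locate python3, before shipping the real command. A minimal sketch of the same pattern:

    # Minimal sketch: -n fails instead of prompting, which is what a
    # non-interactive orchestrator needs from these probes.
    sudo -n true && sudo -n which python3
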
Oct 02 20:28:50 compute-0 sudo[493101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 20:28:50 compute-0 sudo[493101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
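
With the probes passed, the mgr runs the host-local cephadm copy (pinned by image digest, per the COMMAND line above) to inventory the daemons deployed here. A minimal sketch of the equivalent by hand on a cephadm-managed host:

    # Minimal sketch: 'cephadm ls' emits a JSON array describing each
    # deployed daemon (mon, mgr, osd, crash, ...).
    sudo cephadm ls | python3 -m json.tool | head -n 20
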
Oct 02 20:28:50 compute-0 nova_compute[355794]: 2025-10-02 20:28:50.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:50 compute-0 ceph-mon[191910]: pgmap v2528: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:50 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:51 compute-0 podman[493195]: 2025-10-02 20:28:51.12235996 +0000 UTC m=+0.133621933 container exec a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:28:51 compute-0 podman[493195]: 2025-10-02 20:28:51.286871236 +0000 UTC m=+0.298133159 container exec_died a22d7e12819e15e62b68ee5a31e785102282a0d25d678266cf7bfc45478723c1 (image=quay.io/ceph/ceph:v18, name=ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 20:28:52 compute-0 sudo[493101]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:28:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:52 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:28:52 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:52 compute-0 sudo[493338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:52 compute-0 sudo[493338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:52 compute-0 sudo[493338]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:52 compute-0 sudo[493363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:28:52 compute-0 sudo[493363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:52 compute-0 sudo[493363]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:52 compute-0 sudo[493388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:52 compute-0 sudo[493388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:52 compute-0 sudo[493388]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:52 compute-0 sudo[493413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:28:52 compute-0 sudo[493413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:53 compute-0 nova_compute[355794]: 2025-10-02 20:28:53.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:53 compute-0 sudo[493413]: pam_unix(sudo:session): session closed for user root
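
The gather-facts run just audited is cephadm collecting host inventory for the orchestrator. A minimal sketch of the standalone equivalent:

    # Minimal sketch: gather-facts returns host hardware/OS facts as JSON
    # (CPU, memory, NICs, kernel), mirroring the audited run above.
    sudo cephadm gather-facts | python3 -m json.tool | head -n 20
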
Oct 02 20:28:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:53 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:53 compute-0 ceph-mon[191910]: pgmap v2529: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev e589dd54-ccc2-4094-a88a-91a58dd5a9c3 does not exist
Oct 02 20:28:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev c3186d16-5b58-4481-8ed5-54d3968d9b31 does not exist
Oct 02 20:28:53 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 77fd3eb8-e877-4b65-b7fa-b7dc515c7a80 does not exist
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:28:53 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:28:53 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
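
The audited mon_commands above are the mgr assembling what OSD provisioning needs: a minimal ceph.conf, the bootstrap-osd keyring, and the list of destroyed OSDs eligible for ID reuse. A minimal sketch of the CLI equivalents (the destroyed state filter is taken verbatim from the audit entry):

    # Minimal sketch: CLI forms of the mon_command audit entries above.
    ceph config generate-minimal-conf
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed --format json
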
Oct 02 20:28:53 compute-0 sudo[493469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:53 compute-0 sudo[493469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:53 compute-0 sudo[493469]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:53 compute-0 sudo[493494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:28:53 compute-0 sudo[493494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:53 compute-0 sudo[493494]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:53 compute-0 sudo[493519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:53 compute-0 sudo[493519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:53 compute-0 sudo[493519]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:53 compute-0 sudo[493544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:28:53 compute-0 sudo[493544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
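
This is the actual provisioning attempt: cephadm wraps ceph-volume inside the pinned ceph image and batches three pre-created logical volumes into OSDs, with --no-systemd because cephadm manages the units itself. A minimal sketch of the wrapped call, using --report to preview the plan without touching the devices:

    # Minimal sketch: the ceph-volume call cephadm wraps above; --report
    # prints the intended layout instead of creating anything.
    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --no-systemd --report
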
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.435035225 +0000 UTC m=+0.077698280 container create 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:28:54 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.411158135 +0000 UTC m=+0.053821170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:28:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:54 compute-0 systemd[1]: Started libpod-conmon-8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060.scope.
Oct 02 20:28:54 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.577819516 +0000 UTC m=+0.220482581 container init 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.585444634 +0000 UTC m=+0.228107659 container start 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.590008513 +0000 UTC m=+0.232671578 container attach 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct 02 20:28:54 compute-0 admiring_ptolemy[493621]: 167 167
Oct 02 20:28:54 compute-0 systemd[1]: libpod-8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060.scope: Deactivated successfully.
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.599350195 +0000 UTC m=+0.242013240 container died 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3513897fbe9715ff926d9ce5bb7bd0591bfc3b1d9bc7c06bb2a50698696aa620-merged.mount: Deactivated successfully.
Oct 02 20:28:54 compute-0 podman[493606]: 2025-10-02 20:28:54.669334884 +0000 UTC m=+0.311997899 container remove 8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 20:28:54 compute-0 systemd[1]: libpod-conmon-8571a277cf3168e218e8ec8d7179b32739464839e0d43fadfc909c153837b060.scope: Deactivated successfully.
Oct 02 20:28:54 compute-0 podman[493644]: 2025-10-02 20:28:54.967530813 +0000 UTC m=+0.096673313 container create 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 02 20:28:55 compute-0 podman[493644]: 2025-10-02 20:28:54.934012352 +0000 UTC m=+0.063154892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:28:55 compute-0 systemd[1]: Started libpod-conmon-49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b.scope.
Oct 02 20:28:55 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:55 compute-0 podman[493644]: 2025-10-02 20:28:55.146480763 +0000 UTC m=+0.275623273 container init 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 02 20:28:55 compute-0 podman[493644]: 2025-10-02 20:28:55.156072042 +0000 UTC m=+0.285214502 container start 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 20:28:55 compute-0 podman[493644]: 2025-10-02 20:28:55.160877037 +0000 UTC m=+0.290019537 container attach 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:28:55 compute-0 nova_compute[355794]: 2025-10-02 20:28:55.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:55 compute-0 ceph-mon[191910]: pgmap v2530: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:55 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:28:55 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 02 20:28:55 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:55.990779) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:28:55 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 02 20:28:55 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436935990824, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1134, "num_deletes": 256, "total_data_size": 1608822, "memory_usage": 1631688, "flush_reason": "Manual Compaction"}
Oct 02 20:28:55 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436936005511, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1581795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51024, "largest_seqno": 52157, "table_properties": {"data_size": 1576374, "index_size": 2817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11454, "raw_average_key_size": 19, "raw_value_size": 1565464, "raw_average_value_size": 2639, "num_data_blocks": 126, "num_entries": 593, "num_filter_entries": 593, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436828, "oldest_key_time": 1759436828, "file_creation_time": 1759436935, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 14813 microseconds, and 4410 cpu microseconds.
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.005594) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1581795 bytes OK
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.005617) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.010200) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.010222) EVENT_LOG_v1 {"time_micros": 1759436936010215, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.010249) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1603580, prev total WAL file size 1603580, number of live WAL files 2.
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.012281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323538' seq:0, type:0; will stop at (end)
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1544KB)], [122(7480KB)]
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436936012454, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9241328, "oldest_snapshot_seqno": -1}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6589 keys, 9129385 bytes, temperature: kUnknown
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436936088911, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9129385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9087303, "index_size": 24498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 172710, "raw_average_key_size": 26, "raw_value_size": 8970074, "raw_average_value_size": 1361, "num_data_blocks": 972, "num_entries": 6589, "num_filter_entries": 6589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759436936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.089254) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9129385 bytes
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.106254) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 119.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(11.6) write-amplify(5.8) OK, records in: 7113, records dropped: 524 output_compression: NoCompression
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.106286) EVENT_LOG_v1 {"time_micros": 1759436936106270, "job": 74, "event": "compaction_finished", "compaction_time_micros": 76556, "compaction_time_cpu_micros": 46870, "output_level": 6, "num_output_files": 1, "total_output_size": 9129385, "num_input_records": 7113, "num_output_records": 6589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436936107138, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759436936110192, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.011833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.110334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.110340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.110342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.110344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:28:56 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:28:56.110346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
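
The rocksdb block above is routine housekeeping in the mon's backing store, not a fault: job 73 flushes a ~1.5 MB memtable to an L0 table, job 74 manually compacts L0 plus the existing L6 file into one ~9 MB L6 table, and the obsolete WAL and SST files are deleted. A minimal sketch for triggering and sizing the same store by hand:

    # Minimal sketch: request a mon store compaction and check the store
    # size; the path is the one named in the delete_scheduler lines above.
    ceph tell mon.compute-0 compact
    du -sh /var/lib/ceph/mon/ceph-compute-0/store.db
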
Oct 02 20:28:56 compute-0 dazzling_chandrasekhar[493660]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:28:56 compute-0 dazzling_chandrasekhar[493660]: --> relative data size: 1.0
Oct 02 20:28:56 compute-0 dazzling_chandrasekhar[493660]: --> All data devices are unavailable
Oct 02 20:28:56 compute-0 systemd[1]: libpod-49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b.scope: Deactivated successfully.
Oct 02 20:28:56 compute-0 systemd[1]: libpod-49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b.scope: Consumed 1.228s CPU time.
Oct 02 20:28:56 compute-0 podman[493644]: 2025-10-02 20:28:56.451340122 +0000 UTC m=+1.580482592 container died 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-85a1f5c034d03ade6ee862969af68235da370146a74c5e955575a724d2726c38-merged.mount: Deactivated successfully.
Oct 02 20:28:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:56 compute-0 podman[493644]: 2025-10-02 20:28:56.539868162 +0000 UTC m=+1.669010632 container remove 49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:28:56 compute-0 systemd[1]: libpod-conmon-49f1a03f8177360f7d8295bc0fd3e0b00f0237e93b5c4c4eeac20cc121d9753b.scope: Deactivated successfully.
Oct 02 20:28:56 compute-0 sudo[493544]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:56 compute-0 sudo[493700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:56 compute-0 sudo[493700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:56 compute-0 sudo[493700]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:56 compute-0 sudo[493725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:28:56 compute-0 sudo[493725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:56 compute-0 sudo[493725]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:56 compute-0 sudo[493750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:56 compute-0 sudo[493750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:56 compute-0 sudo[493750]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:57 compute-0 ceph-mon[191910]: pgmap v2531: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:57 compute-0 sudo[493775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:28:57 compute-0 sudo[493775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
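
The earlier batch run ended with "All data devices are unavailable": ceph-volume filtered out all three LVs, which typically means they are already prepared and in use as OSDs (consistent with the cluster already serving 321 PGs). The orchestrator therefore falls back to the lvm list call audited above to read the existing OSD metadata. A minimal sketch of that listing:

    # Minimal sketch: each entry in the JSON carries osd_id, osd_fsid and
    # the backing VG/LV tags for the OSDs already on these volumes.
    ceph-volume lvm list --format json
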
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.665117954 +0000 UTC m=+0.075782191 container create c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.628910903 +0000 UTC m=+0.039575190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:28:57 compute-0 systemd[1]: Started libpod-conmon-c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd.scope.
Oct 02 20:28:57 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.806327993 +0000 UTC m=+0.216992270 container init c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.817917075 +0000 UTC m=+0.228581282 container start c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.823183061 +0000 UTC m=+0.233847308 container attach c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 02 20:28:57 compute-0 gifted_turing[493854]: 167 167
Oct 02 20:28:57 compute-0 systemd[1]: libpod-c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd.scope: Deactivated successfully.
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.831778625 +0000 UTC m=+0.242442852 container died c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 20:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c24e2379d11c228cc9c4d21dc76ffe7c17cced30d7fddde3dadaf8896b91b5-merged.mount: Deactivated successfully.
Oct 02 20:28:57 compute-0 podman[493839]: 2025-10-02 20:28:57.902953224 +0000 UTC m=+0.313617411 container remove c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 20:28:57 compute-0 podman[493855]: 2025-10-02 20:28:57.911550878 +0000 UTC m=+0.135084542 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:28:57 compute-0 systemd[1]: libpod-conmon-c40db37b98bbff999f53ef29cb9c3bd7efe79e91df0a56bb4f674b53417417dd.scope: Deactivated successfully.
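The gifted_turing container above lives for well under a second, prints only "167 167", and is immediately torn down. 167:167 is the ceph user and group in the upstream Ceph images, so this create/start/died/remove burst is consistent with cephadm probing the uid/gid it should own data directories as before running the real ceph-volume call. A minimal sketch of such a probe, assuming podman is on PATH; the exact command cephadm runs (stat on /var/lib/ceph inside the image) is an assumption, not confirmed by the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical uid/gid probe: run a throwaway container and stat a path
    # owned by the ceph user. "--rm" makes podman remove the container as
    # soon as it exits, matching the lifecycle events in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # expected: 167 167 for upstream Ceph images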
Oct 02 20:28:57 compute-0 podman[493857]: 2025-10-02 20:28:57.938321613 +0000 UTC m=+0.156365164 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm)
Oct 02 20:28:58 compute-0 podman[493920]: 2025-10-02 20:28:58.172835888 +0000 UTC m=+0.088839020 container create 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 02 20:28:58 compute-0 podman[493920]: 2025-10-02 20:28:58.147975422 +0000 UTC m=+0.063978524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:28:58 compute-0 systemd[1]: Started libpod-conmon-07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417.scope.
Oct 02 20:28:58 compute-0 nova_compute[355794]: 2025-10-02 20:28:58.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:28:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63177c8e054b401f33e775adbd2f6247f0e291200d792bfe0dd9a6da6848b57b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63177c8e054b401f33e775adbd2f6247f0e291200d792bfe0dd9a6da6848b57b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63177c8e054b401f33e775adbd2f6247f0e291200d792bfe0dd9a6da6848b57b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63177c8e054b401f33e775adbd2f6247f0e291200d792bfe0dd9a6da6848b57b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
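The xfs messages above are informational, not errors: these filesystems were created without the bigtime feature, so their inode timestamps top out at 0x7fffffff seconds after the epoch. A one-liner to see what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed timestamp, exactly the value
    # the kernel prints in the messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00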
Oct 02 20:28:58 compute-0 podman[493920]: 2025-10-02 20:28:58.325638579 +0000 UTC m=+0.241641691 container init 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 02 20:28:58 compute-0 podman[493920]: 2025-10-02 20:28:58.344679763 +0000 UTC m=+0.260682885 container start 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 20:28:58 compute-0 podman[493920]: 2025-10-02 20:28:58.351940902 +0000 UTC m=+0.267944264 container attach 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:28:58 compute-0 nova_compute[355794]: 2025-10-02 20:28:58.442 2 DEBUG oslo_concurrency.processutils [None req-8c2cdf5c-3933-4140-82c0-439d43948d4c 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:28:58 compute-0 nova_compute[355794]: 2025-10-02 20:28:58.503 2 DEBUG oslo_concurrency.processutils [None req-8c2cdf5c-3933-4140-82c0-439d43948d4c 811fb7ac717e4ba9b9874e5454ee08f4 1c35486f37b94d43a7bf2f2fa09c70b9 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:28:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:58 compute-0 ceph-mon[191910]: pgmap v2532: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:28:59 compute-0 zen_rubin[493936]: {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     "0": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "devices": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "/dev/loop3"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             ],
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_name": "ceph_lv0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_size": "21470642176",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "name": "ceph_lv0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "tags": {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_name": "ceph",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.crush_device_class": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.encrypted": "0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_id": "0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.vdo": "0"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             },
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "vg_name": "ceph_vg0"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         }
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     ],
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     "1": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "devices": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "/dev/loop4"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             ],
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_name": "ceph_lv1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_size": "21470642176",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "name": "ceph_lv1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "tags": {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_name": "ceph",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.crush_device_class": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.encrypted": "0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_id": "1",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.vdo": "0"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             },
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "vg_name": "ceph_vg1"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         }
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     ],
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     "2": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "devices": [
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "/dev/loop5"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             ],
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_name": "ceph_lv2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_size": "21470642176",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "name": "ceph_lv2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "tags": {
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.cluster_name": "ceph",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.crush_device_class": "",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.encrypted": "0",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osd_id": "2",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:                 "ceph.vdo": "0"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             },
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "type": "block",
Oct 02 20:28:59 compute-0 zen_rubin[493936]:             "vg_name": "ceph_vg2"
Oct 02 20:28:59 compute-0 zen_rubin[493936]:         }
Oct 02 20:28:59 compute-0 zen_rubin[493936]:     ]
Oct 02 20:28:59 compute-0 zen_rubin[493936]: }
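The zen_rubin container is the actual `ceph-volume lvm list --format json` run requested earlier under sudo; its stdout is the JSON document above, keyed by OSD id, with one LV record per OSD. A minimal sketch of consuming that document, assuming it has been captured into a string named `raw` (the key names are exactly the ones visible in the output):

    import json

    def osd_map(raw: str) -> dict[int, dict[str, str]]:
        """Map osd_id -> interesting LV attributes from
        `ceph-volume lvm list --format json` output."""
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                tags = lv["tags"]
                out[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "device": lv["devices"][0],
                    "osd_fsid": tags["ceph.osd_fsid"],
                    "cluster_fsid": tags["ceph.cluster_fsid"],
                }
        return out

    # For the output above: osd_map(raw)[0]["lv_path"] == "/dev/ceph_vg0/ceph_lv0"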
Oct 02 20:28:59 compute-0 systemd[1]: libpod-07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417.scope: Deactivated successfully.
Oct 02 20:28:59 compute-0 podman[493920]: 2025-10-02 20:28:59.180127483 +0000 UTC m=+1.096130615 container died 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-63177c8e054b401f33e775adbd2f6247f0e291200d792bfe0dd9a6da6848b57b-merged.mount: Deactivated successfully.
Oct 02 20:28:59 compute-0 podman[493920]: 2025-10-02 20:28:59.281017475 +0000 UTC m=+1.197020577 container remove 07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:28:59 compute-0 systemd[1]: libpod-conmon-07bf64c63e61a993220b4fcd97ea9c70963b0c7b86056820c912ecdd22388417.scope: Deactivated successfully.
Oct 02 20:28:59 compute-0 sudo[493775]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:59 compute-0 sudo[493958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:59 compute-0 sudo[493958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:59 compute-0 sudo[493958]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:59 compute-0 sudo[493983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:28:59 compute-0 sudo[493983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:59 compute-0 sudo[493983]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:59 compute-0 sudo[494008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:28:59 compute-0 sudo[494008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:28:59 compute-0 sudo[494008]: pam_unix(sudo:session): session closed for user root
Oct 02 20:28:59 compute-0 podman[157186]: time="2025-10-02T20:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:28:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:28:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9123 "" "Go-http-client/1.1"
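The two GET lines above are the podman system service answering podman_exporter over the libpod REST API; the exporter's config (see the earlier health_status line) points it at unix:///run/podman/podman.sock. A minimal sketch of issuing the same containers/json query with only the standard library, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket, enough to talk to the libpod API."""
        def __init__(self, path: str):
            super().__init__("localhost")  # host is ignored; we dial the socket
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")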
Oct 02 20:28:59 compute-0 sudo[494033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:28:59 compute-0 sudo[494033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.367264143 +0000 UTC m=+0.079102727 container create 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 02 20:29:00 compute-0 nova_compute[355794]: 2025-10-02 20:29:00.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.332228032 +0000 UTC m=+0.044066676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:29:00 compute-0 systemd[1]: Started libpod-conmon-108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1.scope.
Oct 02 20:29:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:29:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.521684236 +0000 UTC m=+0.233522870 container init 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.530254009 +0000 UTC m=+0.242092583 container start 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.536960923 +0000 UTC m=+0.248799537 container attach 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:29:00 compute-0 infallible_maxwell[494110]: 167 167
Oct 02 20:29:00 compute-0 systemd[1]: libpod-108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1.scope: Deactivated successfully.
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.540179306 +0000 UTC m=+0.252017880 container died 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 20:29:00 compute-0 ceph-mon[191910]: pgmap v2533: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-877b1dea3d07ce1e830edf835a8208c0378269e772332a592b551eb4c954c562-merged.mount: Deactivated successfully.
Oct 02 20:29:00 compute-0 podman[494095]: 2025-10-02 20:29:00.618615695 +0000 UTC m=+0.330454239 container remove 108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 02 20:29:00 compute-0 systemd[1]: libpod-conmon-108872c62e39a80c642e3b5db4611ca9b4a4d3176de3e61c0b0d7cf6b83730b1.scope: Deactivated successfully.
Oct 02 20:29:00 compute-0 podman[494133]: 2025-10-02 20:29:00.860763877 +0000 UTC m=+0.064897627 container create 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:29:00 compute-0 podman[494133]: 2025-10-02 20:29:00.838592371 +0000 UTC m=+0.042726141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:29:00 compute-0 systemd[1]: Started libpod-conmon-59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91.scope.
Oct 02 20:29:00 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d017005ef3ddd7c3037c4a3888049b2d6893c5ef52209ce753428c4a89c00b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d017005ef3ddd7c3037c4a3888049b2d6893c5ef52209ce753428c4a89c00b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d017005ef3ddd7c3037c4a3888049b2d6893c5ef52209ce753428c4a89c00b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d017005ef3ddd7c3037c4a3888049b2d6893c5ef52209ce753428c4a89c00b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:29:00 compute-0 podman[494133]: 2025-10-02 20:29:00.981184957 +0000 UTC m=+0.185318727 container init 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 20:29:00 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:00 compute-0 podman[494133]: 2025-10-02 20:29:00.997218103 +0000 UTC m=+0.201351853 container start 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 02 20:29:01 compute-0 podman[494133]: 2025-10-02 20:29:01.001973297 +0000 UTC m=+0.206107057 container attach 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: ERROR   20:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: ERROR   20:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: ERROR   20:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: ERROR   20:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:29:01 compute-0 openstack_network_exporter[372736]: 
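The openstack_network_exporter errors above repeat on every scrape: the ovsdb-server and ovn-northd collectors look for daemon control sockets that do not exist on this node, fail, and the rest of the exporter carries on. A quick existence check for the conventional control-socket locations, assuming the usual rundirs (the exporter's exact search path is not shown in the log):

    import glob

    # OVS/OVN daemons create <name>.<pid>.ctl control sockets in their rundir.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")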
Oct 02 20:29:02 compute-0 trusting_moser[494150]: {
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_id": 1,
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "type": "bluestore"
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     },
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_id": 2,
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "type": "bluestore"
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     },
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_id": 0,
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:29:02 compute-0 trusting_moser[494150]:         "type": "bluestore"
Oct 02 20:29:02 compute-0 trusting_moser[494150]:     }
Oct 02 20:29:02 compute-0 trusting_moser[494150]: }
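trusting_moser is the follow-up `ceph-volume raw list --format json` call from the second sudo command; unlike lvm list it is keyed by osd_uuid and reports the device-mapper path. Both views describe the same three bluestore OSDs, which a short consistency check can confirm, assuming the two JSON documents have been captured into `lvm_raw` and `raw_raw`:

    import json

    def check(lvm_raw: str, raw_raw: str) -> None:
        """Assert that `lvm list` and `raw list` agree on osd_id <-> osd_fsid."""
        lvm = {int(osd_id): lvs[0]["tags"]["ceph.osd_fsid"]
               for osd_id, lvs in json.loads(lvm_raw).items()}
        raw = {entry["osd_id"]: uuid
               for uuid, entry in json.loads(raw_raw).items()}
        assert lvm == raw, (lvm, raw)

    # For the output above both sides map
    # {0: "dbf9fafa-...", 1: "82844b2c-...", 2: "afe0acfe-..."}.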
Oct 02 20:29:02 compute-0 systemd[1]: libpod-59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91.scope: Deactivated successfully.
Oct 02 20:29:02 compute-0 systemd[1]: libpod-59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91.scope: Consumed 1.181s CPU time.
Oct 02 20:29:02 compute-0 podman[494133]: 2025-10-02 20:29:02.187460114 +0000 UTC m=+1.391593904 container died 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d017005ef3ddd7c3037c4a3888049b2d6893c5ef52209ce753428c4a89c00b35-merged.mount: Deactivated successfully.
Oct 02 20:29:02 compute-0 podman[494133]: 2025-10-02 20:29:02.306012724 +0000 UTC m=+1.510146484 container remove 59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:29:02 compute-0 systemd[1]: libpod-conmon-59c50d81c4a9706d7b2c33f17d3fe3a5310be301d10f8c78793e46aabc76cd91.scope: Deactivated successfully.
Oct 02 20:29:02 compute-0 sudo[494033]: pam_unix(sudo:session): session closed for user root
Oct 02 20:29:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:29:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:29:02 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:29:02 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
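With both scans done, the mgr persists the freshly gathered inventory into the monitors' config-key store (mgr/cephadm/host.compute-0.devices.0 plus the host record); the .0 suffix suggests cephadm chunks large device blobs across numbered keys. Reading such a key back is an ordinary admin operation; a sketch, assuming a working `ceph` CLI and keyring on the host, and that the stored blob parses as JSON (cephadm writes JSON here):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    devices = json.loads(raw)  # assumption: the cached inventory is JSON
    print(type(devices))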
Oct 02 20:29:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1c9888d4-816c-495b-9d25-f819badf7e29 does not exist
Oct 02 20:29:02 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 91eced43-1df8-4e1c-bb7e-4828c5519ae1 does not exist
Oct 02 20:29:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:02 compute-0 sudo[494197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:29:02 compute-0 sudo[494197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:29:02 compute-0 sudo[494197]: pam_unix(sudo:session): session closed for user root
Oct 02 20:29:02 compute-0 sudo[494222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:29:02 compute-0 sudo[494222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:29:02 compute-0 sudo[494222]: pam_unix(sudo:session): session closed for user root
Oct 02 20:29:03 compute-0 nova_compute[355794]: 2025-10-02 20:29:03.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:29:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:29:03 compute-0 ceph-mon[191910]: pgmap v2534: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:29:03
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Oct 02 20:29:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
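The balancer pass above is a no-op: in upmap mode it walked the listed pools and prepared 0 of a possible 10 changes, meaning the 321 PGs are already evenly mapped. The same state is visible interactively; a sketch, assuming the `ceph` CLI is available:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["active"], status["mode"])  # expect: True, "upmap"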
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.313 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.314 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f343371fce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
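The registration burst above is the standard stevedore pattern: each pollster is an entry-point extension handed to one shared ThreadPoolExecutor. A minimal sketch of that pattern follows; the entry-point namespace is an assumption about this deployment, not something read from the log.

    import concurrent.futures
    from stevedore import extension

    # Sketch of the registration pattern logged above (not ceilometer's
    # own code). The namespace string is an assumption.
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute')
    executor = concurrent.futures.ThreadPoolExecutor()
    for ext in mgr:
        print('registering pollster', ext.name)  # mirrors the DEBUG lines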
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.328 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
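The discovery payload above is a flat dict mixing libvirt facts with Nova metadata. Its flavor block is also where the flavor-based meters polled later in this cycle (disk.root.size, disk.ephemeral.size) take their values, which is why those pollsters emit no per-device volume lines. A minimal sketch, using only values copied from the payload above:

    # Values copied from the discovery payload logged above.
    instance = {
        'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
    }
    root_gb = instance['flavor']['disk']            # disk.root.size -> 1
    ephemeral_gb = instance['flavor']['ephemeral']  # disk.ephemeral.size -> 1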
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:29:04.329196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.408 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.410 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.411 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
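The three volumes above (840, 173, 109) are cumulative read-request counters, one per block device of the instance. A hedged sketch of where such per-device counters come from via libvirt; the connection URI and the device names are assumptions, not read from the log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    for dev in ('vda', 'vdb', 'vdc'):  # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'cumulative read requests:', rd_req)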
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.413 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:29:04.413247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.450 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.451 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.451 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:29:04.453747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.454 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.455 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.456 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.458 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.458 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.459 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.460 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:29:04.458003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.461 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:29:04.461543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.503 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
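The power.state volume of 1 above matches libvirt's domain state encoding, where VIR_DOMAIN_RUNNING == 1 (consistent with the 'OS-EXT-STS:vm_state': 'running' in the discovery payload). A quick check, under the same URI assumption as the earlier sketch:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    state, maxmem, mem, ncpu, cputime = dom.info()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True while 'running'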
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.505 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.506 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:29:04.505219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:29:04.507675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:29:04.518025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
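Rate meters such as network.incoming.bytes.rate are derived from two consecutive cumulative samples, so the agent skips them when discovery hands back no resources for the cycle, as logged above. A generic sketch of the cumulative-to-rate idea (not ceilometer's implementation):

    from datetime import datetime, timedelta

    def to_rate(prev, cur, prev_ts, cur_ts):
        # Units/sec between two polling cycles; clamp counter resets to zero.
        dt = (cur_ts - prev_ts).total_seconds()
        return max(cur - prev, 0) / dt if dt > 0 else 0.0

    t0 = datetime(2025, 10, 2, 20, 28, 4)
    print(to_rate(2000, 2730, t0, t0 + timedelta(seconds=60)))  # ~12.2 B/s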
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.520 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:29:04.520662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.522 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.523 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:29:04.523142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.525 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.525 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:29:04.525667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.526 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.528 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:29:04.528094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:29:04.530145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.530 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:29:04.532842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.533 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.534 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.535 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.536 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:29:04.537190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.539 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.540 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.541 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.542 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.542 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:29:04.541738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.544 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
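The disk.device.allocation, disk.device.usage, and (below) disk.device.capacity volumes repeating 1073741824 and 485376 in this cycle line up with the per-device triple libvirt reports. A sketch, with the device name again assumed:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    capacity, allocation, physical = dom.blockInfo('vda')  # all in bytes
    print(capacity)  # 1073741824 == 1 GiB, matching the flavor's 1 GB disk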
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.548 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:29:04.550259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.551 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
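The memory.usage volume above is fractional (48.8515625) because libvirt accounts guest memory in KiB and the value is converted to MiB: 50024 KiB / 1024 = 48.8515625. A hedged sketch; exactly which memoryStats counter ceilometer prefers can vary, so 'rss' here is an assumption:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77')
    stats = dom.memoryStats()            # dict of counters, values in KiB
    print(stats.get('rss', 0) / 1024.0)  # MiB, e.g. 48.8515625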
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.554 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.556 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.557 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.559 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:29:04.554496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:29:04.556960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.560 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:29:04.559340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.561 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.563 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:29:04.563759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.564 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.566 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.568 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:29:04.566437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.569 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 76060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:29:04.569516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.572 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.573 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.574 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:29:04.572629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:29:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:29:04.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
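[Editor's note] The ceilometer records above repeat one cycle per meter: discovery via local_instances (manager.py:294), a coordination check that is skipped because no hashring is configured (manager.py:333/355), a heartbeat update (manager.py:636), one sample per resource or device from _stats_to_sample, and a closing "Finished processing pollster" summary (manager.py:272). The following is a minimal sketch of that control flow for orientation only; it is not ceilometer's actual implementation, and discover, update_heartbeat, publish, and the pollster objects are hypothetical stand-ins.

    # Simplified per-pollster loop mirroring the log sequence above
    # (illustration only, not ceilometer code).
    def run_polling_task(pollsters, discover, update_heartbeat, publish):
        for pollster in pollsters:
            # "Executing discovery process ..." (manager.py:294)
            resources = discover("local_instances")
            # "Checking if we need coordination ..." (manager.py:333); with no
            # coordination group, each agent simply polls its own resources.
            # "Pollster heartbeat update: <name>" (manager.py:636)
            update_heartbeat(pollster.name)
            # "Polling pollster <name> ..."; one sample per resource/device,
            # logged as "<uuid>/<meter> volume: <value>" (_stats_to_sample).
            for sample in pollster.get_samples(resources):
                publish(sample)
            # "Finished polling pollster <name> ..." then
            # "Finished processing pollster [<name>]." (manager.py:272)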
Oct 02 20:29:04 compute-0 ceph-mon[191910]: pgmap v2535: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:29:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:29:04 compute-0 podman[494249]: 2025-10-02 20:29:04.669937264 +0000 UTC m=+0.084987149 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:29:04 compute-0 podman[494248]: 2025-10-02 20:29:04.688810215 +0000 UTC m=+0.093984613 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 20:29:05 compute-0 nova_compute[355794]: 2025-10-02 20:29:05.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:06.000 285790 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '6e:8a:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ba:85:f4:22:b9:4f'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:29:06 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:06.002 285790 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:29:06 compute-0 nova_compute[355794]: 2025-10-02 20:29:06.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:06 compute-0 ceph-mon[191910]: pgmap v2536: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:07 compute-0 podman[494287]: 2025-10-02 20:29:07.709764869 +0000 UTC m=+0.121175930 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 20:29:07 compute-0 podman[494297]: 2025-10-02 20:29:07.713835235 +0000 UTC m=+0.097953247 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:29:07 compute-0 podman[494288]: 2025-10-02 20:29:07.720684993 +0000 UTC m=+0.124387954 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:29:07 compute-0 podman[494289]: 2025-10-02 20:29:07.741005811 +0000 UTC m=+0.139338882 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:29:07 compute-0 podman[494290]: 2025-10-02 20:29:07.786997616 +0000 UTC m=+0.171194600 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:29:08 compute-0 nova_compute[355794]: 2025-10-02 20:29:08.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:09 compute-0 ceph-mon[191910]: pgmap v2537: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:10 compute-0 nova_compute[355794]: 2025-10-02 20:29:10.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:10 compute-0 ceph-mon[191910]: pgmap v2538: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:13 compute-0 ceph-mon[191910]: pgmap v2539: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:13 compute-0 nova_compute[355794]: 2025-10-02 20:29:13.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:29:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
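[Editor's note] Each pg_autoscaler line applies the same arithmetic: pg_target = capacity_ratio * bias * (target PG count for the root), then quantized to a power of two. The logged values are all consistent with a target of 300 PGs; that factor is an inference from the ratios (for example mon_target_pg_per_osd=100 across 3 OSDs), since neither number appears in the log. A quick check against the figures above:

    # Reproduce the pg_autoscaler targets logged above. TARGET_PGS = 300 is
    # inferred from the logged ratios, not printed in the log itself.
    TARGET_PGS = 300

    pools = {  # name: (capacity_ratio "using ... of space", bias)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0005513950275118838, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * TARGET_PGS)
    # Matches the log: 0.0021557249..., 0.1654185082..., 0.2757420272...,
    # and 0.0006104707... for cephfs.cephfs.meta (bias 4.0).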
Oct 02 20:29:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:14 compute-0 ceph-mon[191910]: pgmap v2540: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:15 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:15.005 285790 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1de2af17-a89c-45e5-97c6-db433f26bbb6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:29:15 compute-0 nova_compute[355794]: 2025-10-02 20:29:15.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:16 compute-0 ceph-mon[191910]: pgmap v2541: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:18 compute-0 nova_compute[355794]: 2025-10-02 20:29:18.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:18 compute-0 ceph-mon[191910]: pgmap v2542: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:18 compute-0 podman[494390]: 2025-10-02 20:29:18.726265229 +0000 UTC m=+0.142704129 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:29:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:29:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842940920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:29:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:29:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842940920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:29:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1842940920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:29:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1842940920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:29:20 compute-0 nova_compute[355794]: 2025-10-02 20:29:20.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:21 compute-0 ceph-mon[191910]: pgmap v2543: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:22 compute-0 ceph-mon[191910]: pgmap v2544: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:23 compute-0 nova_compute[355794]: 2025-10-02 20:29:23.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:24 compute-0 ceph-mon[191910]: pgmap v2545: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:25 compute-0 nova_compute[355794]: 2025-10-02 20:29:25.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:26 compute-0 nova_compute[355794]: 2025-10-02 20:29:26.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:26 compute-0 nova_compute[355794]: 2025-10-02 20:29:26.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:29:26 compute-0 nova_compute[355794]: 2025-10-02 20:29:26.576 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:29:26 compute-0 ceph-mon[191910]: pgmap v2546: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:27 compute-0 nova_compute[355794]: 2025-10-02 20:29:27.595 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:29:27 compute-0 nova_compute[355794]: 2025-10-02 20:29:27.595 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:29:27 compute-0 nova_compute[355794]: 2025-10-02 20:29:27.595 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:29:27 compute-0 nova_compute[355794]: 2025-10-02 20:29:27.596 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:29:28 compute-0 nova_compute[355794]: 2025-10-02 20:29:28.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:28 compute-0 ceph-mon[191910]: pgmap v2547: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:28 compute-0 podman[494412]: 2025-10-02 20:29:28.691183773 +0000 UTC m=+0.114847156 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 20:29:28 compute-0 podman[494411]: 2025-10-02 20:29:28.738476172 +0000 UTC m=+0.150740909 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:29:29 compute-0 podman[157186]: time="2025-10-02T20:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:29:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:29:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9118 "" "Go-http-client/1.1"
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.851 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.867 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.868 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.869 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.870 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.871 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.873 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.899 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.900 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.901 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.901 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:29:29 compute-0 nova_compute[355794]: 2025-10-02 20:29:29.902 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:29:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:29:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008003479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:29:30 compute-0 nova_compute[355794]: 2025-10-02 20:29:30.434 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:29:30 compute-0 nova_compute[355794]: 2025-10-02 20:29:30.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1008003479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:29:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:30 compute-0 nova_compute[355794]: 2025-10-02 20:29:30.547 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:29:30 compute-0 nova_compute[355794]: 2025-10-02 20:29:30.549 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:29:30 compute-0 nova_compute[355794]: 2025-10-02 20:29:30.550 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:29:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.012 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.014 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3627MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.014 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.015 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.105 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.106 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.106 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.157 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:29:31 compute-0 openstack_network_exporter[372736]: ERROR   20:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:29:31 compute-0 openstack_network_exporter[372736]: ERROR   20:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:29:31 compute-0 openstack_network_exporter[372736]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:29:31 compute-0 openstack_network_exporter[372736]: ERROR   20:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:29:31 compute-0 openstack_network_exporter[372736]: ERROR   20:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:29:31 compute-0 ceph-mon[191910]: pgmap v2548: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:29:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801532374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.658 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.671 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.706 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.709 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:29:31 compute-0 nova_compute[355794]: 2025-10-02 20:29:31.711 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:32.351 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:32.352 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:29:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:29:32.353 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:29:32 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/801532374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:29:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:33 compute-0 nova_compute[355794]: 2025-10-02 20:29:33.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:33 compute-0 nova_compute[355794]: 2025-10-02 20:29:33.417 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:33 compute-0 nova_compute[355794]: 2025-10-02 20:29:33.419 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:33 compute-0 nova_compute[355794]: 2025-10-02 20:29:33.419 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:33 compute-0 nova_compute[355794]: 2025-10-02 20:29:33.419 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:33 compute-0 ceph-mon[191910]: pgmap v2549: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:29:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:29:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:34 compute-0 ceph-mon[191910]: pgmap v2550: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:35 compute-0 nova_compute[355794]: 2025-10-02 20:29:35.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:35 compute-0 nova_compute[355794]: 2025-10-02 20:29:35.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:29:35 compute-0 podman[494497]: 2025-10-02 20:29:35.71279398 +0000 UTC m=+0.129544648 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 20:29:35 compute-0 podman[494498]: 2025-10-02 20:29:35.728514108 +0000 UTC m=+0.139070585 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release=1214.1726694543, config_id=edpm, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 20:29:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:36 compute-0 ceph-mon[191910]: pgmap v2551: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:38 compute-0 nova_compute[355794]: 2025-10-02 20:29:38.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:38 compute-0 podman[494537]: 2025-10-02 20:29:38.70204201 +0000 UTC m=+0.106717115 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 20:29:38 compute-0 podman[494545]: 2025-10-02 20:29:38.702892482 +0000 UTC m=+0.088762328 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:29:38 compute-0 podman[494536]: 2025-10-02 20:29:38.716187547 +0000 UTC m=+0.137059723 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:29:38 compute-0 podman[494538]: 2025-10-02 20:29:38.720859449 +0000 UTC m=+0.118570253 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Oct 02 20:29:38 compute-0 podman[494544]: 2025-10-02 20:29:38.759329298 +0000 UTC m=+0.156848227 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 20:29:39 compute-0 ceph-mon[191910]: pgmap v2552: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:40 compute-0 nova_compute[355794]: 2025-10-02 20:29:40.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:40 compute-0 ceph-mon[191910]: pgmap v2553: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:42 compute-0 ceph-mon[191910]: pgmap v2554: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:43 compute-0 nova_compute[355794]: 2025-10-02 20:29:43.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:44 compute-0 ceph-mon[191910]: pgmap v2555: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:45 compute-0 nova_compute[355794]: 2025-10-02 20:29:45.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:46 compute-0 ceph-mon[191910]: pgmap v2556: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:48 compute-0 nova_compute[355794]: 2025-10-02 20:29:48.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:48 compute-0 ceph-mon[191910]: pgmap v2557: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:49 compute-0 podman[494636]: 2025-10-02 20:29:49.768868178 +0000 UTC m=+0.189118496 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:29:50 compute-0 nova_compute[355794]: 2025-10-02 20:29:50.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:50 compute-0 ceph-mon[191910]: pgmap v2558: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:52 compute-0 ceph-mon[191910]: pgmap v2559: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:53 compute-0 nova_compute[355794]: 2025-10-02 20:29:53.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:54 compute-0 ceph-mon[191910]: pgmap v2560: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:55 compute-0 nova_compute[355794]: 2025-10-02 20:29:55.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:29:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:56 compute-0 ceph-mon[191910]: pgmap v2561: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:58 compute-0 nova_compute[355794]: 2025-10-02 20:29:58.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:29:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:58 compute-0 ceph-mon[191910]: pgmap v2562: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:29:59 compute-0 podman[494655]: 2025-10-02 20:29:59.653642599 +0000 UTC m=+0.066968452 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:29:59 compute-0 podman[494654]: 2025-10-02 20:29:59.657070738 +0000 UTC m=+0.078373478 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:29:59 compute-0 podman[157186]: time="2025-10-02T20:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:29:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:29:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
Oct 02 20:30:00 compute-0 nova_compute[355794]: 2025-10-02 20:30:00.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:00 compute-0 ceph-mon[191910]: pgmap v2563: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:01 compute-0 openstack_network_exporter[372736]: ERROR   20:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:30:01 compute-0 openstack_network_exporter[372736]: ERROR   20:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:30:01 compute-0 openstack_network_exporter[372736]: ERROR   20:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:30:01 compute-0 openstack_network_exporter[372736]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:30:01 compute-0 openstack_network_exporter[372736]: ERROR   20:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:30:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:02 compute-0 ceph-mon[191910]: pgmap v2564: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:02 compute-0 sudo[494698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:02 compute-0 sudo[494698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:02 compute-0 sudo[494698]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:02 compute-0 sudo[494723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:30:02 compute-0 sudo[494723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:02 compute-0 sudo[494723]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:03 compute-0 sudo[494748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:03 compute-0 sudo[494748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:03 compute-0 sudo[494748]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:03 compute-0 sudo[494773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:30:03 compute-0 sudo[494773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:03 compute-0 nova_compute[355794]: 2025-10-02 20:30:03.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:30:03
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'images', 'backups', '.rgw.root']
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:30:03 compute-0 sudo[494773]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 1eaa48ea-4e6a-4bb9-b152-7e63874f540e does not exist
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev ca631607-88d9-4326-afd9-644e80e73782 does not exist
Oct 02 20:30:03 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 32801ecb-e597-4ab5-8605-7af54839a43c does not exist
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:30:03 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:30:03 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:30:04 compute-0 sudo[494828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:04 compute-0 sudo[494828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:04 compute-0 sudo[494828]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:04 compute-0 sudo[494853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:30:04 compute-0 sudo[494853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:04 compute-0 sudo[494853]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:04 compute-0 sudo[494878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:04 compute-0 sudo[494878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:04 compute-0 sudo[494878]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:04 compute-0 sudo[494903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:30:04 compute-0 sudo[494903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
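[annotation] The sudo pattern above (`/bin/true`, then `which python3`, then a long `python3 /var/lib/ceph/<fsid>/cephadm.<digest> ...` command) is cephadm's remote-execution footprint: the mgr connects over SSH as ceph-admin, probes that passwordless sudo and a Python interpreter are available, and only then runs the copied cephadm binary, which wraps ceph-volume in a container. A rough sketch of the probe step, with hypothetical host and user names and assuming passwordless sudo as cephadm requires:

    import subprocess

    HOST = "compute-0"    # hypothetical: any cephadm-managed host
    USER = "ceph-admin"

    def probe(cmd):
        # Mirror cephadm's pre-flight probes over SSH with sudo escalation.
        return subprocess.run(["ssh", f"{USER}@{HOST}", "sudo", *cmd],
                              check=True, capture_output=True,
                              text=True).stdout.strip()

    probe(["/bin/true"])                         # can we escalate at all?
    python3 = probe(["/bin/which", "python3"])   # interpreter for cephadm
    print("remote python3:", python3)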
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:30:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:30:04 compute-0 ceph-mon[191910]: pgmap v2565: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.036155742 +0000 UTC m=+0.059207410 container create 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 02 20:30:05 compute-0 systemd[1]: Started libpod-conmon-27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec.scope.
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.0133682 +0000 UTC m=+0.036419878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.186082198 +0000 UTC m=+0.209133936 container init 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.205484012 +0000 UTC m=+0.228535710 container start 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.212021732 +0000 UTC m=+0.235073440 container attach 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:30:05 compute-0 trusting_clarke[494983]: 167 167
Oct 02 20:30:05 compute-0 systemd[1]: libpod-27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec.scope: Deactivated successfully.
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.218553992 +0000 UTC m=+0.241605660 container died 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 20:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1bacff8d6c636465c648e46fc9116377c80a700884fcdeca16d02a88b11ff29-merged.mount: Deactivated successfully.
Oct 02 20:30:05 compute-0 podman[494966]: 2025-10-02 20:30:05.306454186 +0000 UTC m=+0.329505874 container remove 27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 02 20:30:05 compute-0 systemd[1]: libpod-conmon-27a1d33a22cafb143e1b6f73ed0e7ce816113cfe517673f912df485b923308ec.scope: Deactivated successfully.
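[annotation] The create → init → start → attach → died → remove sequence for trusting_clarke is the footprint of a one-shot container (the `podman run --rm` pattern) that exits as soon as its entrypoint finishes. The `167 167` it printed is consistent with cephadm's uid/gid probe of the ceph user inside the image, which it records before launching ceph-volume; a sketch of an equivalent probe, assuming `/var/lib/ceph` in the image is owned by the ceph user:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot probe container, removed as soon as its entrypoint exits --
    # the same create/start/attach/died/remove footprint as trusting_clarke.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip()
    print(out)  # "167 167" on this image, matching the log (assumption)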
Oct 02 20:30:05 compute-0 nova_compute[355794]: 2025-10-02 20:30:05.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:05 compute-0 podman[495005]: 2025-10-02 20:30:05.620001154 +0000 UTC m=+0.109645610 container create 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:30:05 compute-0 podman[495005]: 2025-10-02 20:30:05.565834256 +0000 UTC m=+0.055478712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:05 compute-0 systemd[1]: Started libpod-conmon-2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b.scope.
Oct 02 20:30:05 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:05 compute-0 podman[495005]: 2025-10-02 20:30:05.786742787 +0000 UTC m=+0.276387233 container init 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 20:30:05 compute-0 podman[495005]: 2025-10-02 20:30:05.797269471 +0000 UTC m=+0.286913897 container start 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 02 20:30:05 compute-0 podman[495005]: 2025-10-02 20:30:05.803354439 +0000 UTC m=+0.292998875 container attach 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 20:30:05 compute-0 podman[495024]: 2025-10-02 20:30:05.890949865 +0000 UTC m=+0.122436483 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct 02 20:30:05 compute-0 podman[495025]: 2025-10-02 20:30:05.911962171 +0000 UTC m=+0.130075931 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=kepler, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
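[annotation] The container health_status entries here (and again further down for node_exporter, iscsid, ovn_metadata_agent, and ovn_controller) are podman's periodic healthchecks firing for the edpm_ansible-managed containers; per the embedded config_data, each check is a script bind-mounted at /openstack/healthcheck. The same check can be invoked on demand; a sketch using container names taken from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured check
    # (here the /openstack/healthcheck script named in config_data above);
    # exit status 0 means healthy.
    for name in ("ceilometer_agent_ipmi", "kepler"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")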
Oct 02 20:30:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:06 compute-0 ceph-mon[191910]: pgmap v2566: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:07 compute-0 infallible_sinoussi[495021]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:30:07 compute-0 infallible_sinoussi[495021]: --> relative data size: 1.0
Oct 02 20:30:07 compute-0 infallible_sinoussi[495021]: --> All data devices are unavailable
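[annotation] "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is ceph-volume reporting that the three LVs named in the batch are already consumed by existing OSDs, so this re-run is an idempotent no-op rather than a failure; that reading is consistent with the `lvm list` output below, which shows osd.0 through osd.2 already on those LVs. The real `--report` flag previews the same decision without touching anything; a sketch with the LV paths from the log:

    import subprocess

    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]

    # --report previews what the batch *would* do without creating anything;
    # on LVs that already carry OSDs it reports there is nothing to do.
    subprocess.run(["ceph-volume", "lvm", "batch", "--no-auto", *LVS,
                    "--report", "--format", "json"], check=True)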
Oct 02 20:30:07 compute-0 systemd[1]: libpod-2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b.scope: Deactivated successfully.
Oct 02 20:30:07 compute-0 systemd[1]: libpod-2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b.scope: Consumed 1.201s CPU time.
Oct 02 20:30:07 compute-0 podman[495005]: 2025-10-02 20:30:07.065363573 +0000 UTC m=+1.555008009 container died 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:30:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5e7a845c5968c481bc835bebce1f1f581eec7b0fe32f158dff51f5e1d46b76-merged.mount: Deactivated successfully.
Oct 02 20:30:07 compute-0 podman[495005]: 2025-10-02 20:30:07.242317242 +0000 UTC m=+1.731961698 container remove 2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 02 20:30:07 compute-0 systemd[1]: libpod-conmon-2a8ad7c90fffc401831b650274c92614c8146c0d71a4f061dbb23e0b3071553b.scope: Deactivated successfully.
Oct 02 20:30:07 compute-0 sudo[494903]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:07 compute-0 sudo[495103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:07 compute-0 sudo[495103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:07 compute-0 sudo[495103]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:07 compute-0 sudo[495128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:30:07 compute-0 sudo[495128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:07 compute-0 sudo[495128]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:07 compute-0 sudo[495153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:07 compute-0 sudo[495153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:07 compute-0 sudo[495153]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:07 compute-0 sudo[495178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:30:07 compute-0 sudo[495178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:08 compute-0 nova_compute[355794]: 2025-10-02 20:30:08.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.424268107 +0000 UTC m=+0.089671561 container create 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.390055678 +0000 UTC m=+0.055459182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:08 compute-0 systemd[1]: Started libpod-conmon-357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5.scope.
Oct 02 20:30:08 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.576666437 +0000 UTC m=+0.242069921 container init 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.589936342 +0000 UTC m=+0.255339796 container start 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 02 20:30:08 compute-0 optimistic_matsumoto[495256]: 167 167
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.599211193 +0000 UTC m=+0.264614697 container attach 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:30:08 compute-0 systemd[1]: libpod-357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5.scope: Deactivated successfully.
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.601256766 +0000 UTC m=+0.266660230 container died 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 02 20:30:08 compute-0 ceph-mon[191910]: pgmap v2567: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-843fc132c68e33ad1240bf5bf0bfb8323ac02cd30acb46d4024fc6333047ca73-merged.mount: Deactivated successfully.
Oct 02 20:30:08 compute-0 podman[495240]: 2025-10-02 20:30:08.713041431 +0000 UTC m=+0.378444875 container remove 357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:30:08 compute-0 systemd[1]: libpod-conmon-357fe4f195dba49fdbdb628694181680687aa06d4c19b89662b2bab1a12698e5.scope: Deactivated successfully.
Oct 02 20:30:08 compute-0 podman[495277]: 2025-10-02 20:30:08.866962821 +0000 UTC m=+0.094523867 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:30:08 compute-0 podman[495276]: 2025-10-02 20:30:08.87692206 +0000 UTC m=+0.097776632 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64)
Oct 02 20:30:08 compute-0 podman[495275]: 2025-10-02 20:30:08.882778652 +0000 UTC m=+0.112970076 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:30:08 compute-0 podman[495274]: 2025-10-02 20:30:08.897124115 +0000 UTC m=+0.126883898 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 02 20:30:08 compute-0 podman[495359]: 2025-10-02 20:30:08.977903884 +0000 UTC m=+0.066150590 container create f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 02 20:30:09 compute-0 podman[495351]: 2025-10-02 20:30:09.026022935 +0000 UTC m=+0.122350761 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 20:30:09 compute-0 systemd[1]: Started libpod-conmon-f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8.scope.
Oct 02 20:30:09 compute-0 podman[495359]: 2025-10-02 20:30:08.954174928 +0000 UTC m=+0.042421664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:09 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d72231a910c65cd0c2ac81b429aa1e4363f7d325f2105d25f4e2b960887028/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d72231a910c65cd0c2ac81b429aa1e4363f7d325f2105d25f4e2b960887028/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d72231a910c65cd0c2ac81b429aa1e4363f7d325f2105d25f4e2b960887028/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2d72231a910c65cd0c2ac81b429aa1e4363f7d325f2105d25f4e2b960887028/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:09 compute-0 podman[495359]: 2025-10-02 20:30:09.118557079 +0000 UTC m=+0.206803815 container init f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 20:30:09 compute-0 podman[495359]: 2025-10-02 20:30:09.140543901 +0000 UTC m=+0.228790617 container start f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:30:09 compute-0 podman[495359]: 2025-10-02 20:30:09.149067512 +0000 UTC m=+0.237314218 container attach f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:30:09 compute-0 sad_bassi[495401]: {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     "0": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "devices": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "/dev/loop3"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             ],
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_name": "ceph_lv0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_size": "21470642176",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "name": "ceph_lv0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "tags": {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_name": "ceph",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.crush_device_class": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.encrypted": "0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_id": "0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.vdo": "0"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             },
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "vg_name": "ceph_vg0"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         }
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     ],
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     "1": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "devices": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "/dev/loop4"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             ],
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_name": "ceph_lv1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_size": "21470642176",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "name": "ceph_lv1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "tags": {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_name": "ceph",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.crush_device_class": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.encrypted": "0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_id": "1",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.vdo": "0"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             },
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "vg_name": "ceph_vg1"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         }
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     ],
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     "2": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "devices": [
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "/dev/loop5"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             ],
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_name": "ceph_lv2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_size": "21470642176",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "name": "ceph_lv2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "tags": {
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.cluster_name": "ceph",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.crush_device_class": "",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.encrypted": "0",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osd_id": "2",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:                 "ceph.vdo": "0"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             },
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "type": "block",
Oct 02 20:30:09 compute-0 sad_bassi[495401]:             "vg_name": "ceph_vg2"
Oct 02 20:30:09 compute-0 sad_bassi[495401]:         }
Oct 02 20:30:09 compute-0 sad_bassi[495401]:     ]
Oct 02 20:30:09 compute-0 sad_bassi[495401]: }
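[annotation] The JSON emitted by sad_bassi is the output of `ceph-volume lvm list --format json` requested at 20:30:07: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags given both flat (lv_tags) and parsed (tags). A sketch that reduces the report to the essentials, run on the host where the LVs live:

    import json
    import subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    report = json.loads(out)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']}"
                  f"  fsid={tags['ceph.osd_fsid']}"
                  f"  devices={','.join(lv['devices'])}")

With the data above this prints three lines, e.g. "osd.0: /dev/ceph_vg0/ceph_lv0  fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48  devices=/dev/loop3".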
Oct 02 20:30:09 compute-0 systemd[1]: libpod-f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8.scope: Deactivated successfully.
Oct 02 20:30:09 compute-0 podman[495359]: 2025-10-02 20:30:09.951894055 +0000 UTC m=+1.040140811 container died f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2d72231a910c65cd0c2ac81b429aa1e4363f7d325f2105d25f4e2b960887028-merged.mount: Deactivated successfully.
Oct 02 20:30:10 compute-0 podman[495359]: 2025-10-02 20:30:10.413783807 +0000 UTC m=+1.502030553 container remove f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:30:10 compute-0 systemd[1]: libpod-conmon-f75ec3823f7c2328b6b44978062e4b2dab0c45c313f3e00cafd8ec7f368e67d8.scope: Deactivated successfully.
Oct 02 20:30:10 compute-0 nova_compute[355794]: 2025-10-02 20:30:10.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:10 compute-0 sudo[495178]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:10 compute-0 sudo[495423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:10 compute-0 sudo[495423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:10 compute-0 sudo[495423]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:10 compute-0 ceph-mon[191910]: pgmap v2568: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:10 compute-0 sudo[495448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:30:10 compute-0 sudo[495448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:10 compute-0 sudo[495448]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:10 compute-0 sudo[495473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:10 compute-0 sudo[495473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:10 compute-0 sudo[495473]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:11 compute-0 sudo[495498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:30:11 compute-0 sudo[495498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.562708534 +0000 UTC m=+0.051237362 container create 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.544184723 +0000 UTC m=+0.032713581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:11 compute-0 systemd[1]: Started libpod-conmon-875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af.scope.
Oct 02 20:30:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.733211365 +0000 UTC m=+0.221740283 container init 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.754429996 +0000 UTC m=+0.242958824 container start 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 02 20:30:11 compute-0 stoic_cartwright[495579]: 167 167
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.764359254 +0000 UTC m=+0.252888172 container attach 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 20:30:11 compute-0 systemd[1]: libpod-875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af.scope: Deactivated successfully.
Oct 02 20:30:11 compute-0 podman[495563]: 2025-10-02 20:30:11.765817572 +0000 UTC m=+0.254346420 container died 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 20:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-64a0d9124f776d8065b2eca910a901f4745c8cb11f799c7101f9dd7ed3c52c7c-merged.mount: Deactivated successfully.
Oct 02 20:30:12 compute-0 podman[495563]: 2025-10-02 20:30:12.016788494 +0000 UTC m=+0.505317332 container remove 875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:30:12 compute-0 systemd[1]: libpod-conmon-875e1f75ed63ed9295da84bc20b2977c5da65412c13b3329d2aaa9482461f5af.scope: Deactivated successfully.
Oct 02 20:30:12 compute-0 podman[495601]: 2025-10-02 20:30:12.26213306 +0000 UTC m=+0.046507770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:30:12 compute-0 podman[495601]: 2025-10-02 20:30:12.416013959 +0000 UTC m=+0.200388669 container create 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:30:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:12 compute-0 systemd[1]: Started libpod-conmon-3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b.scope.
Oct 02 20:30:12 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc082a58092312642245f9dd1d396012505e8599ea3ffa3594cfb79e8fdabaad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc082a58092312642245f9dd1d396012505e8599ea3ffa3594cfb79e8fdabaad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc082a58092312642245f9dd1d396012505e8599ea3ffa3594cfb79e8fdabaad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc082a58092312642245f9dd1d396012505e8599ea3ffa3594cfb79e8fdabaad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
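The 0x7fffffff in the four xfs messages above is the largest 32-bit signed Unix timestamp, which is the limit the kernel is warning about for these mounts; converting it gives the January 2038 cutoff:

    from datetime import datetime, timezone
    limit = 0x7FFFFFFF                      # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00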
Oct 02 20:30:12 compute-0 podman[495601]: 2025-10-02 20:30:12.808999141 +0000 UTC m=+0.593373841 container init 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 02 20:30:12 compute-0 podman[495601]: 2025-10-02 20:30:12.828320233 +0000 UTC m=+0.612694913 container start 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:30:12 compute-0 ceph-mon[191910]: pgmap v2569: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:12 compute-0 podman[495601]: 2025-10-02 20:30:12.918186049 +0000 UTC m=+0.702560749 container attach 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 02 20:30:13 compute-0 nova_compute[355794]: 2025-10-02 20:30:13.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:30:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
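The pg_autoscaler lines above are internally consistent: dividing any logged "pg target" by the pool's usage ratio and bias gives exactly 300, which presumably corresponds to 3 OSDs times the default mon_target_pg_per_osd of 100 (that reading is inferred from the numbers themselves, not taken from the autoscaler source). A quick reproduction:

    # usage_ratio and bias as logged by ceph-mgr for three of the pools
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0005513950275118838, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        # matches the logged "pg target" to float precision
        print(name, usage * bias * 300)
    # .mgr               ~0.0021557249951162337 (quantized to 1)
    # vms                ~0.16541850825356513   (quantized to 32)
    # cephfs.cephfs.meta ~0.0006104707950771635 (quantized to 16)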
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]: {
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_id": 1,
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "type": "bluestore"
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     },
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_id": 2,
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "type": "bluestore"
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     },
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_id": 0,
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:         "type": "bluestore"
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]:     }
Oct 02 20:30:13 compute-0 relaxed_lichterman[495618]: }
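The JSON that relaxed_lichterman printed above is the result of the "ceph-volume ... raw list --format json" run started at 20:30:11: a map from osd_uuid to the backing device, osd_id, and cluster fsid. A sketch of how a consumer might index it by osd_id (illustrative, not cephadm's actual code; the literal below is trimmed to one entry):

    import json

    raw_list_output = """{
      "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
        "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
        "type": "bluestore"}}"""

    by_osd_id = {rec["osd_id"]: rec["device"]
                 for rec in json.loads(raw_list_output).values()}
    assert by_osd_id == {1: "/dev/mapper/ceph_vg1-ceph_lv1"}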
Oct 02 20:30:13 compute-0 systemd[1]: libpod-3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b.scope: Deactivated successfully.
Oct 02 20:30:13 compute-0 systemd[1]: libpod-3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b.scope: Consumed 1.096s CPU time.
Oct 02 20:30:13 compute-0 podman[495601]: 2025-10-02 20:30:13.93646827 +0000 UTC m=+1.720843010 container died 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 20:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc082a58092312642245f9dd1d396012505e8599ea3ffa3594cfb79e8fdabaad-merged.mount: Deactivated successfully.
Oct 02 20:30:14 compute-0 podman[495601]: 2025-10-02 20:30:14.038968923 +0000 UTC m=+1.823343593 container remove 3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:30:14 compute-0 systemd[1]: libpod-conmon-3ebcbd7e9aefb83fcbed21f95bea2a303a2e1e9c22f40ff27fa001bd7a83630b.scope: Deactivated successfully.
Oct 02 20:30:14 compute-0 sudo[495498]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:30:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:30:14 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 7891d88b-73af-4346-a6f6-ac2bcc4fc842 does not exist
Oct 02 20:30:14 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 2fe7e090-c6c7-467d-9487-69459eaf6b46 does not exist
Oct 02 20:30:14 compute-0 sudo[495662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:30:14 compute-0 sudo[495662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:14 compute-0 sudo[495662]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:14 compute-0 sudo[495687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:30:14 compute-0 sudo[495687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:30:14 compute-0 sudo[495687]: pam_unix(sudo:session): session closed for user root
Oct 02 20:30:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:15 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:30:15 compute-0 ceph-mon[191910]: pgmap v2570: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:15 compute-0 nova_compute[355794]: 2025-10-02 20:30:15.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:16 compute-0 ceph-mon[191910]: pgmap v2571: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:30:17 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3214 syncs, 3.49 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 461 writes, 1111 keys, 461 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                            Interval WAL: 461 writes, 217 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
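The "writes per sync" figure in the WAL lines of these dumps is simply cumulative WAL writes divided by syncs; with writes rounded to "11K" in the dump, an exact count near 11227 reproduces the 3.49 (the precise count is an assumption, since the dump truncates it):

    writes, syncs = 11_227, 3_214    # writes assumed; the dump only shows "11K"
    print(round(writes / syncs, 2))  # 3.49, as in "3.49 writes per sync"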
Oct 02 20:30:18 compute-0 nova_compute[355794]: 2025-10-02 20:30:18.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:18 compute-0 ceph-mon[191910]: pgmap v2572: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:30:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942995551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:30:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:30:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942995551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:30:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3942995551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:30:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/3942995551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:30:20 compute-0 nova_compute[355794]: 2025-10-02 20:30:20.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:20 compute-0 podman[495713]: 2025-10-02 20:30:20.709722214 +0000 UTC m=+0.120044261 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
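The health_status=healthy event above comes from podman running the container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data). The same status can be read back from the container state; a small wrapper sketch, assuming a local podman CLI and the container name taken from the event:

    import json, subprocess

    def health_status(name: str) -> str:
        """Return the podman healthcheck status ('healthy', 'unhealthy', ...)."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["Status"]

    print(health_status("multipathd"))   # e.g. "healthy"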
Oct 02 20:30:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:21 compute-0 ceph-mon[191910]: pgmap v2573: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:23 compute-0 ceph-mon[191910]: pgmap v2574: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:23 compute-0 nova_compute[355794]: 2025-10-02 20:30:23.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:30:23 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3206 syncs, 3.58 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 470 writes, 1409 keys, 470 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                            Interval WAL: 470 writes, 213 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:30:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:24 compute-0 ceph-mon[191910]: pgmap v2575: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:25 compute-0 nova_compute[355794]: 2025-10-02 20:30:25.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:26 compute-0 ceph-mon[191910]: pgmap v2576: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:27 compute-0 nova_compute[355794]: 2025-10-02 20:30:27.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:27 compute-0 nova_compute[355794]: 2025-10-02 20:30:27.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:30:27 compute-0 nova_compute[355794]: 2025-10-02 20:30:27.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:30:28 compute-0 nova_compute[355794]: 2025-10-02 20:30:28.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:28 compute-0 nova_compute[355794]: 2025-10-02 20:30:28.481 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:30:28 compute-0 nova_compute[355794]: 2025-10-02 20:30:28.482 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:30:28 compute-0 nova_compute[355794]: 2025-10-02 20:30:28.483 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:30:28 compute-0 nova_compute[355794]: 2025-10-02 20:30:28.484 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:30:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:28 compute-0 ceph-mon[191910]: pgmap v2577: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:29 compute-0 podman[157186]: time="2025-10-02T20:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:30:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:30:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9119 "" "Go-http-client/1.1"
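The two GET requests podman[157186] logged above are libpod REST calls arriving over the podman API socket (the podman_exporter config logged a few lines below mounts /run/podman/podman.sock, so that socket path is a reasonable assumption here). The same endpoint can be queried with nothing but the standard library:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")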
Oct 02 20:30:30 compute-0 nova_compute[355794]: 2025-10-02 20:30:30.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:30:30 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 9495 writes, 36K keys, 9495 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9495 writes, 2461 syncs, 3.86 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 432 writes, 1184 keys, 432 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                            Interval WAL: 432 writes, 196 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:30:30 compute-0 ceph-mon[191910]: pgmap v2578: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:30 compute-0 podman[495731]: 2025-10-02 20:30:30.701323881 +0000 UTC m=+0.119115577 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:30:30 compute-0 podman[495732]: 2025-10-02 20:30:30.736989918 +0000 UTC m=+0.151740815 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 20:30:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.194 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.230 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.230 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
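The network_info blob nova logged for instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 nests fixed IPs under each subnet, with floating IPs attached to the fixed IP they map to (192.168.0.37 carries floating 192.168.122.205 above). A sketch of walking that structure, trimmed to the relevant keys:

    network_info = [{                      # shape as in the log, heavily trimmed
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.37", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.205",
                                       "type": "floating"}]}]}]}}]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)
    # 192.168.0.37 -> ['192.168.122.205']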
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.232 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.233 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.233 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.234 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.268 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.269 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.270 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.270 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.271 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: ERROR   20:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: ERROR   20:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: ERROR   20:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: ERROR   20:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:30:31 compute-0 openstack_network_exporter[372736]: 
Oct 02 20:30:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:30:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696620437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:30:31 compute-0 ceph-mgr[192222]: [devicehealth INFO root] Check health
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.749 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:30:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2696620437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.849 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.849 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:30:31 compute-0 nova_compute[355794]: 2025-10-02 20:30:31.849 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:30:32.352 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:30:32.352 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:30:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:30:32.353 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:30:32 compute-0 nova_compute[355794]: 2025-10-02 20:30:32.465 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:30:32 compute-0 nova_compute[355794]: 2025-10-02 20:30:32.467 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3633MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:30:32 compute-0 nova_compute[355794]: 2025-10-02 20:30:32.468 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:30:32 compute-0 nova_compute[355794]: 2025-10-02 20:30:32.469 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:30:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:33 compute-0 nova_compute[355794]: 2025-10-02 20:30:33.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:33 compute-0 nova_compute[355794]: 2025-10-02 20:30:33.454 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:30:33 compute-0 nova_compute[355794]: 2025-10-02 20:30:33.455 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:30:33 compute-0 nova_compute[355794]: 2025-10-02 20:30:33.455 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
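The final resource view squares with the rest of this audit pass: the single instance holds {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} in placement (logged just above), and used_ram folds in the 512 MB host reservation visible in the inventory a few lines below (that used_ram includes the reservation is how the tracker accounts it, stated here as a cross-check):

    reserved_host_mb, instance_mb = 512, 512
    print(reserved_host_mb + instance_mb)  # 1024 -> used_ram=1024MB
    print(8 - 1)                           # 7    -> free_vcpus=7 reported earlier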
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:30:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:30:33 compute-0 ceph-mon[191910]: pgmap v2579: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:34 compute-0 nova_compute[355794]: 2025-10-02 20:30:34.731 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:30:35 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:30:35 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803060970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.224 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.237 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.263 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.267 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.268 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:30:35 compute-0 nova_compute[355794]: 2025-10-02 20:30:35.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:36 compute-0 ceph-mon[191910]: pgmap v2580: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:36 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1803060970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:30:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:36 compute-0 nova_compute[355794]: 2025-10-02 20:30:36.610 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:36 compute-0 nova_compute[355794]: 2025-10-02 20:30:36.611 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:36 compute-0 nova_compute[355794]: 2025-10-02 20:30:36.611 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:36 compute-0 nova_compute[355794]: 2025-10-02 20:30:36.611 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:36 compute-0 nova_compute[355794]: 2025-10-02 20:30:36.612 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:36 compute-0 podman[495815]: 2025-10-02 20:30:36.733648919 +0000 UTC m=+0.160670156 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 20:30:36 compute-0 podman[495816]: 2025-10-02 20:30:36.742613412 +0000 UTC m=+0.153106169 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:30:37 compute-0 nova_compute[355794]: 2025-10-02 20:30:37.570 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:30:38 compute-0 ceph-mon[191910]: pgmap v2581: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:38 compute-0 nova_compute[355794]: 2025-10-02 20:30:38.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:39 compute-0 podman[495855]: 2025-10-02 20:30:39.661545885 +0000 UTC m=+0.087853294 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:30:39 compute-0 podman[495856]: 2025-10-02 20:30:39.690621461 +0000 UTC m=+0.100912084 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 20:30:39 compute-0 podman[495854]: 2025-10-02 20:30:39.690731904 +0000 UTC m=+0.121444367 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 20:30:39 compute-0 podman[495863]: 2025-10-02 20:30:39.707324355 +0000 UTC m=+0.119219869 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:30:39 compute-0 podman[495857]: 2025-10-02 20:30:39.723189117 +0000 UTC m=+0.135669267 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:30:40 compute-0 ceph-mon[191910]: pgmap v2582: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:40 compute-0 nova_compute[355794]: 2025-10-02 20:30:40.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:42 compute-0 ceph-mon[191910]: pgmap v2583: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:43 compute-0 nova_compute[355794]: 2025-10-02 20:30:43.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:44 compute-0 ceph-mon[191910]: pgmap v2584: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:45 compute-0 nova_compute[355794]: 2025-10-02 20:30:45.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:45 compute-0 ceph-mon[191910]: pgmap v2585: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:47 compute-0 ceph-mon[191910]: pgmap v2586: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:48 compute-0 nova_compute[355794]: 2025-10-02 20:30:48.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:49 compute-0 ceph-mon[191910]: pgmap v2587: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:50 compute-0 nova_compute[355794]: 2025-10-02 20:30:50.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:51 compute-0 podman[495954]: 2025-10-02 20:30:51.719093539 +0000 UTC m=+0.143103150 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:30:52 compute-0 ceph-mon[191910]: pgmap v2588: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.128880) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052129050, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 251, "total_data_size": 1735159, "memory_usage": 1766592, "flush_reason": "Manual Compaction"}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052210343, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1707496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52158, "largest_seqno": 53316, "table_properties": {"data_size": 1701879, "index_size": 3012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11721, "raw_average_key_size": 19, "raw_value_size": 1690711, "raw_average_value_size": 2841, "num_data_blocks": 135, "num_entries": 595, "num_filter_entries": 595, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759436936, "oldest_key_time": 1759436936, "file_creation_time": 1759437052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 81639 microseconds, and 11735 cpu microseconds.
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.210545) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1707496 bytes OK
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.210573) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.234289) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.234339) EVENT_LOG_v1 {"time_micros": 1759437052234328, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.234364) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1729867, prev total WAL file size 1729867, number of live WAL files 2.
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.235683) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1667KB)], [125(8915KB)]
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052235741, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10836881, "oldest_snapshot_seqno": -1}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6670 keys, 9123183 bytes, temperature: kUnknown
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052361359, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9123183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9080629, "index_size": 24777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175004, "raw_average_key_size": 26, "raw_value_size": 8961922, "raw_average_value_size": 1343, "num_data_blocks": 979, "num_entries": 6670, "num_filter_entries": 6670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759432089, "oldest_key_time": 0, "file_creation_time": 1759437052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11520233-f463-4c1a-a5e6-8f5a74526a6e", "db_session_id": "FLU3CD8VPCVBNG3UEXDY", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.361886) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9123183 bytes
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.382713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.1 rd, 72.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.7 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(11.7) write-amplify(5.3) OK, records in: 7184, records dropped: 514 output_compression: NoCompression
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.382763) EVENT_LOG_v1 {"time_micros": 1759437052382744, "job": 76, "event": "compaction_finished", "compaction_time_micros": 125893, "compaction_time_cpu_micros": 23827, "output_level": 6, "num_output_files": 1, "total_output_size": 9123183, "num_input_records": 7184, "num_output_records": 6670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052383588, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759437052386850, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.235368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.387237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.387249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.387254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.387258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mon[191910]: rocksdb: (Original Log Time 2025/10/02-20:30:52.387263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 20:30:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:53 compute-0 nova_compute[355794]: 2025-10-02 20:30:53.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:54 compute-0 ceph-mon[191910]: pgmap v2589: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:55 compute-0 nova_compute[355794]: 2025-10-02 20:30:55.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:30:56 compute-0 ceph-mon[191910]: pgmap v2590: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:58 compute-0 ceph-mon[191910]: pgmap v2591: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:58 compute-0 nova_compute[355794]: 2025-10-02 20:30:58.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:30:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:30:59 compute-0 podman[157186]: time="2025-10-02T20:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:30:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:30:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Oct 02 20:31:00 compute-0 ceph-mon[191910]: pgmap v2592: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:00 compute-0 nova_compute[355794]: 2025-10-02 20:31:00.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:01 compute-0 openstack_network_exporter[372736]: ERROR   20:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:31:01 compute-0 openstack_network_exporter[372736]: ERROR   20:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:31:01 compute-0 openstack_network_exporter[372736]: ERROR   20:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:31:01 compute-0 openstack_network_exporter[372736]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:31:01 compute-0 openstack_network_exporter[372736]: ERROR   20:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:31:01 compute-0 podman[495975]: 2025-10-02 20:31:01.647285438 +0000 UTC m=+0.077678660 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930)
Oct 02 20:31:01 compute-0 podman[495974]: 2025-10-02 20:31:01.651339983 +0000 UTC m=+0.079085186 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:31:02 compute-0 ceph-mon[191910]: pgmap v2593: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:03 compute-0 nova_compute[355794]: 2025-10-02 20:31:03.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:31:03
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Oct 02 20:31:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.314 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.314 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f343821b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438294170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f3438219580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f34381795b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
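
The fifteen "Registering pollster" entries above show the agent binding each stevedore Extension from the "pollsters" source to one shared ThreadPoolExecutor, with empty cache, pollster history, and discovery cache dicts. A minimal sketch of that wiring, assuming a hypothetical "example.pollsters" entry-point namespace and a get_samples() method on each plugin (the real registration is register_pollster_execution in ceilometer/polling/manager.py):

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load every plugin advertised under a (hypothetical) entry-point
    # namespace; iterating the manager yields stevedore Extension objects.
    pollsters = extension.ExtensionManager(namespace='example.pollsters',
                                           invoke_on_load=True)

    executor = ThreadPoolExecutor(max_workers=4)
    cache, history, discovery_cache = {}, {}, {}

    def run(ext):
        # Each task sees the shared dicts, mirroring the [{}] placeholders
        # logged for cache, pollster history, and discovery cache.
        return ext.obj.get_samples(cache=cache)

    # One task per registered pollster, all funneled through one executor.
    futures = {ext.name: executor.submit(run, ext) for ext in pollsters}
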
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.402 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77', 'name': 'test_0', 'flavor': {'id': '8f0521f8-dc4e-4ca1-bf77-f443ae74db03', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ce28338d-119e-49e1-ab67-60da8882593a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1c35486f37b94d43a7bf2f2fa09c70b9', 'user_id': '811fb7ac717e4ba9b9874e5454ee08f4', 'hostId': '0a709b9b1573f6989b501b180b12e229f7debe7b6b8960df8967de2d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
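
This discovery entry carries everything the pollsters below rely on: the instance UUID, the m1.small flavor (1 vCPU, 512 MB RAM, 1 GB root plus 1 GB ephemeral disk), the image, and the host. discover_libvirt_polling assembles it on the compute node itself; a minimal sketch of reading the same identifiers straight from libvirt, assuming a local qemu:///system socket:

    import libvirt  # libvirt-python

    # Read-only connection, as a monitoring agent on the hypervisor would use.
    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        # e.g. d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 instance-00000001 state=1
        print(dom.UUIDString(), dom.name(), 'state=%d' % state)
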
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:31:04.403298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.478 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.479 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.479 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
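
Each cycle opens with the coordination check logged at 20:31:04.403: this source has no coordination group, so the hashrings are [None] and the agent polls every local instance itself. When a group is configured, agents partition resources over a hash ring (ceilometer builds on tooz for this) so each resource is polled by exactly one agent. A toy illustration with tooz's HashRing, assuming two agent names:

    from tooz import hashring

    ring = hashring.HashRing(['compute-0', 'compute-1'])
    resource = b'd4e04444-ce39-4fb0-af9c-8fd9b0e6fb77'
    # get_nodes() hashes the resource onto the ring; this agent polls it
    # only if it is among the returned owners.
    if 'compute-0' in ring.get_nodes(resource):
        print('compute-0 polls', resource.decode())
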
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f343821b080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b0b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:31:04.480806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.513 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.514 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.514 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
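
The three disk.device.usage samples line up with the flavor above: 1073741824 bytes is exactly 1 GiB, one sample each for the 1 GB root and 1 GB ephemeral disks, plus a third, much smaller 485376-byte device. A one-line check of the unit:

    # 1 GiB in bytes, matching the flavor's disk=1 and ephemeral=1.
    assert 1024 ** 3 == 1073741824
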
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f343821b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.515 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.516 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f343821b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 7285327854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.517 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 37924984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.518 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
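
Every poll is bracketed by the same heartbeat pair: worker 14 announces the update and worker 12 stamps it via _update_status a moment later. A toy model of that bookkeeping, assuming nothing more than a dict keyed by meter name (sketch only; the real state lives in manager.py):

    from datetime import datetime, timezone

    heartbeats = {}

    def update_heartbeat(meter):
        # Record when the meter was last polled, as _update_status logs it.
        ts = datetime.now(timezone.utc).replace(tzinfo=None)
        heartbeats[meter] = ts
        print('Updated heartbeat for %s (%s)' % (meter, ts.isoformat()))

    update_heartbeat('disk.device.write.latency')
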
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f3438294140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.518 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438294170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438294170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.519 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:31:04.515700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:31:04.517459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:31:04.519222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.554 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
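
power.state reports volume 1 here, which matches libvirt's domain-state numbering (1 is VIR_DOMAIN_RUNNING) and the vm_state "running" in the discovery entry; that the pollster forwards the enum verbatim is an inference from those two facts, not something the log states. The relevant constants:

    import libvirt

    # libvirt domain states; under the reading above, the sample volume 1
    # corresponds to a running domain.
    states = {
        libvirt.VIR_DOMAIN_NOSTATE: 'nostate',  # 0
        libvirt.VIR_DOMAIN_RUNNING: 'running',  # 1
        libvirt.VIR_DOMAIN_PAUSED: 'paused',    # 3
        libvirt.VIR_DOMAIN_SHUTOFF: 'shutoff',  # 5
    }
    print(states[1])  # running
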
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f343821b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.555 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.556 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.556 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f343821b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821ba10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:31:04.555726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:31:04.557823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.562 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
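
The .delta meters report the change since the previous cycle, unlike the cumulative network.incoming.bytes counter polled later in this same cycle (volume 2730 at 20:31:04.592); a zero delta just means no new traffic landed between two polls. A sketch of the subtraction, assuming the manager keeps the prior cumulative reading per resource:

    # Previous and current cumulative byte counters for one interface.
    prev, curr = 2730, 2730
    delta = max(curr - prev, 0)  # clamp in case the counter reset
    print(delta)                 # 0, as logged for network.incoming.bytes.delta
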
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f343821b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.564 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:31:04.564741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f343821b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
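
Rate meters such as network.incoming.bytes.rate are skipped outright when discovery hands back nothing new for their source. A guard along these lines (sketch only; the actual check sits in _internal_pollster_run):

    def poll_if_discovered(meter, resources):
        # Mirror the logged behaviour: bail out before polling when the
        # discovery step produced no resources for this pollster.
        if not resources:
            print('Skip pollster %s, no new resources found this cycle' % meter)
            return []
        return ['<sample for %s>' % r for r in resources]

    poll_if_discovered('network.incoming.bytes.rate', [])
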
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f343821b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:31:04.567793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f343821ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.570 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:31:04.569824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f343821bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.572 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:31:04.572042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f343821bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343b0b3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.574 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:31:04.574428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f343821bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.575 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.576 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:31:04.576642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f343821afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343a859bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.579 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.579 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:31:04.578827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.579 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f343821bbc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bbf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.582 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.583 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.bytes volume: 2524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:31:04.582570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f3438cb78c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343bbc9c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.585 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:31:04.585353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.586 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.587 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
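
capacity, allocation, and usage for a device form the same triplet libvirt's blockInfo() returns (virtual capacity, allocated bytes, physical bytes); mapping the samples onto those fields is an inference from the matching values, not something the log spells out. A sketch, assuming the instance's first disk is exposed as "vda":

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # blockInfo() returns (capacity, allocation, physical) in bytes.
    capacity, allocation, physical = dom.blockInfo('vda')
    print(capacity, allocation, physical)
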
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f3438b32120>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f343821b4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.588 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.589 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/memory.usage volume: 48.8515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:31:04.589519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
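
memory.usage is reported in MB, and 48.8515625 is exactly 50024 / 1024, i.e. a KiB figure from the hypervisor divided by 1024, comfortably inside the flavor's 512 MB. libvirt's memoryStats() exposes such KiB counters (which keys appear depends on the guest balloon driver); computing usage as available minus unused is a common convention and assumed here rather than read from the pollster:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    stats = dom.memoryStats()  # values in KiB; key set depends on the balloon
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
        print('memory.usage %.7f MB' % usage_mb)
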
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f343821b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.592 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.incoming.bytes volume: 2730 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:31:04.592050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f343821bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.594 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceph-mon[191910]: pgmap v2594: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:31:04.594502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f343b606ba0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.595 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f3438219580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f3438219580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.596 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:31:04.596654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.597 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.597 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
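[annotation] The three disk.device.capacity samples are per-device byte counts for the instance's attached disks: two are exactly 1 GiB and the third is 474 KiB (plausibly a config drive, though the log does not say). A quick check of the arithmetic:

    # Convert the logged byte counts; the config-drive attribution above is
    # an assumption, the numbers are from the log.
    def human(nbytes: int) -> str:
        x = float(nbytes)
        for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
            if x < 1024 or unit == "TiB":
                return f"{x:g} {unit}"
            x /= 1024

    print(human(1073741824))  # -> 1 GiB
    print(human(485376))      # -> 474 KiB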
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f343821bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.599 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.599 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:31:04.599531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f343821be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.602 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:31:04.601799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f3438219550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.603 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343aa52780>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.604 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/cpu volume: 77930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:31:04.603866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
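[annotation] The cpu meter is cumulative guest CPU time in nanoseconds, so the 77930000000 above is roughly 78 seconds of CPU consumed since the instance started; a utilisation gauge has to be derived from two consecutive samples. A sketch, where the previous sample, the 120 s interval, and the single vCPU are all assumptions:

    prev_ns = 74_330_000_000          # assumed previous sample
    cur_ns = 77_930_000_000           # from the log
    interval_s, vcpus = 120.0, 1      # assumed polling period and vCPU count
    util_pct = (cur_ns - prev_ns) / 1e9 / interval_s / vcpus * 100
    print(f"{cur_ns / 1e9:.2f} s total, {util_pct:.1f}% over the interval")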
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f343821af00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f3438baf7a0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.605 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f343821aff0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.605 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 1786872188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:31:04.605940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 311003966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.606 14 DEBUG ceilometer.compute.pollsters [-] d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77/disk.device.read.latency volume: 139101644 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
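[annotation] disk.device.read.latency is likewise cumulative: time spent servicing reads per device (libvirt's rd_total_times, in nanoseconds), one sample per attached disk, matching the three capacity samples earlier. Mean latency per read needs the companion request counter; the request count below is invented for illustration:

    total_read_ns = 1_786_872_188  # from the log (first device)
    total_reads = 12_000           # hypothetical; take it from disk.device.read.requests
    print(f"{total_read_ns / total_reads / 1e6:.3f} ms per read on average")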
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.609 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:31:04 compute-0 ceilometer_agent_compute[366598]: 2025-10-02 20:31:04.610 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
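[annotation] The burst of "Finished processing pollster [...]" lines marks the end of the whole polling task: every meter in the task is checked off once its pass completes. If you need that meter list out of the journal, the message is regular enough to match; the regex below assumes the wording stays stable across releases:

    import re

    line = ('2025-10-02 20:31:04.610 14 DEBUG ceilometer.polling.manager [-] '
            'Finished processing pollster [cpu]. execute_polling_task_processing ...')
    m = re.search(r'Finished processing pollster \[([^\]]+)\]', line)
    print(m.group(1) if m else None)  # -> cpu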
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:31:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
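[annotation] Here the mgr's rbd_support module reloads its trash-purge and mirror-snapshot schedules, once per handler, for the vms, volumes, backups and images pools (hence the doubled lines; an empty start_after means it scans from the beginning). The CLI view of the same state, assuming a recent Ceph where these rbd subcommands exist:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        # lists any trash-purge schedules configured for the pool
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool], check=False)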
Oct 02 20:31:05 compute-0 nova_compute[355794]: 2025-10-02 20:31:05.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
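[annotation] The recurring nova_compute "[POLLIN] on fd 24 __log_wakeup" lines are python-ovs's poll loop logging that the OVSDB connection became readable; they are routine debug noise, not errors. The primitive underneath, from the same ovs library the logged path points at (the socketpair here is just a stand-in for the OVSDB socket):

    import socket
    from ovs import poller

    a, b = socket.socketpair()
    b.send(b"x")                          # make the other end readable
    p = poller.Poller()
    p.fd_wait(a.fileno(), poller.POLLIN)
    p.block()                             # returns once the fd is readable
    print("woke on POLLIN")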
Oct 02 20:31:05 compute-0 ceph-mon[191910]: pgmap v2595: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:07 compute-0 podman[496012]: 2025-10-02 20:31:07.700071648 +0000 UTC m=+0.118389087 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 20:31:07 compute-0 podman[496013]: 2025-10-02 20:31:07.746483404 +0000 UTC m=+0.146967400 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 20:31:07 compute-0 ceph-mon[191910]: pgmap v2596: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:08 compute-0 nova_compute[355794]: 2025-10-02 20:31:08.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:10 compute-0 ceph-mon[191910]: pgmap v2597: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:10 compute-0 nova_compute[355794]: 2025-10-02 20:31:10.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:10 compute-0 podman[496047]: 2025-10-02 20:31:10.69088898 +0000 UTC m=+0.105310878 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:31:10 compute-0 podman[496049]: 2025-10-02 20:31:10.706894846 +0000 UTC m=+0.106527719 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 02 20:31:10 compute-0 podman[496056]: 2025-10-02 20:31:10.71898246 +0000 UTC m=+0.113453919 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:31:10 compute-0 podman[496048]: 2025-10-02 20:31:10.726309441 +0000 UTC m=+0.132991557 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:31:10 compute-0 podman[496050]: 2025-10-02 20:31:10.745073108 +0000 UTC m=+0.149296771 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
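[annotation] These podman lines are the periodic health probes for the telemetry and networking containers: podman runs the healthcheck command baked into each container's config_data (the '/openstack/healthcheck ...' test) and records health_status plus the failing streak. The same probe can be run by hand; exit status 0 means healthy:

    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ceilometer_agent_ipmi"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")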
Oct 02 20:31:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:12 compute-0 ceph-mon[191910]: pgmap v2598: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:13 compute-0 nova_compute[355794]: 2025-10-02 20:31:13.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:31:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
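[annotation] Each effective_target_ratio/Pool pair above is the pg_autoscaler sizing one pool. With no target ratio or target bytes set (the "0.0 0.0 0" prefix), the raw PG target is the pool's share of used space times its bias times the cluster PG budget; the budget that reproduces every number in this log is 300, consistent with the default mon_target_pg_per_osd of 100 across this host's three OSDs (an inference, not something the module logs). The result is then quantized to a power of two and clamped by per-pool minimums, which is why tiny targets still read 32:

    # Reconstruction of the logged arithmetic; PG_BUDGET = 300 is the
    # inferred mon_target_pg_per_osd (100) * OSD count (3).
    PG_BUDGET = 300

    def pg_target(capacity_ratio: float, bias: float = 1.0) -> float:
        return capacity_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06))       # 0.002155... ('.mgr')
    print(pg_target(0.0005513950275118838))       # 0.165418... ('vms')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.000610... ('cephfs.cephfs.meta')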
Oct 02 20:31:14 compute-0 ceph-mon[191910]: pgmap v2599: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:14 compute-0 sudo[496151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:14 compute-0 sudo[496151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:14 compute-0 sudo[496151]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:14 compute-0 sudo[496176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:31:14 compute-0 sudo[496176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:14 compute-0 sudo[496176]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:14 compute-0 sudo[496201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:14 compute-0 sudo[496201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:14 compute-0 sudo[496201]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:14 compute-0 sudo[496226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 20:31:14 compute-0 sudo[496226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:15 compute-0 nova_compute[355794]: 2025-10-02 20:31:15.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:15 compute-0 sudo[496226]: pam_unix(sudo:session): session closed for user root
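[annotation] The sudo triplets (/bin/true, which python3, then a long-lived cephadm call) are the cephadm mgr module probing the host over its SSH channel before running the real command, here gather-facts, which reports host inventory as JSON. Reproducing it manually; the two keys printed are assumptions about the fact names:

    import json
    import subprocess

    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    print(facts.get("hostname"), facts.get("memory_total_kb"))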
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 3ad743a1-20ca-4a5c-b35a-c67324d49567 does not exist
Oct 02 20:31:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 49f341f2-bbc7-4488-9993-aa5d06bb5239 does not exist
Oct 02 20:31:15 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 40bfa216-b6bf-4cc0-bcaa-c5a5a50ed737 does not exist
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:31:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:31:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
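[annotation] Each handle_command/audit pair is the mgr (mgr.14130) driving the monitor with a JSON mon command: regenerating a minimal ceph.conf, fetching keyrings for client.admin and client.bootstrap-osd, and listing destroyed OSDs — the prep work for deploying OSDs. The same call path from Python through librados (the conffile path is an assumption; admin credentials are required):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(out.decode())
    cluster.shutdown()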
Oct 02 20:31:16 compute-0 sudo[496281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:16 compute-0 sudo[496281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:16 compute-0 sudo[496281]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:16 compute-0 sudo[496306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:31:16 compute-0 sudo[496306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:16 compute-0 sudo[496306]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:16 compute-0 sudo[496331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:16 compute-0 sudo[496331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:16 compute-0 sudo[496331]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:16 compute-0 ceph-mon[191910]: pgmap v2600: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 20:31:16 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:31:16 compute-0 sudo[496356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 02 20:31:16 compute-0 sudo[496356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
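[annotation] This is the actual OSD creation step: cephadm, under sudo, runs ceph-volume inside the pinned ceph container to turn three pre-created logical volumes into OSDs (--no-systemd because cephadm manages the units itself). Stripped of the image pin and the config piped on stdin, the invocation is equivalent to:

    import subprocess

    subprocess.run([
        "cephadm", "ceph-volume",
        "--fsid", "6019f664-a1c2-5955-8391-692cb79a59f9",
        "--", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd",
    ], check=True)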
Oct 02 20:31:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.053560294 +0000 UTC m=+0.089985010 container create dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.017768793 +0000 UTC m=+0.054193539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:17 compute-0 systemd[1]: Started libpod-conmon-dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd.scope.
Oct 02 20:31:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.215177044 +0000 UTC m=+0.251601810 container init dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.228201962 +0000 UTC m=+0.264626688 container start dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:31:17 compute-0 trusting_ardinghelli[496434]: 167 167
Oct 02 20:31:17 compute-0 systemd[1]: libpod-dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd.scope: Deactivated successfully.
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.242316189 +0000 UTC m=+0.278740885 container attach dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.243303924 +0000 UTC m=+0.279728630 container died dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7056f306cf9e0856cce0eeeeaf9d85620023a8603af970008c82747bbd717ca-merged.mount: Deactivated successfully.
Oct 02 20:31:17 compute-0 podman[496418]: 2025-10-02 20:31:17.381194158 +0000 UTC m=+0.417618874 container remove dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 02 20:31:17 compute-0 systemd[1]: libpod-conmon-dd417ec495c77ca96a84b8e5f55169a62a0beb032e4c1e16d93b456aaf9f19cd.scope: Deactivated successfully.
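[annotation] The create, init, start, attach, died, remove sequence, all inside half a second, is cephadm running a throwaway helper container from the ceph image; its only output ("167 167") looks like a probe of the image's ceph uid and gid, though the log does not label it. Lifecycles like this can be watched live with podman's event stream:

    import subprocess

    # Stream container lifecycle events as they happen (Ctrl-C to stop).
    subprocess.run(["podman", "events",
                    "--filter", "event=create",
                    "--filter", "event=died",
                    "--filter", "event=remove"])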
Oct 02 20:31:17 compute-0 podman[496457]: 2025-10-02 20:31:17.587978731 +0000 UTC m=+0.026949891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:17 compute-0 podman[496457]: 2025-10-02 20:31:17.613185746 +0000 UTC m=+0.052156876 container create 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 02 20:31:17 compute-0 systemd[1]: Started libpod-conmon-0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894.scope.
Oct 02 20:31:17 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:17 compute-0 podman[496457]: 2025-10-02 20:31:17.79881651 +0000 UTC m=+0.237787740 container init 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:31:17 compute-0 podman[496457]: 2025-10-02 20:31:17.812894956 +0000 UTC m=+0.251866126 container start 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 20:31:17 compute-0 podman[496457]: 2025-10-02 20:31:17.829543509 +0000 UTC m=+0.268514699 container attach 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 02 20:31:18 compute-0 nova_compute[355794]: 2025-10-02 20:31:18.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:18 compute-0 ceph-mon[191910]: pgmap v2601: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:19 compute-0 jolly_noyce[496473]: --> passed data devices: 0 physical, 3 LVM
Oct 02 20:31:19 compute-0 jolly_noyce[496473]: --> relative data size: 1.0
Oct 02 20:31:19 compute-0 jolly_noyce[496473]: --> All data devices are unavailable
Oct 02 20:31:19 compute-0 systemd[1]: libpod-0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894.scope: Deactivated successfully.
Oct 02 20:31:19 compute-0 podman[496457]: 2025-10-02 20:31:19.161289606 +0000 UTC m=+1.600260756 container died 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:31:19 compute-0 systemd[1]: libpod-0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894.scope: Consumed 1.274s CPU time.
Oct 02 20:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3a4cdef5db42e4db8104d998c5abc566507957e94f6d75be7dcb90b268befb-merged.mount: Deactivated successfully.
Oct 02 20:31:19 compute-0 podman[496457]: 2025-10-02 20:31:19.348206993 +0000 UTC m=+1.787178173 container remove 0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:31:19 compute-0 systemd[1]: libpod-conmon-0b4dd68fbf55237c551ced2e55488bf276c78c114e714d9a578347d1703c4894.scope: Deactivated successfully.
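
The jolly_noyce lines above ("passed data devices: 0 physical, 3 LVM", "relative data size: 1.0", "All data devices are unavailable") are ceph-volume's drive-group report: cephadm's probe found the three LVM data devices already consumed by deployed OSDs, so nothing new is created and the one-shot container exits and is removed, which is the expected steady-state pattern for these short-lived quay.io/ceph/ceph containers. A minimal sketch of confirming device availability by hand follows (fsid and image are copied from this log; the `inventory` subcommand is swapped in for the batch report, and root on compute-0 is assumed):

    # Hypothetical reproduction of the device probe cephadm runs above;
    # fsid/image are taken from this log, everything else is an assumption.
    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "ceph-volume", "--fsid", FSID,
         "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        state = "available" if dev["available"] else dev["rejected_reasons"]
        print(dev["path"], state)
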
Oct 02 20:31:19 compute-0 sudo[496356]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:19 compute-0 sudo[496512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:19 compute-0 sudo[496512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:19 compute-0 sudo[496512]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:19 compute-0 sudo[496538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:31:19 compute-0 sudo[496538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:19 compute-0 sudo[496538]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:19 compute-0 sudo[496563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:19 compute-0 sudo[496563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:19 compute-0 sudo[496563]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:19 compute-0 sudo[496588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- lvm list --format json
Oct 02 20:31:19 compute-0 sudo[496588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:31:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1867974840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:31:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:31:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1867974840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:31:20 compute-0 nova_compute[355794]: 2025-10-02 20:31:20.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:20 compute-0 ceph-mon[191910]: pgmap v2602: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1867974840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:31:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/1867974840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:31:20 compute-0 podman[496652]: 2025-10-02 20:31:20.483813454 +0000 UTC m=+0.034018595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:20 compute-0 podman[496652]: 2025-10-02 20:31:20.593952156 +0000 UTC m=+0.144157327 container create 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 20:31:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:20 compute-0 systemd[1]: Started libpod-conmon-4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36.scope.
Oct 02 20:31:20 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:20 compute-0 podman[496652]: 2025-10-02 20:31:20.946669582 +0000 UTC m=+0.496874743 container init 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:31:20 compute-0 podman[496652]: 2025-10-02 20:31:20.967140084 +0000 UTC m=+0.517345255 container start 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 20:31:20 compute-0 dreamy_carson[496666]: 167 167
Oct 02 20:31:20 compute-0 systemd[1]: libpod-4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36.scope: Deactivated successfully.
Oct 02 20:31:21 compute-0 podman[496652]: 2025-10-02 20:31:21.068519619 +0000 UTC m=+0.618724830 container attach 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 20:31:21 compute-0 podman[496652]: 2025-10-02 20:31:21.069915675 +0000 UTC m=+0.620120836 container died 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 20:31:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f968a989a89f7ae3c3c2db646db1b14b02453d9b13fa1855abd35cf963de898-merged.mount: Deactivated successfully.
Oct 02 20:31:21 compute-0 podman[496652]: 2025-10-02 20:31:21.425157656 +0000 UTC m=+0.975362817 container remove 4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:31:21 compute-0 systemd[1]: libpod-conmon-4cf33287a24d0f490e31763166391c8a3cc363c4d2636137fb5713f69766ef36.scope: Deactivated successfully.
Oct 02 20:31:21 compute-0 podman[496693]: 2025-10-02 20:31:21.717214596 +0000 UTC m=+0.054788095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:21 compute-0 podman[496693]: 2025-10-02 20:31:21.80933119 +0000 UTC m=+0.146904579 container create 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 02 20:31:21 compute-0 systemd[1]: Started libpod-conmon-43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557.scope.
Oct 02 20:31:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cde392ffc31ac8e53ece0723902f949593b40c10761a719878ddca0d1915f95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cde392ffc31ac8e53ece0723902f949593b40c10761a719878ddca0d1915f95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cde392ffc31ac8e53ece0723902f949593b40c10761a719878ddca0d1915f95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cde392ffc31ac8e53ece0723902f949593b40c10761a719878ddca0d1915f95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:22 compute-0 podman[496707]: 2025-10-02 20:31:22.03104489 +0000 UTC m=+0.164303010 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 20:31:22 compute-0 podman[496693]: 2025-10-02 20:31:22.080964958 +0000 UTC m=+0.418538447 container init 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 20:31:22 compute-0 podman[496693]: 2025-10-02 20:31:22.093930935 +0000 UTC m=+0.431504354 container start 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:31:22 compute-0 podman[496693]: 2025-10-02 20:31:22.100848024 +0000 UTC m=+0.438421453 container attach 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 20:31:22 compute-0 ceph-mon[191910]: pgmap v2603: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]: {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     "0": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "devices": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "/dev/loop3"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             ],
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_name": "ceph_lv0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_size": "21470642176",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "name": "ceph_lv0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "tags": {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_uuid": "1noJ2G-oKEz-sEhK-C3S0-GUjL-7iXC-cQsoWE",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_name": "ceph",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.crush_device_class": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.encrypted": "0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_fsid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_id": "0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.vdo": "0"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             },
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "vg_name": "ceph_vg0"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         }
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     ],
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     "1": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "devices": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "/dev/loop4"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             ],
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_name": "ceph_lv1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_size": "21470642176",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=82844b2c-c78f-4ec2-a159-b058e47d1cbd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "name": "ceph_lv1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "tags": {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_uuid": "lxKmeJ-wtN9-s3t7-WqgD-h5rj-UQiD-22KVSB",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_name": "ceph",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.crush_device_class": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.encrypted": "0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_fsid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_id": "1",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.vdo": "0"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             },
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "vg_name": "ceph_vg1"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         }
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     ],
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     "2": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "devices": [
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "/dev/loop5"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             ],
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_name": "ceph_lv2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_size": "21470642176",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=6019f664-a1c2-5955-8391-692cb79a59f9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afe0acfe-daf6-4901-80df-bc50bc9ae508,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "lv_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "name": "ceph_lv2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "tags": {
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.block_uuid": "LsCP5l-eNWg-R412-kAep-xsmD-1TER-c3qXRm",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.cluster_name": "ceph",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.crush_device_class": "",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.encrypted": "0",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_fsid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osd_id": "2",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:                 "ceph.vdo": "0"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             },
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "type": "block",
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:             "vg_name": "ceph_vg2"
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:         }
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]:     ]
Oct 02 20:31:22 compute-0 flamboyant_carson[496724]: }
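
The JSON emitted by flamboyant_carson is the payload of the `ceph-volume ... lvm list --format json` call dispatched at 20:31:19: top-level keys are OSD ids, each mapping to the logical volume(s) backing that OSD, with the cluster fsid, OSD fsid, and encryption state carried as LV tags. A minimal sketch (a hypothetical helper, not cephadm code) that reduces it to an OSD-to-device map:

    import json

    def osd_devices(payload: str) -> dict[int, dict]:
        """Map OSD id -> backing-device info from `ceph-volume lvm list --format json`."""
        result = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                if lv["type"] == "block":
                    result[int(osd_id)] = {
                        "lv_path": lv["lv_path"],    # e.g. /dev/ceph_vg0/ceph_lv0
                        "backing": lv["devices"],    # e.g. ["/dev/loop3"]
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return result

    # Applied to the payload above, this yields OSDs 0..2 backed by
    # /dev/loop3, /dev/loop4 and /dev/loop5 respectively.
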
Oct 02 20:31:23 compute-0 systemd[1]: libpod-43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557.scope: Deactivated successfully.
Oct 02 20:31:23 compute-0 podman[496736]: 2025-10-02 20:31:23.111218961 +0000 UTC m=+0.060971616 container died 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:31:23 compute-0 nova_compute[355794]: 2025-10-02 20:31:23.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cde392ffc31ac8e53ece0723902f949593b40c10761a719878ddca0d1915f95-merged.mount: Deactivated successfully.
Oct 02 20:31:24 compute-0 ceph-mon[191910]: pgmap v2604: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:24 compute-0 podman[496736]: 2025-10-02 20:31:24.503277026 +0000 UTC m=+1.453029601 container remove 43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_carson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:31:24 compute-0 systemd[1]: libpod-conmon-43d09749014c99e8502021c32ac8347ff3ae33e41fc92d3a8ccc61f2fbddd557.scope: Deactivated successfully.
Oct 02 20:31:24 compute-0 sudo[496588]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:24 compute-0 sudo[496749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:24 compute-0 sudo[496749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:24 compute-0 sudo[496749]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:24 compute-0 sudo[496774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 20:31:24 compute-0 sudo[496774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:24 compute-0 sudo[496774]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:24 compute-0 sudo[496799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:24 compute-0 sudo[496799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:24 compute-0 sudo[496799]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:25 compute-0 sudo[496824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/6019f664-a1c2-5955-8391-692cb79a59f9/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 6019f664-a1c2-5955-8391-692cb79a59f9 -- raw list --format json
Oct 02 20:31:25 compute-0 sudo[496824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
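
Having enumerated the LVM-backed OSDs, cephadm now dispatches the complementary `ceph-volume ... raw list --format json`, which reports OSDs prepared directly on raw block devices; on this host it should come back empty, since all three OSDs sit on logical volumes. A hedged sketch of running both listings side by side (assuming a `cephadm` binary on PATH, where the log invokes a copied cephadm with an explicit --image):

    import json
    import subprocess

    FSID = "6019f664-a1c2-5955-8391-692cb79a59f9"

    def ceph_volume(*args: str) -> dict:
        # Command form copied from the sudo COMMAND lines in this log.
        cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--",
               *args, "--format", "json"]
        return json.loads(
            subprocess.run(cmd, check=True, capture_output=True,
                           text=True).stdout)

    lvm = ceph_volume("lvm", "list")   # LVM-backed OSDs, keyed by OSD id
    raw = ceph_volume("raw", "list")   # raw-device OSDs (expected empty here)
    print(sorted(lvm), sorted(raw))    # expect ['0', '1', '2'] []
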
Oct 02 20:31:25 compute-0 nova_compute[355794]: 2025-10-02 20:31:25.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:25 compute-0 podman[496887]: 2025-10-02 20:31:25.751765499 +0000 UTC m=+0.056955391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:26 compute-0 podman[496887]: 2025-10-02 20:31:26.203108858 +0000 UTC m=+0.508298740 container create a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:31:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:26 compute-0 systemd[1]: Started libpod-conmon-a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4.scope.
Oct 02 20:31:26 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:26 compute-0 ceph-mon[191910]: pgmap v2605: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:26 compute-0 podman[496887]: 2025-10-02 20:31:26.808633504 +0000 UTC m=+1.113823446 container init a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 20:31:26 compute-0 podman[496887]: 2025-10-02 20:31:26.82849493 +0000 UTC m=+1.133684822 container start a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 20:31:26 compute-0 wonderful_shockley[496903]: 167 167
Oct 02 20:31:26 compute-0 systemd[1]: libpod-a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4.scope: Deactivated successfully.
Oct 02 20:31:27 compute-0 podman[496887]: 2025-10-02 20:31:27.453612985 +0000 UTC m=+1.758802927 container attach a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:31:27 compute-0 podman[496887]: 2025-10-02 20:31:27.455499104 +0000 UTC m=+1.760689026 container died a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.620 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.621 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.621 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.622 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:31:27 compute-0 nova_compute[355794]: 2025-10-02 20:31:27.623 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bcef1ed015782a90f9264ad5f77ed587830e4ba635bea3dc21c687973badf91-merged.mount: Deactivated successfully.
Oct 02 20:31:27 compute-0 ceph-mon[191910]: pgmap v2606: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:27 compute-0 podman[496887]: 2025-10-02 20:31:27.868496186 +0000 UTC m=+2.173686058 container remove a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shockley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:31:27 compute-0 systemd[1]: libpod-conmon-a9312432e8a85d2b1d963aa6e8e4727e958a2de4ef611435662b1a8429fb98a4.scope: Deactivated successfully.
Oct 02 20:31:28 compute-0 podman[496950]: 2025-10-02 20:31:28.053447042 +0000 UTC m=+0.030936255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 20:31:28 compute-0 podman[496950]: 2025-10-02 20:31:28.102815465 +0000 UTC m=+0.080304648 container create ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 20:31:28 compute-0 systemd[1]: Started libpod-conmon-ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813.scope.
Oct 02 20:31:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:31:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908007027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.221 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
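
The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess nova launched at 20:31:27.623 (returning 0 in 0.599s above) is how the resource-tracker audit sizes Ceph-backed storage instead of trusting local filesystem numbers. A sketch of reading the same totals, assuming the client.openstack keyring seen in the mon audit lines; the exact fields nova's RBD driver consumes are paraphrased here, not copied from nova:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)
    print("used  GiB:", stats["total_used_bytes"] / 1024**3)
    # Should line up with the mon's pgmap lines above (~60 GiB avail).
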
Oct 02 20:31:28 compute-0 systemd[1]: Started libcrun container.
Oct 02 20:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c547176ae159ea7c4f72296b9779d66bf01aea67f31a98b2ad4790f3bf7bf976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c547176ae159ea7c4f72296b9779d66bf01aea67f31a98b2ad4790f3bf7bf976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c547176ae159ea7c4f72296b9779d66bf01aea67f31a98b2ad4790f3bf7bf976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c547176ae159ea7c4f72296b9779d66bf01aea67f31a98b2ad4790f3bf7bf976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 20:31:28 compute-0 podman[496950]: 2025-10-02 20:31:28.303641694 +0000 UTC m=+0.281130907 container init ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 20:31:28 compute-0 podman[496950]: 2025-10-02 20:31:28.313548141 +0000 UTC m=+0.291037324 container start ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 20:31:28 compute-0 podman[496950]: 2025-10-02 20:31:28.377535064 +0000 UTC m=+0.355024337 container attach ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.507 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.508 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.509 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:31:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.958 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.961 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.961 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:31:28 compute-0 nova_compute[355794]: 2025-10-02 20:31:28.962 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
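
The two lockutils lines show the resource tracker serializing on the "compute_resources" semaphore before recomputing usage. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the function name here is hypothetical, only the lock name comes from the log:

    from oslo_concurrency import lockutils

    # Same decorator pattern as above; callers block until the named
    # semaphore is free, and the library emits the acquired/released
    # debug lines seen in the log (including waited/held timings).
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # critical section: recompute and publish resource usage

    update_available_resource()
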
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.032 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.032 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.032 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.049 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing inventories for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.068 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating ProviderTree inventory for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.069 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Updating inventory in ProviderTree for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
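
The inventory dict in the two lines above is what placement ends up enforcing: the allocatable amount per resource class is (total - reserved) * allocation_ratio. Re-deriving the capacities from the logged numbers:

    # Effective capacity per resource class, from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
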
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.088 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing aggregate associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.120 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Refreshing trait associations for resource provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0, traits: COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:31:29 compute-0 nova_compute[355794]: 2025-10-02 20:31:29.165 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:31:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1908007027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
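
Nova's storage usage check is the subprocess call logged just above, together with its dispatch on the mon. A standalone sketch of the same call; it assumes the host has /etc/ceph/ceph.conf plus the client.openstack keyring, and that the stats field names follow current Ceph releases:

    import json
    import subprocess

    # Same command oslo.processutils runs in the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print("bytes total/avail:",
          stats["total_bytes"], stats["total_avail_bytes"])
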
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]: {
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     "82844b2c-c78f-4ec2-a159-b058e47d1cbd": {
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_id": 1,
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_uuid": "82844b2c-c78f-4ec2-a159-b058e47d1cbd",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "type": "bluestore"
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     },
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     "afe0acfe-daf6-4901-80df-bc50bc9ae508": {
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_id": 2,
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_uuid": "afe0acfe-daf6-4901-80df-bc50bc9ae508",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "type": "bluestore"
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     },
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_id": 0,
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:         "type": "bluestore"
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]:     }
Oct 02 20:31:29 compute-0 eloquent_lovelace[496967]: }
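
The JSON block the eloquent_lovelace container just printed (a cephadm-driven ceph-volume run, judging by the ceph image and the output shape) maps OSD uuids to their backing devices. A sketch inverting it into an osd_id -> device table, abbreviated to one of the three entries above:

    import json

    # One entry copied from the container output above.
    raw = '''{
      "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48": {
        "ceph_fsid": "6019f664-a1c2-5955-8391-692cb79a59f9",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "dbf9fafa-1ebf-4d35-9eb5-39ce94ab8a48",
        "type": "bluestore"
      }
    }'''

    by_osd = {v["osd_id"]: v["device"] for v in json.loads(raw).values()}
    print(by_osd)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
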
Oct 02 20:31:29 compute-0 systemd[1]: libpod-ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813.scope: Deactivated successfully.
Oct 02 20:31:29 compute-0 podman[496950]: 2025-10-02 20:31:29.428865074 +0000 UTC m=+1.406354287 container died ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 20:31:29 compute-0 systemd[1]: libpod-ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813.scope: Consumed 1.069s CPU time.
Oct 02 20:31:29 compute-0 podman[157186]: time="2025-10-02T20:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:31:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:31:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733389586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.121 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.957s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.138 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.165 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.167 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.167 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:31:30 compute-0 nova_compute[355794]: 2025-10-02 20:31:30.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:30 compute-0 ceph-mon[191910]: pgmap v2607: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:30 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1733389586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c547176ae159ea7c4f72296b9779d66bf01aea67f31a98b2ad4790f3bf7bf976-merged.mount: Deactivated successfully.
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.168 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.168 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.169 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:31:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:31 compute-0 openstack_network_exporter[372736]: ERROR   20:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:31:31 compute-0 openstack_network_exporter[372736]: ERROR   20:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:31:31 compute-0 openstack_network_exporter[372736]: ERROR   20:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:31:31 compute-0 openstack_network_exporter[372736]: ERROR   20:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:31:31 compute-0 openstack_network_exporter[372736]: ERROR   20:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.773 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.774 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.774 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:31:31 compute-0 nova_compute[355794]: 2025-10-02 20:31:31.775 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:31:32.354 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:31:32.355 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:31:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:31:32.356 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:31:32 compute-0 ceph-mon[191910]: pgmap v2608: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:33 compute-0 podman[496950]: 2025-10-02 20:31:33.196964144 +0000 UTC m=+5.174453357 container remove ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 20:31:33 compute-0 sudo[496824]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 02 20:31:33 compute-0 systemd[1]: libpod-conmon-ccb483026e94b0235564a54e41c664d2d9aff218008f99f111ed9b101314d813.scope: Deactivated successfully.
Oct 02 20:31:33 compute-0 podman[157186]: @ - - [02/Oct/2025:20:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47830 "" "Go-http-client/1.1"
Oct 02 20:31:33 compute-0 podman[497035]: 2025-10-02 20:31:33.351862769 +0000 UTC m=+0.764975819 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:31:33 compute-0 podman[497036]: 2025-10-02 20:31:33.372223918 +0000 UTC m=+0.782543696 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:31:33 compute-0 podman[157186]: @ - - [02/Oct/2025:20:31:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9121 "" "Go-http-client/1.1"
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:33 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.670 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
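
The cache refresh above carries nova's full network model for the instance; the interesting bits are the fixed/floating address pairs. A sketch walking that structure, trimmed to the addressing fields from the logged entry:

    # network_info trimmed to the addressing fields logged above.
    network_info = [{
        "id": "24e0cf3f-162d-4105-9362-fc5616a6815a",
        "network": {"subnets": [{
            "ips": [{
                "address": "192.168.0.37", "type": "fixed",
                "floating_ips": [{"address": "192.168.122.205",
                                  "type": "floating"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats or "<none>")
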
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.697 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.698 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.699 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.699 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.700 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.700 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.701 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.701 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:33 compute-0 nova_compute[355794]: 2025-10-02 20:31:33.702 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:31:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:31:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 02 20:31:34 compute-0 nova_compute[355794]: 2025-10-02 20:31:34.103 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:34 compute-0 ceph-mon[191910]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:34 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev fc694a2b-73cb-479d-acb7-5661de47459c does not exist
Oct 02 20:31:34 compute-0 ceph-mgr[192222]: [progress WARNING root] complete: ev 404cba15-1469-439d-a6de-8f790efd7e37 does not exist
Oct 02 20:31:34 compute-0 sudo[497076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:31:34 compute-0 sudo[497076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:34 compute-0 sudo[497076]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:34 compute-0 sudo[497101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 20:31:34 compute-0 sudo[497101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 20:31:34 compute-0 sudo[497101]: pam_unix(sudo:session): session closed for user root
Oct 02 20:31:34 compute-0 ceph-mon[191910]: pgmap v2609: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:34 compute-0 ceph-mon[191910]: from='mgr.14130 192.168.122.100:0/42525313' entity='mgr.compute-0.uktbkz' 
Oct 02 20:31:35 compute-0 nova_compute[355794]: 2025-10-02 20:31:35.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:36 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:36 compute-0 ceph-mon[191910]: pgmap v2610: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:36 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:38 compute-0 ceph-mon[191910]: pgmap v2611: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:38 compute-0 nova_compute[355794]: 2025-10-02 20:31:38.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:38 compute-0 nova_compute[355794]: 2025-10-02 20:31:38.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:31:38 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:38 compute-0 podman[497127]: 2025-10-02 20:31:38.692511494 +0000 UTC m=+0.115678237 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64)
Oct 02 20:31:38 compute-0 podman[497126]: 2025-10-02 20:31:38.754904186 +0000 UTC m=+0.173024208 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, tcib_managed=true)
Oct 02 20:31:40 compute-0 ceph-mon[191910]: pgmap v2612: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:40 compute-0 nova_compute[355794]: 2025-10-02 20:31:40.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:40 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:41 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:41 compute-0 podman[497168]: 2025-10-02 20:31:41.688562181 +0000 UTC m=+0.105396430 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 02 20:31:41 compute-0 podman[497166]: 2025-10-02 20:31:41.703932381 +0000 UTC m=+0.124273161 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 20:31:41 compute-0 podman[497175]: 2025-10-02 20:31:41.707327079 +0000 UTC m=+0.113957653 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:31:41 compute-0 podman[497167]: 2025-10-02 20:31:41.72699252 +0000 UTC m=+0.143845579 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:31:41 compute-0 podman[497169]: 2025-10-02 20:31:41.759268479 +0000 UTC m=+0.158226713 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0)
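
The health_status=healthy events in this burst come from podman's periodic healthcheck runs against the edpm containers. The same state can be read back on demand; a sketch using one container name from the events and the .State.Health path as exposed by podman 4.x inspect (older releases expose .State.Healthcheck instead):

    import subprocess

    # Ad-hoc read of the health state podman logs above.
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_controller"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("ovn_controller:", status)  # "healthy" while the checks pass
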
Oct 02 20:31:42 compute-0 ceph-mon[191910]: pgmap v2613: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:42 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:43 compute-0 nova_compute[355794]: 2025-10-02 20:31:43.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:44 compute-0 ceph-mon[191910]: pgmap v2614: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:44 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:45 compute-0 nova_compute[355794]: 2025-10-02 20:31:45.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:46 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:46 compute-0 ceph-mon[191910]: pgmap v2615: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:46 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:47 compute-0 sshd-session[497262]: Accepted publickey for zuul from 192.168.122.10 port 34618 ssh2: ECDSA SHA256:K95ar0T1ZZ0Bv6US9xinFLn1XCFTYcZiJ3U4NONuMu8
Oct 02 20:31:47 compute-0 systemd-logind[793]: New session 68 of user zuul.
Oct 02 20:31:47 compute-0 systemd[1]: Started Session 68 of User zuul.
Oct 02 20:31:47 compute-0 sshd-session[497262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 20:31:48 compute-0 sudo[497266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 20:31:48 compute-0 sudo[497266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
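
The sudo line records the start of diagnostics collection: a fresh /var/tmp/sos-osp followed by an sos report limited to the container, openstack_edpm, system, storage and virt profiles. A sketch reproducing that sequence from Python, assuming root privileges and the sos package:

    import shutil
    import subprocess
    from pathlib import Path

    tmp = Path("/var/tmp/sos-osp")
    shutil.rmtree(tmp, ignore_errors=True)  # rm -rf, as in the log
    tmp.mkdir()

    # Same flags as the logged command: --batch skips prompts,
    # -p restricts collection to the listed profiles.
    subprocess.run(
        ["sos", "report", "--batch", "--all-logs", f"--tmp-dir={tmp}",
         "-p", "container,openstack_edpm,system,storage,virt"],
        check=True,
    )
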
Oct 02 20:31:48 compute-0 ceph-mon[191910]: pgmap v2616: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:48 compute-0 nova_compute[355794]: 2025-10-02 20:31:48.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:48 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:50 compute-0 ceph-mon[191910]: pgmap v2617: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:50 compute-0 nova_compute[355794]: 2025-10-02 20:31:50.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:50 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:51 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:52 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:53 compute-0 nova_compute[355794]: 2025-10-02 20:31:53.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:54 compute-0 podman[497377]: 2025-10-02 20:31:54.235739098 +0000 UTC m=+1.649262759 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:31:54 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:55 compute-0 nova_compute[355794]: 2025-10-02 20:31:55.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:55 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15847 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:31:55 compute-0 ceph-mon[191910]: pgmap v2618: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:56 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:31:56 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15849 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:31:56 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:57 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 20:31:57 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240602999' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:31:57 compute-0 ceph-mon[191910]: pgmap v2619: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:57 compute-0 ceph-mon[191910]: pgmap v2620: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:57 compute-0 ceph-mon[191910]: from='client.15847 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:31:57 compute-0 ceph-mon[191910]: from='client.15849 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:31:58 compute-0 nova_compute[355794]: 2025-10-02 20:31:58.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:31:58 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:31:58 compute-0 ceph-mon[191910]: pgmap v2621: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:31:58 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/240602999' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:31:59 compute-0 podman[157186]: time="2025-10-02T20:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:31:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:31:59 compute-0 podman[157186]: @ - - [02/Oct/2025:20:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9128 "" "Go-http-client/1.1"
Oct 02 20:32:00 compute-0 ovs-vsctl[497577]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
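
The ovs-vsctl error above is only a probe for a key that was never set in other_config; the same lookup with --if-exists returns an empty value instead of logging a db_ctl_base error:

    import subprocess

    # Non-fatal variant of the failing lookup from the log.
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("dpdk-init:", out or "<unset>")
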
Oct 02 20:32:00 compute-0 ceph-mon[191910]: pgmap v2622: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:32:00 compute-0 nova_compute[355794]: 2025-10-02 20:32:00.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:00 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:32:01 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:01 compute-0 openstack_network_exporter[372736]: ERROR   20:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:32:01 compute-0 openstack_network_exporter[372736]: ERROR   20:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:32:01 compute-0 openstack_network_exporter[372736]: ERROR   20:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:32:01 compute-0 openstack_network_exporter[372736]: ERROR   20:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:32:01 compute-0 openstack_network_exporter[372736]: ERROR   20:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:32:01 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 20:32:01 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 20:32:01 compute-0 virtqemud[153606]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
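
The three virtqemud failures are the modular libvirt daemon probing the read-only sockets of its peer daemons (virtnetworkd, virtnwfilterd, virtstoraged), which are not running here. On a host where libvirt is systemd-managed, the daemons' socket units provide those paths; a remediation sketch under that assumption (this EDPM node runs libvirt containerized, so the unit names may not apply):

    import subprocess

    # libvirt's modular daemons ship socket-activated units; starting the
    # .socket unit creates the corresponding -sock-ro path on demand.
    for unit in ("virtnetworkd.socket", "virtnwfilterd.socket",
                 "virtstoraged.socket"):
        subprocess.run(["systemctl", "start", unit], check=False)
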
Oct 02 20:32:02 compute-0 sshd-session[497765]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 20:32:02 compute-0 sshd-session[497765]: Connection reset by 45.140.17.97 port 63497
Oct 02 20:32:02 compute-0 ceph-mon[191910]: pgmap v2623: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:32:02 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Oct 02 20:32:02 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: cache status {prefix=cache status} (starting...)
Oct 02 20:32:02 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: client ls {prefix=client ls} (starting...)
Oct 02 20:32:03 compute-0 lvm[497921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 02 20:32:03 compute-0 lvm[497921]: VG ceph_vg2 finished
Oct 02 20:32:03 compute-0 lvm[497969]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 02 20:32:03 compute-0 lvm[497969]: VG ceph_vg1 finished
Oct 02 20:32:03 compute-0 nova_compute[355794]: 2025-10-02 20:32:03.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:03 compute-0 podman[497980]: 2025-10-02 20:32:03.507258143 +0000 UTC m=+0.107690949 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:32:03 compute-0 lvm[498029]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 20:32:03 compute-0 lvm[498029]: VG ceph_vg0 finished
Oct 02 20:32:03 compute-0 podman[498016]: 2025-10-02 20:32:03.629972502 +0000 UTC m=+0.116311664 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.vendor=CentOS)
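
Each container health_status event above is podman running the container's configured healthcheck (the '/openstack/healthcheck ...' test in config_data) and recording the verdict plus the failing streak. The same fields can be read back with podman inspect; a sketch assuming the CLI on this host:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "ceilometer_agent_compute"],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)[0]["State"]["Health"]
    # Mirrors health_status=healthy, health_failing_streak=0 in the event.
    print(health["Status"], health["FailingStreak"])
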
Oct 02 20:32:03 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Optimize plan auto_2025-10-02_20:32:03
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] do_upmap
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms']
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: [balancer INFO root] prepared 0/10 changes
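
The balancer module run above is the periodic upmap optimizer: it walks the listed pools under a 5% misplaced-objects budget ("max misplaced 0.050000") and here prepared 0/10 changes because the PGs are already balanced. The matching CLI calls show up in the audit lines that follow; a sketch (the budget's option name is taken from upstream Ceph and is an assumption here):

    import subprocess

    subprocess.run(["ceph", "config", "get", "mgr",
                    "target_max_misplaced_ratio"], check=False)
    subprocess.run(["ceph", "balancer", "status"], check=False)
    subprocess.run(["ceph", "balancer", "eval"], check=False)
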
Oct 02 20:32:03 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 20:32:03 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15853 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 20:32:04 compute-0 ceph-mon[191910]: pgmap v2624: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:32:04 compute-0 ceph-mgr[192222]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 20:32:04 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 20:32:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 20:32:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183921300' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: ops {prefix=ops} (starting...)
Oct 02 20:32:05 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15861 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:32:05 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:32:05.395+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
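
The "(95) Operation not supported" replies are expected here: 'healthcheck history ls' (and 'insights' further down) are served by mgr modules that are not enabled on this cluster, and the reply itself names the fix. A sketch of the suggested commands, assuming the ceph CLI is available:

    import subprocess

    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"],
                   check=False)
    # Afterwards the failing command should dispatch normally:
    subprocess.run(["ceph", "healthcheck", "history", "ls"], check=False)
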
Oct 02 20:32:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 20:32:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603553829' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:32:05 compute-0 nova_compute[355794]: 2025-10-02 20:32:05.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:05 compute-0 ceph-mon[191910]: from='client.15853 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mon[191910]: from='client.15855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/183921300' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1603553829' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: session ls {prefix=session ls} (starting...)
Oct 02 20:32:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 20:32:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914677543' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 20:32:05 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 20:32:05 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268524546' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mds[219722]: mds.cephfs.compute-0.fuygbr asok_command: status {prefix=status} (starting...)
Oct 02 20:32:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 20:32:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696942296' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 20:32:06 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292012613' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 20:32:06 compute-0 ceph-mon[191910]: pgmap v2625: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 02 20:32:06 compute-0 ceph-mon[191910]: from='client.15861 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2914677543' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1268524546' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1696942296' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1292012613' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 20:32:06 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15873 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 20:32:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415234189' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:32:07 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15877 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 20:32:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505739660' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:32:07 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 20:32:07 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797013825' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 20:32:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914941669' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: pgmap v2626: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.15873 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3415234189' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.15877 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2505739660' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2797013825' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1914941669' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:32:08 compute-0 nova_compute[355794]: 2025-10-02 20:32:08.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 20:32:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374597528' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 20:32:08 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094427709' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Oct 02 20:32:08 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15889 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:08 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 20:32:08 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:32:08.853+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 02 20:32:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 20:32:09 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486020227' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 20:32:09 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316137830' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1374597528' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1094427709' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2486020227' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15895 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:09 compute-0 podman[498725]: 2025-10-02 20:32:09.671720136 +0000 UTC m=+0.093917052 container health_status 0edf2028503ee6d126edbc99ab7d100727f6564fcae5e0f6b81e40a4188e0f17 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct 02 20:32:09 compute-0 podman[498733]: 2025-10-02 20:32:09.700558475 +0000 UTC m=+0.127689239 container health_status 584ebcc494f2efd7b038f38d2b81e207bf2c7d7c26953166f90991358ba5a733 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public)
Oct 02 20:32:09 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15898 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:09 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 20:32:09 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635963952' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:02.814044+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:03.814582+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:04.815009+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a5000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a6000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
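
The heartbeat store_statfs triples are hex byte counts (read here as available / internally reserved / total, with field order assumed from the BlueStore dump format), and they cross-check against the pgmap lines: decoding the values above shows roughly 20 GiB per OSD, and osd.2 plus peers [0,1] makes three OSDs, matching the "60 GiB / 60 GiB avail" totals:

    GiB = 1 << 30
    total = 0x4ffc00000        # per-OSD device size from store_statfs
    available = 0x4fb8a5000
    print(round(total / GiB, 2))                # ~20.0 GiB per OSD
    print(round(3 * total / GiB))               # ~60 GiB, as in pgmap
    print(round((total - available) / GiB, 3))  # per-OSD used, ~0.066 GiB
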
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.815512+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fb8a6000/0x0/0x4ffc00000, data 0x10eae5/0x1d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910605 data_alloc: 218103808 data_used: 352256
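
The prioritycache tune_memory / _resize_shards pairs are the OSD's memory autotuner at work: the 4294967296-byte target is a 4 GiB osd_memory_target, the heap sits far below it (~120 MiB), so the computed cache budget stays at 2845415832 bytes and is then split across the kv/onode/meta/data shards. A sketch for inspecting or resizing the knob (option name per upstream Ceph; CLI availability assumed):

    import subprocess

    subprocess.run(["ceph", "config", "get", "osd", "osd_memory_target"],
                   check=False)
    # Example only: a 6 GiB target for larger hosts.
    # subprocess.run(["ceph", "config", "set", "osd",
    #                 "osd_memory_target", str(6 * 1024**3)], check=False)
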
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 28082176 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.815916+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 ms_handle_reset con 0x563964e4e800 session 0x563966ca7860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.816295+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.816690+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.817103+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.817669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966835 data_alloc: 218103808 data_used: 360448
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.818020+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.818472+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.818814+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fb116000/0x0/0x4ffc00000, data 0x89c062/0x967000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 35987456 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.819233+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.120377541s of 14.281970024s, submitted: 27
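
The _kv_sync_thread utilization lines quantify how busy BlueStore's RocksDB sync thread is; here it sat idle for nearly the whole sampling window, consistent with the near-zero client I/O in the pgmap lines:

    idle, window = 14.120377541, 14.281970024   # values from the line above
    print(f"idle {idle / window:.1%}, busy {1 - idle / window:.1%}")  # ~98.9% idle
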
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 35962880 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.819519+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 133 ms_handle_reset con 0x56396583ac00 session 0x563964ef83c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917633 data_alloc: 218103808 data_used: 368640
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.819804+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.820095+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.820429+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.820759+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.821055+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917633 data_alloc: 218103808 data_used: 368640
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x112233/0x1de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.821567+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.821882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.822276+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.822728+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 36552704 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.971796036s of 10.189674377s, submitted: 38
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.823051+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563964c64000 auth_method 0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.823480+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.823827+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.824080+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.824499+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.824819+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.825076+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.825313+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 36528128 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.825635+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.825992+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.826469+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.826937+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.827492+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.828497+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.828925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.829347+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.829840+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.830195+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 36519936 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.830591+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.830921+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.831521+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.831987+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.832360+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 36511744 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.832831+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.833097+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.833760+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.834140+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.834618+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.835081+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.835568+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.835929+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.836130+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.836608+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.836986+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.837202+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7023 writes, 27K keys, 7023 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7023 writes, 1462 syncs, 4.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 644 writes, 1936 keys, 644 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s
                                            Interval WAL: 644 writes, 290 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.837602+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.838006+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.838560+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.838953+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.839206+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.839711+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.840150+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.840648+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.841017+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.841528+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.841909+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.842240+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.842541+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.842915+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.843189+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.843701+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.843985+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.844327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.844770+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.845019+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.845542+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.845879+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.846262+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.846588+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.846996+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.847533+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.847944+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.848304+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.848645+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.848989+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.849481+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.849890+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.850263+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.850799+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.851031+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.851364+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.851770+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.852149+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.852520+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.852720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.853108+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.853511+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.853725+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.854141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.854655+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.855034+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.855522+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.855914+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.856302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.856694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.857110+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.857678+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.858193+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.858666+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.859836+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.860471+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.861250+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.861776+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.862495+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.862980+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.863780+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.864370+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.865560+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.865950+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.866705+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.867113+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.867645+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.868294+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.868804+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.869180+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.869589+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.870004+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.870577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.870935+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.871316+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.871824+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.872521+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.873216+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.873715+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.874177+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.874612+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.874944+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921807 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.875480+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.875983+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89c000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 36503552 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.876553+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.192123413s of 120.216201782s, submitted: 15
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 36470784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.876961+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 36470784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.877323+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 36438016 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.877587+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.878058+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.878321+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.878798+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.879240+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.879482+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.879730+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.880183+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.880714+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.881120+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.881724+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.882131+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.882676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.883137+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:41.883637+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:42.884062+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:43.884531+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:44.884963+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:45.885542+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:46.885938+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:47.886151+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:48.886463+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:49.886864+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:50.887195+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:51.888214+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:52.888592+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:53.888964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:54.889315+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:55.889862+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:56.890274+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:57.890661+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:58.891193+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:59.891669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:00.892151+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:01.892612+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:02.893152+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:03.893626+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:04.894673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:05.895137+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:06.895664+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:07.896083+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:08.896603+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:09.896964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:10.897353+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:11.897820+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:12.898224+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:13.898607+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:14.899033+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:15.899624+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:16.900023+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:17.900554+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:18.900980+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:19.901262+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:20.901724+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:21.902195+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:22.902727+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:23.903076+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:24.903443+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:25.903857+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:26.904320+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:27.904607+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:28.904911+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:29.905334+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:30.905528+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:31.906056+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:32.906610+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:33.907120+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:34.907399+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:35.907924+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:36.908486+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:37.908959+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:38.909633+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:39.910053+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:40.910340+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:41.910807+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:42.911226+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:43.911575+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:44.911849+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:45.912235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:46.912545+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:47.912917+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:48.913251+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:49.913598+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:50.913917+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:51.914252+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:52.914517+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 36429824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.914786+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.915121+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.915663+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:56.916037+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:57.916331+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:58.916694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:59.916943+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:00.917290+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:01.917608+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:02.917873+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:03.918111+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:04.918490+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:05.918932+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:06.919431+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:07.921292+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 36421632 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.921729+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.921965+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.922511+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:11.922887+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:12.923332+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.923704+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.924240+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.924559+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.925034+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.925469+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.925817+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.926044+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.926332+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.926631+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.927075+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.927505+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.927980+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.928729+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.929142+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.929589+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.930007+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.930545+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.931008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.931492+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.931970+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.932470+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.932851+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.933346+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.933869+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.934235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.934676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.934927+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.935561+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.935916+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.936500+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.937068+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.937470+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.938107+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.938646+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.939100+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.939633+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.940003+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.940497+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.940798+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.941275+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.941656+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.942024+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.942321+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.942681+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.942922+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.943183+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.943606+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.943913+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.944183+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.944621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.944978+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.945300+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.946016+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.946478+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x113c96/0x1e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.946861+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920927 data_alloc: 218103808 data_used: 376832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.947141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.947611+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.764251709s of 164.363204956s, submitted: 90
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 36413440 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.948109+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 135 ms_handle_reset con 0x5639683e6400 session 0x563964c45680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.948531+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583a800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.948754+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb098000/0x0/0x4ffc00000, data 0x915836/0x9e5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 36372480 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068030 data_alloc: 218103808 data_used: 385024
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 ms_handle_reset con 0x56396583a800 session 0x56396850e000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.949129+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1585836/0x1655000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.949470+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.949905+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.950266+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.950687+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.950992+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.951464+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.951685+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.952044+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.952734+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.953153+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.953632+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.954018+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.954366+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.954873+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.955257+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.955492+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 36364288 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.955999+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.956561+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.957002+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072204 data_alloc: 218103808 data_used: 393216
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.957526+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.957851+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.958842+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.959217+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.959827+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.960242+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.960673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.961175+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.961438+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.961795+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.962224+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.962549+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.962927+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.963261+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.963605+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.963952+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.964357+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.964848+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x15873b3/0x1658000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.965199+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.965539+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072364 data_alloc: 218103808 data_used: 397312
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.965883+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.966238+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.966514+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.095813751s of 46.311756134s, submitted: 20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x563963576000 session 0x563964ee8000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.967088+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1588f63/0x165d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x563964e4e800 session 0x5639690e4960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 137 ms_handle_reset con 0x56396583ac00 session 0x5639690e5c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.967667+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 36356096 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080270 data_alloc: 218103808 data_used: 405504
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.968054+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x158ab11/0x165f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 138 ms_handle_reset con 0x5639683e6400 session 0x563965dc10e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.968525+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.968849+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.969226+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x158ab01/0x165e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.969582+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.969925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079557 data_alloc: 218103808 data_used: 405504
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.970225+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 36290560 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6800 session 0x563964ef63c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964fcb2c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964b09c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.970687+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639656f1a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 36282368 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563967403860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804dc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804dc00 session 0x563966a84b40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.971137+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.078349113s of 11.226018906s, submitted: 37
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964d21e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 89022464 unmapped: 36282368 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.971617+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964cff2c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa41d000/0x0/0x4ffc00000, data 0x158c564/0x1661000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 30121984 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.972169+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102291 data_alloc: 218103808 data_used: 7225344
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa41d000/0x0/0x4ffc00000, data 0x158c564/0x1661000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 30121984 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639686d3e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563965812d20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.972433+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d800 session 0x563965be70e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563967fcba40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x563967fca1e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964ee8b40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639674901e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x5639675c9860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x563966ca70e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639656323c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x563965632000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x563964ee9680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.972786+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cda000/0x0/0x4ffc00000, data 0x1ccd5d6/0x1da4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.973217+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.973593+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.973988+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171626 data_alloc: 218103808 data_used: 7225344
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.974638+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.974941+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cda000/0x0/0x4ffc00000, data 0x1ccd5d6/0x1da4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 29261824 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.975697+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396583ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396583ac00 session 0x5639690e4b40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.976254+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639675c92c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.976695+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172916 data_alloc: 218103808 data_used: 7229440
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 29294592 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.976937+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 29294592 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.977186+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96215040 unmapped: 29089792 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.977493+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 29302784 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.977681+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 96690176 unmapped: 28614656 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.978073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.978336+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.978575+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.978816+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.979025+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.979218+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.979415+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.979700+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.980018+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.980570+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.980884+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.981113+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.981577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.981938+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.982522+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.982933+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.983136+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.983760+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.984188+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.985089+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.985501+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.985946+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.986507+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.987079+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.987516+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.988072+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x1cf75d6/0x1dce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218036 data_alloc: 234881024 data_used: 13426688
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 27402240 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.988697+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639690e72c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x563964c443c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563966c894a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 27385856 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.988889+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999400 session 0x5639656ea000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.553535461s of 43.824424744s, submitted: 47
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966a241e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x56396738f2c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x563965be63c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x563964ef74a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999000 session 0x563967491a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 27230208 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.989078+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 102834176 unmapped: 22470656 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.989347+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 22446080 heap: 125304832 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.989728+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315282 data_alloc: 234881024 data_used: 14438400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966ca7c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563966cb2f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x56396738fc20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f87a8000/0x0/0x4ffc00000, data 0x31f65ff/0x32ce000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.989937+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 23248896 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639686d2000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967fca780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.990336+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105537536 unmapped: 23969792 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.990720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 23724032 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.991163+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 23339008 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967fca000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7eba000/0x0/0x4ffc00000, data 0x3ae5638/0x3bbd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x5639674090e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.991959+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 23339008 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563967490000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485351 data_alloc: 234881024 data_used: 15548416
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x5639674081e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.992529+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105381888 unmapped: 24125440 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.992747+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.993109+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.993615+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 105529344 unmapped: 23977984 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563967013400 session 0x563966a24b40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.993911+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502228 data_alloc: 234881024 data_used: 18731008
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.994259+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.994703+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 21422080 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.994868+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108863488 unmapped: 20643840 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.995231+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 12419072 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7e9d000/0x0/0x4ffc00000, data 0x3b0865b/0x3be1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x417f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.995487+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 11485184 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575828 data_alloc: 251658240 data_used: 28422144
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563966a85c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998c00 session 0x563967409c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.995681+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 11485184 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.924734116s of 18.826061249s, submitted: 212
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563964ef41e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.995965+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 18358272 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.996248+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.996642+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.996893+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.997230+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.997623+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.997894+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.998246+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.998509+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.998840+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.999138+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.999490+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 18333696 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8175000/0x0/0x4ffc00000, data 0x31205f9/0x31f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.999900+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.000221+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430650 data_alloc: 234881024 data_used: 18112512
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.000452+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.332810402s of 15.400432587s, submitted: 16
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.000739+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8474000/0x0/0x4ffc00000, data 0x31215f9/0x31f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.001227+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.001455+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.001741+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431762 data_alloc: 234881024 data_used: 18120704
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.002014+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 18317312 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.002295+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8472000/0x0/0x4ffc00000, data 0x31225f9/0x31fa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.002609+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.002896+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.003204+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431142 data_alloc: 234881024 data_used: 18120704
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.003529+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.003846+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 18259968 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x5639690e65a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x5639673e8780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165545464s of 11.226758003s, submitted: 9
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.004040+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 14630912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x5639683e6400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x5639683e6400 session 0x56396738ed20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a2b000/0x0/0x4ffc00000, data 0x3b635f9/0x3c3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.004226+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 14630912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396699ac00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.004533+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 16236544 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521206 data_alloc: 234881024 data_used: 18313216
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.004768+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 16203776 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a04000/0x0/0x4ffc00000, data 0x3b925f9/0x3c6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.005035+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.005436+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7a04000/0x0/0x4ffc00000, data 0x3b925f9/0x3c6a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.005668+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.005868+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 16195584 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521774 data_alloc: 234881024 data_used: 18370560
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x5639674905a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x563965be6780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.006060+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 16678912 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964ef4f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.006309+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.006626+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.007008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8297000/0x0/0x4ffc00000, data 0x2da7587/0x2e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.007362+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8297000/0x0/0x4ffc00000, data 0x2da7587/0x2e7d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358462 data_alloc: 234881024 data_used: 12562432
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.007784+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.008256+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.231465340s of 14.594692230s, submitted: 100
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.008504+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8296000/0x0/0x4ffc00000, data 0x2da8587/0x2e7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.008785+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.009109+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358690 data_alloc: 234881024 data_used: 12562432
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.009451+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8296000/0x0/0x4ffc00000, data 0x2da8587/0x2e7e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.009811+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.010129+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.010544+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.010868+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563966c890e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563967409a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967514960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738e3c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 21241856 heap: 129507328 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698bc00 session 0x5639674083c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410492 data_alloc: 234881024 data_used: 12562432
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4f800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4f800 session 0x563966cb2f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x56396581eb40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.011196+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396581f4a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x56396738f680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.011602+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.011862+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 27705344 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.012107+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 27770880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x33b1597/0x3488000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.252738953s of 12.367694855s, submitted: 12
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698bc00 session 0x56396738ef00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x56396738f2c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.012515+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x56396738fc20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b37000/0x0/0x4ffc00000, data 0x3a60597/0x3b37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x56396738f860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738eb40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1461736 data_alloc: 234881024 data_used: 12562432
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.012721+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b37000/0x0/0x4ffc00000, data 0x3a60597/0x3b37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.013163+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 27205632 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563964ef4000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.013439+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.013809+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b36000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.014222+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 27197440 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463677 data_alloc: 234881024 data_used: 12566528
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.014638+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 27189248 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.014865+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999800 session 0x563967409e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 26615808 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563963576000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.015057+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.015359+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.015828+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b36000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507837 data_alloc: 234881024 data_used: 18718720
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.016147+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.345785141s of 11.441099167s, submitted: 13
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109322240 unmapped: 26484736 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.016515+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109355008 unmapped: 26451968 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.016817+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 26320896 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.017048+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 24870912 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.017309+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 24068096 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545725 data_alloc: 234881024 data_used: 23162880
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.017582+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 22740992 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.017968+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 22740992 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x3a605ba/0x3b38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.018287+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.018686+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.019095+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568515 data_alloc: 234881024 data_used: 24100864
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.019503+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b31000/0x0/0x4ffc00000, data 0x3a655ba/0x3b3d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.019975+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.020344+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113156096 unmapped: 22650880 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.020750+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b31000/0x0/0x4ffc00000, data 0x3a655ba/0x3b3d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 22642688 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.566030502s of 13.616303444s, submitted: 10
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.021131+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.021332+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568767 data_alloc: 234881024 data_used: 24100864
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.021507+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.021876+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.022100+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.022361+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x3a665ba/0x3b3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x56396738fe00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 22634496 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563966cb34a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563964ee9e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563964ee92c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.022662+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568767 data_alloc: 234881024 data_used: 24100864
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x563964ee83c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x563964ee9c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967514960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563967515c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563967514000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.022941+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.023364+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.023595+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.023797+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.023992+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607603 data_alloc: 234881024 data_used: 24100864
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f761b000/0x0/0x4ffc00000, data 0x3f7a5ca/0x4053000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 22437888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.024206+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d400 session 0x5639675141e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 22429696 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.024444+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639675154a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 22429696 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.024849+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f761b000/0x0/0x4ffc00000, data 0x3f7a5ca/0x4053000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 22421504 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.065951347s of 15.131469727s, submitted: 8
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.025055+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966998400 session 0x563967515a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 17645568 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.025272+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674438 data_alloc: 234881024 data_used: 24723456
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 17154048 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804c800 session 0x5639690e6d20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396699ac00 session 0x56396738e000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.025558+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 16695296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639675c8d20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.025753+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f6da1000/0x0/0x4ffc00000, data 0x3d5e5fd/0x3e39000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 16564224 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.026147+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 14352384 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.026439+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 13623296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.026737+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1618935 data_alloc: 234881024 data_used: 26521600
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 13623296 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563966999c00 session 0x563967514f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804d000 session 0x5639686d30e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.027068+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 16818176 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.027325+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 16809984 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.027542+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7d4b000/0x0/0x4ffc00000, data 0x384a5ba/0x3922000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 12681216 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.293957710s of 10.008902550s, submitted: 192
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.027744+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 11329536 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.027990+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1615324 data_alloc: 234881024 data_used: 23351296
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f7468000/0x0/0x4ffc00000, data 0x412d5ba/0x4205000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 11272192 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.028261+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 12214272 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.028658+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 12206080 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.028871+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 12206080 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f743c000/0x0/0x4ffc00000, data 0x415a5ba/0x4232000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.029239+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.029600+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619744 data_alloc: 234881024 data_used: 23511040
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f743c000/0x0/0x4ffc00000, data 0x415a5ba/0x4232000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.029972+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.030322+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 12197888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.030684+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563963576000 session 0x563964ef4f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563964e4e800 session 0x5639656d5680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123592704 unmapped: 12214272 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.030936+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.840319633s of 10.184956551s, submitted: 30
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698a400 session 0x5639690e6960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.031217+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448260 data_alloc: 234881024 data_used: 15392768
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83fb000/0x0/0x4ffc00000, data 0x319b5ba/0x3273000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.031714+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.032061+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.032329+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.032758+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.033228+0000)
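[editor's note] Every journald line in this stretch carries the same wall-clock stamp (`Oct 02 20:32:10`) while the `expire after` timestamps inside the monclient messages advance by roughly one second per tick: the journal evidently flushed a buffered burst, and the embedded timestamps are the reliable clock. A small sketch for recovering the real tick spacing from the message bodies (pattern assumed from the lines above):

```python
import re
from datetime import datetime

# Pull the embedded "expire after" timestamps out of _check_auth_rotating
# lines; their spacing reflects the real monclient tick interval even though
# every journald line above shares the same flush timestamp.
EXPIRE_RE = re.compile(r"they expire after (\S+)\)")

def tick_times(lines):
    for line in lines:
        if m := EXPIRE_RE.search(line):
            yield datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S.%f%z")

sample = [
    "monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.032758+0000)",
    "monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.033228+0000)",
]
a, b = tick_times(sample)
print((b - a).total_seconds())  # ~1.0005 s between ticks
```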
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
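[editor's note] The paired `commit_cache_size` ratios repeat verbatim on every resize pass and look like float renderings of simple integer ratios; a quick check confirms that reading (illustrative only; whether Ceph actually derives them as fractions is an inference):

```python
from fractions import Fraction

# Recover the simplest rationals consistent with the printed ratios.
for printed in ("0.285714", "0.0555556"):
    print(printed, "~=", Fraction(printed).limit_denominator(100))
# 0.285714 ~= 2/7, 0.0555556 ~= 1/18
```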
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448260 data_alloc: 234881024 data_used: 15392768
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f83fb000/0x0/0x4ffc00000, data 0x319b5ba/0x3273000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.033645+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396804cc00 session 0x5639656ead20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x563967013c00 session 0x563967408960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 18505728 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.033989+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698b800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 19365888 heap: 135806976 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.034343+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 ms_handle_reset con 0x56396698b800 session 0x563967515a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 36093952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.034743+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.779101372s of 10.012064934s, submitted: 33
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
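[editor's note] The `handle_osd_map` line is the one state change in this stretch: the OSD holds epoch 139, the monitor has [1,140], and the incremental batch [140,140] brings it to epoch 140; every subsequent line switches from `osd.2 139 ...` to `osd.2 140 ...`. A toy sketch of the arithmetic implied, with invented names:

```python
# Toy sketch of the epoch bookkeeping implied by the handle_osd_map line above.
# Names are invented for illustration; only the arithmetic comes from the log:
# an OSD at epoch 139 receiving the batch [140,140] applies exactly epoch 140.
def epochs_to_apply(have: int, first: int, last: int) -> range:
    """Epochs this OSD still needs, given an incremental batch [first, last]."""
    start = max(have + 1, first)
    return range(start, last + 1)

print(list(epochs_to_apply(have=139, first=140, last=140)))  # [140]
```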
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563966a25c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.034957+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416099 data_alloc: 234881024 data_used: 14102528
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8513000/0x0/0x4ffc00000, data 0x3080137/0x3159000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.035312+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 36077568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.035712+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.035994+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.036327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 36028416 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.036661+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415367 data_alloc: 234881024 data_used: 14098432
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8513000/0x0/0x4ffc00000, data 0x3081137/0x315a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 36020224 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.036938+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.037265+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 36020224 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x563964ef41e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563967013c00 session 0x563964291860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x5639656321e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x563965632780
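[editor's note] The alternating `handle_auth_request added challenge on 0x...` and `ms_handle_reset con 0x...` lines above show short-lived peer connections being challenged and torn down, with the same connection pointers (e.g. `0x56396698a400`) recurring as the messenger reuses them. For quantifying that churn offline, a throwaway counter keyed on the pointer works; the message shapes are assumed from the lines in this log:

```python
import re
from collections import Counter

# Count auth challenges and resets per connection pointer, using only the
# message shapes visible in this log. Useful for spotting connection churn.
CHALLENGE_RE = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

def churn_by_con(lines: list[str]) -> tuple[Counter, Counter]:
    challenges, resets = Counter(), Counter()
    for line in lines:
        if m := CHALLENGE_RE.search(line):
            challenges[m.group(1)] += 1
        if m := RESET_RE.search(line):
            resets[m.group(1)] += 1
    return challenges, resets

sample = [
    "monclient: handle_auth_request added challenge on 0x56396698a400",
    "osd.2 140 ms_handle_reset con 0x56396698a400 session 0x563964ef41e0",
]
print(churn_by_con(sample))
```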
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.037571+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 36118528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967408000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563964ef45a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639656d4f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.037803+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 36102144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.746902466s of 10.012772560s, submitted: 50
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x5639674914a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967013c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563967013c00 session 0x5639673e8960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x5639673e81e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639690e70e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.038232+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 42041344 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324498 data_alloc: 218103808 data_used: 7229440
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x5639656485a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.038731+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 34693120 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x563966cb3c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8b6c000/0x0/0x4ffc00000, data 0x2a2a166/0x2b02000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.039062+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 34037760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966999c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966999c00 session 0x563964ef6f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967490960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.039355+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 33488896 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639674901e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563964b04000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x563964ef4780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x5639673e9c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563964f4fc20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.040308+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563967408780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.040810+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8880000/0x0/0x4ffc00000, data 0x2d16166/0x2dee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639673e9680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376443 data_alloc: 234881024 data_used: 14053376
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.041189+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x563966a254a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998400 session 0x563966a250e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.042158+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966998c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 33480704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8880000/0x0/0x4ffc00000, data 0x2d16166/0x2dee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.042588+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 33800192 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563964f4e000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639673e8f00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.042774+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 34734080 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.598022461s of 10.010197639s, submitted: 60
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563967409a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.043094+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316803 data_alloc: 234881024 data_used: 14163968
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.043426+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.044285+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.044690+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.045000+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.045425+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.045841+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.046081+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.046459+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.046711+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.046988+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.047446+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.047781+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.048082+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.048491+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.048898+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.049308+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.049751+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.050083+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.050520+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.050692+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.051110+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.051617+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.052036+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.053279+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.053646+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.054019+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.054686+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 34619392 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.054912+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 34611200 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.055243+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.055661+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24ea104/0x25c1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328323 data_alloc: 234881024 data_used: 15785984
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.056094+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.056555+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.056926+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 34603008 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.057272+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.907386780s of 34.980773926s, submitted: 12
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 33161216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.057658+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 30949376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371919 data_alloc: 234881024 data_used: 15798272
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.058009+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8c0b000/0x0/0x4ffc00000, data 0x298c104/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.058331+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 31301632 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.058765+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121167872 unmapped: 31424512 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.059117+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.059588+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8ab0000/0x0/0x4ffc00000, data 0x2ae1104/0x2bb8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388143 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.059925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.060292+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.060720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.061001+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8ab0000/0x0/0x4ffc00000, data 0x2ae1104/0x2bb8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 31309824 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.061328+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.984402657s of 10.894448280s, submitted: 67
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.061670+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.061860+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.062217+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.062603+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.062975+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.063469+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120692736 unmapped: 31899648 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.063853+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.064249+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.064551+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa6000/0x0/0x4ffc00000, data 0x2af1104/0x2bc8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.064925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 31891456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386431 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.951239586s of 10.992996216s, submitted: 2
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.065274+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.065713+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.066124+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.066555+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.066801+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 31842304 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386587 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.067181+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.067548+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.067783+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.068145+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.068549+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386587 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.069021+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.069288+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.069630+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.070011+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.040291786s of 13.055529594s, submitted: 2
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.070283+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.070574+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.070921+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.071299+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.071488+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.071845+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.072298+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.072684+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.073112+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.073596+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.073991+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.074485+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386763 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.074862+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.075275+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 31825920 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.075702+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.354290009s of 15.360252380s, submitted: 1
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.076112+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 8697 writes, 34K keys, 8697 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8697 writes, 2105 syncs, 4.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1674 writes, 6961 keys, 1674 commit groups, 1.0 writes per commit group, ingest: 8.89 MB, 0.01 MB/s
                                            Interval WAL: 1674 writes, 643 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.076538+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.076912+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.077345+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.077693+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.078046+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.078568+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 31834112 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.078952+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: mgrc ms_handle_reset ms_handle_reset con 0x563966fe1c00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:32:10 compute-0 ceph-osd[208121]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563967013c00 auth_method 0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: mgrc handle_mgr_configure stats_period=5
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.079292+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.079619+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.079878+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.080200+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.080425+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.080844+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.081197+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.081573+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.081947+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.082211+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.082549+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.082804+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.083100+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.083440+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.083697+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.084073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.084493+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.084686+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.084996+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.085205+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 31735808 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.085658+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.085989+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.086263+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.086669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.087142+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.087718+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.088307+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.088706+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.089124+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.089566+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.089975+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.090551+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.090996+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.091485+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.091827+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.092236+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120864768 unmapped: 31727616 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.092671+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.092973+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.093526+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.093955+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.094486+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.094942+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.095296+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.095698+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.096119+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.096365+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.096698+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.096969+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 31719424 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.097771+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.098900+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.099181+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.099541+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.099766+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.100049+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387819 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8aa3000/0x0/0x4ffc00000, data 0x2af4104/0x2bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.100494+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.100926+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.250640869s of 64.275962830s, submitted: 8
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.101167+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 31170560 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x56396581e960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804c800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804c800 session 0x563966a25860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563966a25c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.101668+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x5639690e6000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x56396738e1e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x2d80104/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.102109+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x2d80104/0x2e57000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411848 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.102550+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.102863+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.103157+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x563964f4ef00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396699b400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396699b400 session 0x563964ef6960
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.103465+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563964ef7a40
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.103933+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413686 data_alloc: 234881024 data_used: 16306176
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698a400 session 0x563964ef74a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.104229+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.104534+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 31711232 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.105302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 31703040 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.106798+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 31670272 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.107870+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 31670272 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421818 data_alloc: 234881024 data_used: 17362944
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.108693+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x5639655a7000 session 0x5639690e7680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563967423000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.110033+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.110513+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121004032 unmapped: 31588352 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.110827+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.111112+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433178 data_alloc: 234881024 data_used: 18976768
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.111623+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.111935+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121012224 unmapped: 31580160 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.112214+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.379343033s of 20.399175644s, submitted: 16
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121020416 unmapped: 31571968 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.112651+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121077760 unmapped: 31514624 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.112902+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121077760 unmapped: 31514624 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.113127+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121110528 unmapped: 31481856 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.113487+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.113943+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.114511+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.114985+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.115242+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.115694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.116133+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.116418+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.116969+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.117195+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.117490+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.117709+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.117899+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.118192+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434442 data_alloc: 234881024 data_used: 19013632
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.118407+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.118641+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.118857+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8816000/0x0/0x4ffc00000, data 0x2d80114/0x2e58000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.119090+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.119551+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 121159680 unmapped: 31432704 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.754760742s of 22.446420670s, submitted: 108
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493018 data_alloc: 234881024 data_used: 19013632
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.119872+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124174336 unmapped: 28418048 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.120097+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124248064 unmapped: 28344320 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.120366+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x3716114/0x37ee000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.120598+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.120790+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.121017+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.121251+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.121534+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.121773+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.122008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.122244+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.122494+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.122733+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.122989+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.123368+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.123644+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.123913+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.124141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124256256 unmapped: 28336128 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.124367+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.124687+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.048749924s of 20.456701279s, submitted: 60
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.125036+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.125552+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.125815+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.126103+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.126590+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.127041+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.127441+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.127720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.127957+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.128356+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.128624+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.128849+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.129098+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.129413+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.129650+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.129941+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.130183+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.130823+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.131178+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.131663+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.131869+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.132199+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.132470+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.132766+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124264448 unmapped: 28327936 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.133076+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.133366+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.133676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.133961+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.134876+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.135272+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.135710+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.136046+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.136606+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.136982+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.137170+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.137553+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.137783+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.138093+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.138269+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.138504+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.138719+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.138919+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.139232+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.139636+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.139915+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.140277+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.141247+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.141563+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 28319744 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.141782+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.142143+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.142561+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.142880+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.143108+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.143306+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.143556+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.143906+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.144303+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.144673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.145046+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.145348+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.145723+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.146052+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:09.146297+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:10.146654+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 28311552 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:11.146902+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 28303360 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:12.150782+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124289024 unmapped: 28303360 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:13.151222+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:14.151672+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:15.152166+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:16.152653+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:17.153807+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:18.154351+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:19.154825+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:20.155277+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:21.155835+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:22.156284+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:23.156907+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:24.157239+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:25.157621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:26.157863+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124297216 unmapped: 28295168 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:27.158353+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:28.159030+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:29.159500+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:30.160191+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:31.160710+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:32.161012+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:33.161239+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:34.161895+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:35.162106+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:36.162640+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:37.163220+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:38.163515+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:39.163997+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:40.164629+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:41.165696+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:42.166691+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:43.167061+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:44.167521+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:45.167721+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:46.168140+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:47.168698+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:48.169034+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:49.169312+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 28286976 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:50.169669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:51.169910+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:52.170151+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:53.170824+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:54.171139+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:55.171361+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:56.171622+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:57.171855+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:58.172098+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:59.172564+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:00.172815+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:01.173101+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:02.173508+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:03.173705+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:04.173885+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:05.174106+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:06.174343+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124313600 unmapped: 28278784 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:07.174494+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:08.174692+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:09.174893+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:10.175107+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:11.175341+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd2400 session 0x563964d21860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781ec00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd2800 session 0x5639656d5e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781f000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966fd3800 session 0x5639656f1680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563966fd2800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:12.175509+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510212 data_alloc: 234881024 data_used: 19693568
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.175720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124321792 unmapped: 28270592 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.175940+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:15.176182+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:16.176632+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x371a114/0x37f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:17.176860+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511972 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:18.177173+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:19.178543+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 28172288 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 133.032196045s of 133.038803101s, submitted: 1
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.178910+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:21.179255+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:22.179635+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511888 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:23.179977+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:24.180274+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:25.180618+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:26.180925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:27.181225+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511888 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:28.181566+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:29.181925+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:30.182142+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:31.182579+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:32.182786+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512368 data_alloc: 234881024 data_used: 19898368
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:33.183100+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:34.183530+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:35.183857+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 28090368 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e65000/0x0/0x4ffc00000, data 0x3731114/0x3809000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.845728874s of 15.874156952s, submitted: 2
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.184361+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.184878+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:38.186782+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:39.187108+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:40.187350+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:41.187964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:42.188449+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:43.188690+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:44.188982+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:45.189336+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:46.189946+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.190188+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.190411+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.190605+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.190848+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.191075+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.191272+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512788 data_alloc: 234881024 data_used: 19898368
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.191506+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.191738+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.920186996s of 18.941226959s, submitted: 2
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.191962+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.192783+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.193189+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.194277+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.194677+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.194882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.195103+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.195585+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.195807+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.196008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.196239+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.196514+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.196753+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.196991+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.197268+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.197546+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.197824+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.198135+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.198445+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.198694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.198918+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.199244+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 27967488 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.199568+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.200055+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.200489+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.200945+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.201329+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.201670+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.201988+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.202349+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.202851+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.203281+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.203792+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.204177+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.204618+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.204941+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.205212+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.205598+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.206009+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.206359+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.206732+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.207233+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.207675+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.207964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.208342+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.208782+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.209151+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.209755+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.210262+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.210686+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.211083+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.211571+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.211952+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.212315+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.212565+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.213040+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.213589+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.214011+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.214492+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.215147+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.215579+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.216660+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.217071+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.217566+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.217900+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.218707+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.219252+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.219891+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.220275+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.220928+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.221504+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.222243+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.222672+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.223158+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.223549+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.223982+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.224566+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.224901+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.225207+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.225581+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.225959+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.226229+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.227091+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.228465+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.229496+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.231640+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.233315+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.234560+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.236096+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.238135+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.238958+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.239235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.239659+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.240085+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.240468+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.240795+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.241226+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.241756+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.242360+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.242877+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.243345+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.243858+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.244268+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.244674+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.245345+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.246001+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.246535+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.246765+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.247051+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.247428+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.247779+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.248173+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.248428+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.248717+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515588 data_alloc: 234881024 data_used: 19886080
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.282867432s of 114.314605713s, submitted: 15
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.249064+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.249489+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.249848+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.250251+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124796928 unmapped: 27795456 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.250513+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518308 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.251097+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.251312+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.251878+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.252161+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.252583+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.252787+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.253120+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.254501+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.254843+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 27787264 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.255231+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.255651+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.256103+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.256648+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.256959+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.257277+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 27779072 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.257818+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 28049408 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.258277+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 28049408 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.258767+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.259365+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.259956+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.260327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.260676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.261120+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.261606+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.262010+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.262507+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.262930+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.263295+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.263506+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.264020+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.264779+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.265115+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.265621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124551168 unmapped: 28041216 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.265927+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124559360 unmapped: 28033024 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.266250+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.266619+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.266953+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.267190+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.267459+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.267717+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.267930+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.268193+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.268473+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.268852+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.269171+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.269421+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.269747+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.269987+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.270163+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.272319+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.272704+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.273056+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.273434+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.273845+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.274146+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 28024832 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.274571+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.274855+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.275102+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.279465+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.279922+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.280333+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.280741+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.281203+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.281669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124575744 unmapped: 28016640 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.282119+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.282637+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.282966+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.283309+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.283649+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.284099+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.284548+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.285021+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 28008448 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.285510+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.285964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.286447+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.286881+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.287306+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.287679+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.287927+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.288282+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.288837+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.289268+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.289700+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.290173+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.290634+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.290956+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.291273+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.291687+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124592128 unmapped: 28000256 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.294247+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.295003+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.295582+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.295994+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.296667+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.297034+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.297471+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.297835+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124600320 unmapped: 27992064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.298141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124608512 unmapped: 27983872 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.298577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.299057+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.299642+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.299906+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.300304+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.300673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.301072+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.301605+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.302012+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.302360+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.302867+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.303214+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.303464+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.303684+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.303934+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.304337+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.304729+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.305043+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.305361+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.305649+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.305845+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.306048+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.306623+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 27975680 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.306928+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.307141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.307601+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.307987+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.308343+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.308830+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.309256+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.309692+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.310232+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.310448+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.310776+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.311170+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.311597+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.311850+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.312194+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.312595+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.313042+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 27959296 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.313752+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.314156+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.314694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.315002+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.315339+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.315855+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.316202+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.316474+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.316918+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.317262+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.317654+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.317922+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.318452+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.318870+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.319286+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.319861+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 27951104 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.320186+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.320741+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.321111+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.321530+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.321814+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.322179+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.322633+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.323128+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.323543+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.324012+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27942912 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.324427+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.324836+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.325476+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.325862+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.326206+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.326737+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 27934720 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.327506+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.328008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.328584+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.329054+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.329493+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.329773+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.330043+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.330301+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.330614+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.330945+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.331653+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.332088+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.332549+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.333055+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124665856 unmapped: 27926528 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.333354+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.334795+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.334992+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.335323+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9063 writes, 35K keys, 9063 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9063 writes, 2265 syncs, 4.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 366 writes, 1012 keys, 366 commit groups, 1.0 writes per commit group, ingest: 1.01 MB, 0.00 MB/s
                                            Interval WAL: 366 writes, 160 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.335591+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.335895+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.336184+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.336615+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124674048 unmapped: 27918336 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.336895+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.337331+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.337571+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.337983+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.338302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.338697+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.339111+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.339543+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.340052+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 27910144 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.340535+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.340928+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.341351+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.341867+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.342191+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.342654+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.342936+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.343235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.343475+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.343694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.343889+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.344259+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.344662+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.345019+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.345302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 27901952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.345754+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.346086+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 234881024 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.346539+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.346982+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.347462+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 27893760 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.347791+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.348259+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.348883+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.349257+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.349628+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 27885568 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.350125+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.350859+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.351248+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.351688+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.352005+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.352475+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.352875+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 27877376 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:46.353287+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.353664+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.353967+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.354366+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.354731+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.354993+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.355309+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.355624+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.355914+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.356142+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.356658+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.357074+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.357524+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.357741+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.358194+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 27869184 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515492 data_alloc: 218103808 data_used: 20148224
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396698bc00 session 0x563966a24000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563966998c00 session 0x5639674081e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f7e50000/0x0/0x4ffc00000, data 0x3746114/0x381e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.358656+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781e000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 124731392 unmapped: 27860992 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 252.500213623s of 252.522964478s, submitted: 2
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396781e000 session 0x563964291860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.359479+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.359949+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.360491+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.360877+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421904 data_alloc: 218103808 data_used: 17698816
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.361180+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.361615+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.361938+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.362300+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.362738+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421904 data_alloc: 218103808 data_used: 17698816
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.363122+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.363601+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.848731995s of 10.890914917s, submitted: 8
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804cc00 session 0x563966cb21e0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 29278208 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x56396804d000 session 0x5639656eb860
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f8772000/0x0/0x4ffc00000, data 0x2e24114/0x2efc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.363842+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 ms_handle_reset con 0x563964e4e800 session 0x563967403c20
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.364091+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.364558+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305401 data_alloc: 218103808 data_used: 14086144
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.364966+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.365316+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9399000/0x0/0x4ffc00000, data 0x21fe104/0x22d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.366095+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 heartbeat osd_stat(store_statfs(0x4f9399000/0x0/0x4ffc00000, data 0x21fe104/0x22d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.366728+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.367087+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305401 data_alloc: 218103808 data_used: 14086144
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.367900+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.368461+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.599736214s of 10.729805946s, submitted: 23
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.368794+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 32088064 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.369202+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f939a000/0x0/0x4ffc00000, data 0x21fe0e1/0x22d4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 42237952 heap: 152592384 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 141 ms_handle_reset con 0x56396698a400 session 0x563966cb3e00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698bc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.369880+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa005000/0x0/0x4ffc00000, data 0x158fce5/0x1669000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 50585600 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250608 data_alloc: 218103808 data_used: 2621440
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.370413+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 142 ms_handle_reset con 0x56396698bc00 session 0x56396738f4a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x563964e4e800
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 51306496 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.370681+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _renew_subs
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 51273728 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 143 ms_handle_reset con 0x563964e4e800 session 0x5639675145a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.371082+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.371497+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 50151424 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.371934+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203663 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.372307+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa001000/0x0/0x4ffc00000, data 0x159342c/0x166d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: get_auth_request con 0x563964c64000 auth_method 0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.372757+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.373709+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.374720+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.375229+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203663 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.376023+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9ffd000/0x0/0x4ffc00000, data 0x1594eab/0x1670000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.149994850s of 13.364699364s, submitted: 190
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.376604+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.377281+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.377746+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:40.378129+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:41.378628+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:42.379073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:43.379518+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:44.380053+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:45.380754+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:46.381279+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:47.381836+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:48.382227+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:49.382614+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:50.383017+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:51.383472+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:52.383826+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:53.384120+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:54.384557+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:55.384890+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:56.385602+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:57.385963+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:58.386364+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:59.386811+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:00.387256+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:01.387715+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:02.388151+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:03.388573+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:04.389167+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:05.389701+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:06.390157+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:07.390665+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:08.391011+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:09.391485+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:10.391847+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:11.392235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:12.392642+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:13.392987+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:14.393466+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:15.393881+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:16.394341+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:17.394563+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:18.394970+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:19.395327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:20.395707+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:21.396302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:22.396571+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:23.397946+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:24.398336+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:25.398695+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:26.399125+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:27.399508+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:28.399767+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:29.400266+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:30.400815+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:31.401208+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:32.403097+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:33.403980+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:34.404971+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:35.405547+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:36.406126+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:37.406641+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:38.407074+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:39.407872+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:40.409075+0000)
Oct 02 20:32:10 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15901 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:41.409577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:42.409830+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:43.410141+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:44.410556+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:45.410961+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:46.411304+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:47.411607+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:48.412004+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:49.412322+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:50.412592+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:51.413284+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:52.413560+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:53.413840+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:54.414053+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:55.414287+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:56.414548+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:57.414759+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 49938432 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:58.415013+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:59.415553+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 49856512 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:00.416509+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 38813696 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:01.416806+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:02.417064+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:03.417293+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:04.417607+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:05.417861+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:06.420517+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:07.420860+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:08.421079+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:09.421298+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:10.421538+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:11.421781+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:12.422036+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:13.422264+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:14.422694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:15.422967+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:16.423346+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:17.423676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:18.425306+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:19.425532+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:20.425980+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:21.426241+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:22.426503+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:23.426732+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:24.427596+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:25.427844+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:26.428190+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:27.428558+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:28.428801+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:29.429075+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:30.429321+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:31.429537+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:32.429739+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:33.429949+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:34.430199+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:35.430450+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:36.430866+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:37.431206+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:38.431462+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:39.431883+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:40.432072+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:41.432278+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:42.432544+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:43.432768+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:44.432964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:45.433304+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:46.433728+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:47.434039+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:48.434524+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:49.435070+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:50.435529+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:51.435881+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:52.436090+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:53.436575+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:54.436833+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:55.437173+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:56.437614+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:57.437885+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:58.438289+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:59.438769+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:00.439079+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:01.439305+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:02.439752+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:03.440174+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:04.440432+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:05.440886+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:06.441461+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:07.441946+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:08.442496+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:09.442818+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:10.443216+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:11.443596+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:12.444065+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:13.444518+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:14.444795+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:15.445225+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:16.445520+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:17.445790+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:18.446142+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:19.446577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:20.446971+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:21.447245+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:22.447500+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:23.447817+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:24.448070+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:25.448510+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:26.449071+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:27.449685+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:28.450080+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:29.450552+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:30.450784+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:31.451116+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:32.451771+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:33.452630+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:34.452972+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:35.454813+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:36.455531+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:37.456468+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:38.457047+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:39.457866+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:40.458505+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:41.458833+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:42.459232+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:43.461374+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:44.461995+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:45.462659+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:46.463103+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:47.463593+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:48.463780+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:49.464095+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:50.464633+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:51.464904+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:52.465478+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:53.465801+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:54.466123+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:55.466507+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:56.466838+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:57.467311+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:58.467726+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:59.468075+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:00.468705+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:01.469154+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:02.469630+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:03.470046+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:04.470343+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:05.470848+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:06.471165+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:07.471592+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:08.472028+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:09.472519+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:10.472882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:11.473252+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:12.473679+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:13.474089+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:14.474542+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:15.474844+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:16.475280+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:17.475598+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:18.475921+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:19.476297+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:20.476673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:21.476933+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:22.477264+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:23.477649+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:24.477970+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:25.478301+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:26.478672+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:27.479134+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:28.479650+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:29.480030+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:30.480304+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:31.480533+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:32.480889+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:33.481237+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:34.481600+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:35.481990+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:36.482316+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:37.482725+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:38.483154+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:39.483507+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:40.483891+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:41.484523+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:42.484934+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:43.485261+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 50208768 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:44.485643+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:45.485951+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:46.486529+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:47.486820+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:48.487199+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:49.487566+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:50.487978+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:51.488360+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:52.488859+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:53.489164+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:54.489598+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:55.489882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:56.490311+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:57.490666+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:58.491244+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:59.492589+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:00.492905+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:01.493110+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:02.493488+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:03.493893+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:04.494201+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:05.494457+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:06.494893+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:07.495159+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:08.495599+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:09.495910+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:10.496252+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:11.496582+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:12.496965+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:13.497593+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:14.498009+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:15.498482+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:16.498993+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:17.499534+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 ms_handle_reset con 0x563967423000 session 0x563964ee8780
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396698a400
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:18.499898+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:19.500297+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:20.500745+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:21.501269+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:22.501509+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:23.501882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:24.502271+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:25.502711+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:26.503260+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:27.503760+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:28.504088+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:29.504527+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:30.505111+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:31.505637+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:32.505951+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:33.506327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:34.506806+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:35.507269+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:36.507716+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:37.508229+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:38.508581+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:39.512110+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:40.513983+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:41.514557+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:42.515212+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:43.516221+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:44.517652+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:45.518327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:46.519726+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:47.520255+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:48.521204+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:49.521877+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:50.522616+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:51.523785+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:52.524476+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:53.525342+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:54.525718+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:55.526100+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:56.526867+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:57.527535+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:58.527755+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:59.528197+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:00.528669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:01.529045+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:02.529513+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:03.529869+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:04.530260+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:05.530687+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:06.531203+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:07.531712+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:08.532107+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:09.532536+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:10.532823+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:11.533112+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:12.533767+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:13.534060+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:14.534360+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:15.534722+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:16.535138+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:17.535531+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:18.535882+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:19.536218+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:20.536666+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:21.537105+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:22.537539+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:23.537796+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:24.537995+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:25.538512+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:26.538857+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 50200576 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:27.539132+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:28.539595+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:29.539997+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:30.540210+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:31.540671+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:32.541156+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:33.541604+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:34.541970+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:35.542280+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:36.542767+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:37.543073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:38.543505+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:39.543942+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:40.544497+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:41.544824+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:42.545302+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:43.545705+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:44.546019+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:45.546524+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:46.546904+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:47.547241+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:48.547622+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 50192384 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:49.548017+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:50.548471+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:51.548811+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:52.549267+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:53.549629+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:54.549953+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:55.550299+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:56.550698+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:57.551033+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:58.551362+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 50184192 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:59.551755+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:00.552098+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:01.552555+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:02.552757+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:03.553592+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:04.553964+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:05.554202+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:06.554479+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:07.554895+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:08.555700+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:09.555913+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:10.556236+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:11.556656+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:12.556871+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:13.557196+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:14.557589+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:15.557930+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:16.558301+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:17.558666+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:18.559123+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:19.559506+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:20.559817+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:21.560132+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 50176000 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:22.560426+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:23.560716+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:24.561102+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:25.561301+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:26.561525+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:27.561887+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:28.562104+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:29.562621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:30.562977+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:31.563318+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:32.563634+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:33.563948+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:34.564235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:35.564587+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:36.565037+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:37.565471+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:38.565903+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 50167808 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:39.566267+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:40.566647+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:41.567073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:42.567436+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:43.567861+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:44.568234+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:45.568596+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:46.568995+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:47.569469+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:48.569782+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:49.570138+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:50.570574+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:51.571541+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:52.571896+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:53.572829+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110829568 unmapped: 50159616 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:54.573613+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 50151424 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:55.574525+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 50151424 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:56.574943+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 50151424 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:57.575511+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:58.575783+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:59.576531+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:00.576800+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:01.577114+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:02.577690+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:03.578305+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:04.578881+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:05.579099+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:06.579832+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:07.580676+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:08.589774+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:09.590033+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:10.590594+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 50143232 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 ms_handle_reset con 0x56396781ec00 session 0x563964ef45a0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804cc00
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 ms_handle_reset con 0x56396781f000 session 0x563966a25680
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396804d000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:11.590986+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 ms_handle_reset con 0x563966fd2800 session 0x5639656483c0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: handle_auth_request added challenge on 0x56396781f000
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:12.591314+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:13.591883+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:14.592339+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:15.592742+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:16.593182+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:17.593694+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 50135040 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:18.594158+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:19.594555+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:20.594879+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:21.595482+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:22.595790+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:23.596021+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:24.596483+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:25.596851+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:26.597155+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:27.597669+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:28.598369+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:29.598632+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:30.599034+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:31.599667+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:32.600134+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110862336 unmapped: 50126848 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:33.600615+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:34.601152+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:35.601469+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:36.601878+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:37.602248+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:38.602594+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:39.602972+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:40.603449+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:41.603913+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:42.604317+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:43.604674+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:44.605030+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:45.605552+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:46.606125+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:47.606664+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:48.607176+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:49.607655+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:50.607871+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 50110464 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:51.608347+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 50110464 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:52.608843+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 50110464 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:53.609235+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:54.609534+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:55.609942+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:56.610281+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:57.610560+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:58.610777+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:59.611090+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:00.611662+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 9495 writes, 36K keys, 9495 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9495 writes, 2461 syncs, 3.86 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 432 writes, 1184 keys, 432 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                            Interval WAL: 432 writes, 196 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:01.611871+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:02.612174+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:03.612505+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:04.612862+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:05.613113+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:06.613540+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:07.614052+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:08.614509+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:09.614745+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:10.615333+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:11.615973+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:12.616572+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:13.616844+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:14.617260+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:15.617679+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:16.617984+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:17.618335+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:18.618791+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:19.619149+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:20.619444+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 50094080 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:21.619814+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:22.620222+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:23.620506+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:24.620814+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:25.621144+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:26.621484+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:27.621762+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:28.621940+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:29.622162+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110903296 unmapped: 50085888 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:30.622668+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 50069504 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:31.622903+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:32.623131+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:33.623450+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:34.623716+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:35.624060+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:36.624322+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:37.624543+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:38.624950+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:39.625346+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:40.625774+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:41.626015+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:42.626472+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:43.626729+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:44.627056+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:45.627295+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:46.627673+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 50053120 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:47.627970+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:48.628454+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:49.628786+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:50.629132+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:51.629525+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:52.629776+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:53.630042+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:54.630361+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:55.631114+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:56.631679+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:57.632138+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:58.632419+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:59.632778+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:00.633106+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:01.633590+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:02.633994+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110944256 unmapped: 50044928 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:03.634437+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:04.635680+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:05.636363+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:06.637073+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:07.637527+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:08.638017+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:09.638580+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:10.638933+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:11.639187+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 50036736 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:12.639577+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:13.639914+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:14.640208+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:15.640621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:16.640924+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:17.641520+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:18.641975+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:19.642193+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:20.642524+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:21.642946+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:22.643344+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:23.643735+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:24.644118+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206637 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 50028544 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:25.644299+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:26.644686+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 589.926635742s of 589.951538086s, submitted: 15
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:27.645082+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffa000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,1])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:28.645530+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:29.645726+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205829 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffb000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:30.646074+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 50020352 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:31.646359+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffb000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 49987584 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:32.647776+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111001600 unmapped: 49987584 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:33.648130+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffb000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111009792 unmapped: 49979392 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:34.648327+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:10 compute-0 ceph-osd[208121]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore.MempoolThread(0x56396358bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205757 data_alloc: 218103808 data_used: 2637824
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 49954816 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:35.648621+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 111034368 unmapped: 49954816 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:36.649008+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.208870411s of 10.318965912s, submitted: 58
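The _kv_sync_thread line reports idle time over a sampling window, so the busy fraction falls out directly:

idle, window, submitted = 4.208870411, 10.318965912, 58
print(f"kv sync thread busy {1 - idle / window:.1%}, "
      f"~{submitted / window:.1f} transactions submitted/s")

i.e. roughly 59% busy at ~5.6 submits/s during this window (the later osd.1 samples show the same thread nearly idle).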
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 50110464 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:37.649211+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 50118656 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:38.649596+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffb000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: osd.2 145 heartbeat osd_stat(store_statfs(0x4f9ffb000/0x0/0x4ffc00000, data 0x159690e/0x1673000, compress 0x0/0x0/0x0, omap 0x639, meta 0x458f9c7), peers [0,1] op hist [])
Oct 02 20:32:10 compute-0 ceph-osd[208121]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 50102272 heap: 160989184 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: tick
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_tickets
Oct 02 20:32:10 compute-0 ceph-osd[208121]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:39.650607+0000)
Oct 02 20:32:10 compute-0 ceph-osd[208121]: do_command 'log dump' '{prefix=log dump}'
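The do_command entries above are the OSD servicing its admin socket. A hedged sketch of issuing the same commands via ceph daemon (e.g. from the cephadm shell on this host); the reply for these commands is JSON:

import json, subprocess

def osd_daemon_command(osd_id: int, *cmd: str) -> object:
    # Equivalent of the logged admin-socket calls, e.g. ("config", "diff")
    # or ("counter", "dump").
    out = subprocess.run(
        ["ceph", "daemon", f"osd.{osd_id}", *cmd],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# diff     = osd_daemon_command(2, "config", "diff")
# counters = osd_daemon_command(2, "counter", "dump")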
Oct 02 20:32:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 20:32:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073113942' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:32:10 compute-0 ceph-mon[191910]: pgmap v2627: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Oct 02 20:32:10 compute-0 ceph-mon[191910]: from='client.15889 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1316137830' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 20:32:10 compute-0 ceph-mon[191910]: from='client.15895 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:10 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/635963952' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 20:32:10 compute-0 nova_compute[355794]: 2025-10-02 20:32:10.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:10 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Oct 02 20:32:10 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15905 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:10 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 20:32:10 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545217026' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:32:11 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:32:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:11 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15909 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 20:32:11 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608400188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: from='client.15898 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: from='client.15901 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3073113942' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1545217026' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2608400188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 20:32:11 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15913 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 20:32:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829268422' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15917 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Oct 02 20:32:12 compute-0 ceph-mon[191910]: pgmap v2628: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Oct 02 20:32:12 compute-0 ceph-mon[191910]: from='client.15905 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mon[191910]: from='client.15909 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mon[191910]: from='client.15913 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/829268422' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mon[191910]: from='client.15917 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:12 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15921 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:12 compute-0 podman[499297]: 2025-10-02 20:32:12.761671103 +0000 UTC m=+0.165719938 container health_status c3fcaca71939d1fabab2b26b53f35ae1b254594e7cf704606443003d0af40acc (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, release=1755695350, distribution-scope=public)
Oct 02 20:32:12 compute-0 podman[499296]: 2025-10-02 20:32:12.780528213 +0000 UTC m=+0.202443962 container health_status a6facd8726985c12436c920fbe61b78b5934b0aead34492da7842640cb6a7b92 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:32:12 compute-0 podman[499307]: 2025-10-02 20:32:12.783183882 +0000 UTC m=+0.170184144 container health_status fa10de30653e69dd7244be75691fedc893ba1311916307d66df27d13ba29ede2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:32:12 compute-0 podman[499295]: 2025-10-02 20:32:12.791153489 +0000 UTC m=+0.196368904 container health_status 6578a30dc39c186a7ae83a70f27ac82ffcfda67f90f7e831da5cd3fb5f00ff19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 20:32:12 compute-0 podman[499298]: 2025-10-02 20:32:12.806944869 +0000 UTC m=+0.217086292 container health_status daccf08b26b2121f0c488ff6118db37ea02bc978ac39fdbbccb460da7167ef97 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
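The podman health_status events above come from the per-container healthcheck timers (config_data shows each container's healthcheck test and mount). A hedged way to read the current status back; the key path varies across podman versions (.State.Health vs .State.Healthcheck), so both are tried here:

import json, subprocess

def container_health(name: str) -> str:
    data = json.loads(subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout)[0]
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

# container_health("ovn_controller")  -> "healthy", matching the event above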
Oct 02 20:32:12 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 20:32:12 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867977797' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15923 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:13 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 20:32:13 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663648630' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 20:32:13 compute-0 nova_compute[355794]: 2025-10-02 20:32:13.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] _maybe_adjust
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:13 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
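The autoscaler's "pg target" figures are reproducible from the logged inputs as capacity ratio x bias x a PG budget. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs in this cluster, the budget is 300, which matches every pool above to the printed precision:

PG_BUDGET = 100 * 3   # assumed: mon_target_pg_per_osd (default 100) x 3 OSDs

def pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * PG_BUDGET

print(pg_target(0.0005513950275118838, 1.0))  # 'vms': ~0.165418..., as logged
print(pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta': ~0.000610...

The "quantized" step then rounds to a power of two and applies per-pool floors, which appears to be why a target of ~0.0006 still comes out as 16 for the CephFS metadata pool (its pg_num_min) while tiny targets elsewhere stay at the current 32.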
Oct 02 20:32:13 compute-0 ceph-mon[191910]: pgmap v2629: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Oct 02 20:32:13 compute-0 ceph-mon[191910]: from='client.15921 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:13 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/867977797' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 20:32:13 compute-0 ceph-mon[191910]: from='client.15923 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:13 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3663648630' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15931 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mgr[192222]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 02 20:32:14 compute-0 ceph-6019f664-a1c2-5955-8391-692cb79a59f9-mgr-compute-0-uktbkz[192218]: 2025-10-02T20:32:14.085+0000 7f5cb7835640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
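The mgr reply above names its own fix: 'healthcheck history ls' is served by the prometheus mgr module, which is not loaded here. Enabling it (shown via subprocess for consistency; the command itself is quoted verbatim in the error text):

import subprocess

subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)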
Oct 02 20:32:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 20:32:14 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838358614' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 20:32:14 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476366584' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 20:32:14 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 20:32:14 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206292727' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mon[191910]: from='client.15931 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1838358614' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 20:32:14 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/476366584' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 20:32:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673095376' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:02.033682+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa1b3000/0x0/0x4ffc00000, data 0x13f4111/0x14ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b446000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104120320 unmapped: 14467072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:03.034200+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104120320 unmapped: 14467072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:04.034613+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147049 data_alloc: 218103808 data_used: 13557760
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.035096+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.035622+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 14385152 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.036324+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa1b2000/0x0/0x4ffc00000, data 0x13f4144/0x14bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 132 ms_handle_reset con 0x563e2b446000 session 0x563e2c5ed860
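The handle_osd_map lines show the epoch catch-up arithmetic: the OSD holds epoch 131, the incoming message carries maps [131,132], and only the missing ones get applied; the subsequent lines indeed show osd.1 at 132 and then stepping to 133 and 134. A minimal sketch of that range logic:

def epochs_to_apply(have: int, msg_first: int, msg_last: int) -> range:
    # Apply only maps newer than the newest epoch already held.
    first = max(have + 1, msg_first)
    return range(first, msg_last + 1)

print(list(epochs_to_apply(131, 131, 132)))   # -> [132]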
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.036789+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.037287+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152942 data_alloc: 218103808 data_used: 13565952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.037712+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa1ad000/0x0/0x4ffc00000, data 0x13f5ce4/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.038154+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.038662+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa1ad000/0x0/0x4ffc00000, data 0x13f5ce4/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.039112+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.039592+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152942 data_alloc: 218103808 data_used: 13565952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104235008 unmapped: 14352384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.040027+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.238119125s of 14.359631538s, submitted: 30
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 14311424 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.040505+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 ms_handle_reset con 0x563e2c501400 session 0x563e2c5ecb40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.040901+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.041280+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.041899+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153873 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.042311+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.042548+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.043177+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.043689+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.059077+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153873 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.059561+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104325120 unmapped: 14262272 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.059867+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 ms_handle_reset con 0x563e2c4f4400 session 0x563e2b0deb40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.979585648s of 11.122095108s, submitted: 27
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ac000/0x0/0x4ffc00000, data 0x13f785f/0x14c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.060116+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.060453+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.060760+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.061095+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.061547+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.061757+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.061944+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.062315+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.062633+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.063075+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.063636+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.063930+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.064458+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.064903+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.065203+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 14245888 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.065630+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 14237696 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.066137+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 14237696 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.066605+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.066867+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.067284+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.067568+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.067860+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.068146+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.068601+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.069324+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.069838+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.070757+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8518 writes, 32K keys, 8518 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8518 writes, 1987 syncs, 4.29 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 891 writes, 2340 keys, 891 commit groups, 1.0 writes per commit group, ingest: 1.49 MB, 0.00 MB/s
                                            Interval WAL: 891 writes, 405 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.071287+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.072098+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.073092+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.074066+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.074637+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.075713+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.075993+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.077542+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.078936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.079957+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.080581+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.081091+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.081517+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.082205+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.082550+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.082974+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.083250+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.083528+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.083936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.084541+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.084977+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 14229504 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.085500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.085907+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.086252+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.086592+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.087194+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.087849+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.088280+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.088706+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.089038+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.089655+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.090043+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.090465+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.090934+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.091317+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.092128+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.092637+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.093050+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.093739+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.094189+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.094598+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.094929+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.095298+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.095832+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.096204+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.096704+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.097102+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.097557+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.097962+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.098303+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.098702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.099012+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.099457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.099870+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.100323+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.100873+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.101320+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.101702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.102116+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.103473+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.104719+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.106053+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.106600+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.107651+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.107983+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.108898+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.109619+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.110351+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.111049+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.111345+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.111877+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.112558+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.113064+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.113491+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.113965+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.114724+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.115324+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.115859+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.116502+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.116957+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.117290+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.117691+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.118208+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.118802+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.119278+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.119768+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.120061+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.120492+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.121039+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.121638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.122028+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1aa000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 14221312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.122525+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155746 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 119.220855713s of 119.241088867s, submitted: 14
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 14172160 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.123775+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104431616 unmapped: 14155776 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.124473+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104472576 unmapped: 14114816 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.124681+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.124961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.125556+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.125889+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.126312+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.126634+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.126945+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.127266+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.128875+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.129290+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.129729+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.130161+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.130658+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:41.131114+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:42.131704+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:43.132166+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:44.132596+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:45.133094+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:46.133620+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:47.133936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:48.134303+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:49.134890+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:50.135222+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:51.135632+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:52.135996+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:53.138768+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:54.139198+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:55.139483+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:56.139774+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:57.139995+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:58.141017+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:59.141590+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:00.143678+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:01.145314+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:02.145814+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:03.146346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:04.146654+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:05.147063+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:06.147354+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:07.147726+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:08.148325+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:09.148900+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:10.149300+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:11.149753+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:12.150186+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:13.150654+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:14.151032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:15.151537+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:16.152002+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:17.152473+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:18.152773+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:19.153193+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:20.153650+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:21.154058+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:22.154547+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:23.154858+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:24.155288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:25.155789+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:26.156218+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:27.156672+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:28.157132+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:29.157635+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:30.158610+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:31.159160+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:32.159588+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:33.159908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:34.160347+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:35.161024+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:36.161620+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:37.162190+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:38.162622+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:39.163095+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:40.163619+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:41.163947+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:42.164498+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:43.164956+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:44.165688+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:45.166035+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:46.166798+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:47.167158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:48.167608+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104480768 unmapped: 14106624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:49.168065+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:50.168511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:51.168940+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:52.169324+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.169793+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.170062+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.170786+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:56.171226+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:57.171546+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:58.171785+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:59.172158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:00.172641+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:01.172979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:02.173196+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:03.173600+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:04.174128+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:05.174760+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:06.175196+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:07.175582+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.176132+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.177839+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.178309+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:11.178654+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:12.179496+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104488960 unmapped: 14098432 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.180214+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.180725+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.181256+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.181678+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.182070+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.183121+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.183697+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.184081+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.184499+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.184777+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.185200+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.185600+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.185989+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.186756+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.187107+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.187601+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.188083+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.188744+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.189183+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.189658+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.190197+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.190728+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.191137+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.191504+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.192034+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.192299+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.192804+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.193197+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.193580+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.193957+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.194364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.194827+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.195170+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.195615+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.196015+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.196511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.196911+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.197288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.197837+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.198298+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.198723+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.199169+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.199616+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.200044+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.200912+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.201174+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.201632+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.202046+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.202527+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.202845+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.203095+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.203327+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.203602+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.203941+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154866 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.204318+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.204748+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.205267+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 14090240 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 163.753616333s of 164.341629028s, submitted: 90
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.205686+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 14073856 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa1ab000/0x0/0x4ffc00000, data 0x13f92c2/0x14c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.206120+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 22421504 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244277 data_alloc: 218103808 data_used: 13574144
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 135 ms_handle_reset con 0x563e2b445c00 session 0x563e29830b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.206488+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104570880 unmapped: 22413312 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9535000/0x0/0x4ffc00000, data 0x206ae72/0x2138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.206837+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 22364160 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 ms_handle_reset con 0x563e2c50e400 session 0x563e2a59fa40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d34000/0x0/0x4ffc00000, data 0x286ae82/0x2939000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.207083+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.207683+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.208041+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 22331392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.208838+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.209265+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.209787+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.210207+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.210670+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.211025+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.211471+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.212042+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.212553+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104660992 unmapped: 22323200 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.212999+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 22315008 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.213318+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 22306816 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.213695+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 22306816 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.214162+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.214590+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.214853+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.215209+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.215617+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.216003+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.216346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.216741+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.217125+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.217575+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.218019+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.220574+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.220972+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.221275+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.221663+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.222012+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.222677+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.223049+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.223602+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.223958+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.224431+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.224843+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.225247+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309399 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.225477+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.225818+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.226186+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.226609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8d30000/0x0/0x4ffc00000, data 0x286ca22/0x293d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.226942+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 22298624 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 46.658763885s of 46.857627869s, submitted: 20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313045 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2c4f0800 session 0x563e2a0eb2c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.227310+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50d000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2c50d000 session 0x563e2a0eaf00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 137 ms_handle_reset con 0x563e2b445c00 session 0x563e2a0ead20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 22265856 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.227739+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 22265856 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8d2c000/0x0/0x4ffc00000, data 0x286e59f/0x2940000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.228256+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 138 ms_handle_reset con 0x563e2c4f0800 session 0x563e2a0eab40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.228618+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.229123+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315171 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.229525+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.229977+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8d2a000/0x0/0x4ffc00000, data 0x2870170/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.230472+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 21151744 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8d2a000/0x0/0x4ffc00000, data 0x2870170/0x2943000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.230937+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.231281+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318145 data_alloc: 218103808 data_used: 13582336
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.231702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e2b10bc20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5cc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 21135360 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bd5cc00 session 0x563e2a4f3a40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.232112+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 10043392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d27000/0x0/0x4ffc00000, data 0x2871bd3/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.232498+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 10043392 heap: 126984192 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5b800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.062339783s of 13.161125183s, submitted: 28
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bd5b800 session 0x563e2c5205a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b445c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.232805+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f8d27000/0x0/0x4ffc00000, data 0x2871bd3/0x2946000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b445c00 session 0x563e28ccd860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 13967360 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.233166+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 13967360 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446005 data_alloc: 234881024 data_used: 25051136
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.233637+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f821f000/0x0/0x4ffc00000, data 0x337abd3/0x344f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.234032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.234588+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.235055+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.235260+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae400 session 0x563e2b0dfc20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f821f000/0x0/0x4ffc00000, data 0x337abd3/0x344f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446005 data_alloc: 234881024 data_used: 25051136
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.235644+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2a2e0400 session 0x563e2a0e9680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 13959168 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b429800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b429800 session 0x563e2a0e8960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.235977+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c505400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c505400 session 0x563e2a0e81e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 14311424 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2a2e0400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b429800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.236206+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 14311424 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.236593+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 14303232 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.236969+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 14237696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1459997 data_alloc: 234881024 data_used: 26419200
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.237202+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 12804096 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.237451+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 11026432 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.237714+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.237979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.238215+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.238659+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.238854+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.239156+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.239538+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.239752+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.240041+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.240462+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.240708+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.241143+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.241565+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.241942+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123076608 unmapped: 8110080 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.242188+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.242499+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.242726+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.243014+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.243323+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.243546+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.243957+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123092992 unmapped: 8093696 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.244236+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.244522+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523997 data_alloc: 234881024 data_used: 35049472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.244776+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.245129+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.245529+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f81f4000/0x0/0x4ffc00000, data 0x33a4be3/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.245829+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2a0e9680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2a23da40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2c5ed4a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 123101184 unmapped: 8085504 heap: 131186688 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2a23f4a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c502800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.027656555s of 41.223041534s, submitted: 22
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.246026+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c502800 session 0x563e297bd860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e296ab860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c026000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2c6c61e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2c5bfe00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 124338176 unmapped: 17432576 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.246251+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1641312 data_alloc: 234881024 data_used: 35065856
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f750a000/0x0/0x4ffc00000, data 0x408cc55/0x4164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,5])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129556480 unmapped: 12214272 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.246501+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129556480 unmapped: 12214272 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.246733+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f0c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f0c00 session 0x563e2c5ec960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2d604b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c6701e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 11632640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2a0eb860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2d6041e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.246904+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x5263c55/0x533b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 12222464 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.247242+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 13082624 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.247871+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1782029 data_alloc: 234881024 data_used: 35516416
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6325000/0x0/0x4ffc00000, data 0x526fc55/0x5347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c507800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c507800 session 0x563e2a818780
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.248074+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2c518000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.248597+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c6c7a40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c506c00 session 0x563e2c6b74a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.248979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c507800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50a400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128794624 unmapped: 12976128 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.249218+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2c5ef2c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 12943360 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.260210+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1787707 data_alloc: 234881024 data_used: 35520512
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c509400 session 0x563e2c6705a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6316000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 13000704 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c509400 session 0x563e2c3c34a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.260508+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29824800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.324402809s of 12.147669792s, submitted: 194
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29824800 session 0x563e2c6c6780
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 13115392 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.260790+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 133103616 unmapped: 8667136 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.260934+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 5521408 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.266360+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.266607+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1854029 data_alloc: 251658240 data_used: 45158400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631d000/0x0/0x4ffc00000, data 0x5277c75/0x5351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.266811+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136257536 unmapped: 5513216 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.267074+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e29777860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f4800 session 0x563e2c5210e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 5505024 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.267258+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503400 session 0x563e2c0274a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.267606+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.267878+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 5488640 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.268142+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.268634+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.269026+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 5480448 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.269484+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.269869+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.270191+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.270587+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.270834+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 5472256 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.271218+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 5464064 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.271564+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:23.271844+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.272159+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.272541+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.272912+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.273173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852228 data_alloc: 251658240 data_used: 45158400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.273656+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.274073+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 5455872 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.274472+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136323072 unmapped: 5447680 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.274900+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 5439488 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.639814377s of 28.783666611s, submitted: 29
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.275185+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 5120000 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1855220 data_alloc: 251658240 data_used: 45146112
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.275495+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.275676+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.275891+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 5087232 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.276078+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 5054464 heap: 141770752 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f631e000/0x0/0x4ffc00000, data 0x5277c65/0x5350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e29791860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b429800 session 0x563e2a4f6960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.276282+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144416768 unmapped: 1548288 heap: 145965056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b410c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b410c00 session 0x563e2c5ec3c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1920636 data_alloc: 251658240 data_used: 46415872
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.276515+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144482304 unmapped: 1482752 heap: 145965056 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.276733+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5af5000/0x0/0x4ffc00000, data 0x5a9dc65/0x5b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143155200 unmapped: 3858432 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.276930+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143310848 unmapped: 3702784 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.277366+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143507456 unmapped: 3506176 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.277684+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143556608 unmapped: 3457024 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1935816 data_alloc: 251658240 data_used: 47112192
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5ab3000/0x0/0x4ffc00000, data 0x5ad9c65/0x5bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.278001+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 3416064 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.278234+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 3416064 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5ab3000/0x0/0x4ffc00000, data 0x5ad9c65/0x5bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2a2e0400 session 0x563e29777e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.278441+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143605760 unmapped: 3407872 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.809200287s of 13.237763405s, submitted: 139
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae800 session 0x563e2a6ae5a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.278588+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.278794+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755600 data_alloc: 251658240 data_used: 41615360
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.279022+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x48efc65/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.279441+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.279797+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.280073+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x48efc65/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.280311+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755844 data_alloc: 251658240 data_used: 41615360
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.280525+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.280848+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6c9d000/0x0/0x4ffc00000, data 0x48f8c65/0x49d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.281182+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.281584+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 6258688 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.281968+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140763136 unmapped: 6250496 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.015203476s of 12.132278442s, submitted: 25
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1755920 data_alloc: 251658240 data_used: 41615360
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.282264+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 6791168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50b800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50b800 session 0x563e2c303680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29721c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29721c00 session 0x563e298a7680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b447000 session 0x563e28ccd860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.282560+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50e400 session 0x563e2c026960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 6774784 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c505000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6c9a000/0x0/0x4ffc00000, data 0x48fbc65/0x49d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1,2])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c505000 session 0x563e2a23fa40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29721c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29721c00 session 0x563e2a4f30e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b447000 session 0x563e2a0e81e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50b800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50b800 session 0x563e2a0e9e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c50e400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50e400 session 0x563e2a59fc20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.282858+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.283043+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.283351+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792350 data_alloc: 251658240 data_used: 41615360
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.283681+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141352960 unmapped: 9871360 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f2400 session 0x563e29816d20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.284025+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.284260+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.284500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.284704+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1845812 data_alloc: 251658240 data_used: 41615360
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.284908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2c6dde00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.285158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2c519680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141312000 unmapped: 14114816 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.276736259s of 12.543600082s, submitted: 51
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.285435+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 14327808 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f623b000/0x0/0x4ffc00000, data 0x5359cc7/0x5433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e29777680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e2b0df4a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.285587+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142548992 unmapped: 12877824 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.285794+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 12165120 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1878755 data_alloc: 251658240 data_used: 45379584
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.286009+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 12165120 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.286322+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 12156928 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.287284+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143269888 unmapped: 12156928 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.287462+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142630912 unmapped: 12795904 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.287674+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfae400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142835712 unmapped: 12591104 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1890499 data_alloc: 251658240 data_used: 46137344
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.287886+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 143106048 unmapped: 12320768 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.288180+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 9453568 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.288423+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 6979584 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.288772+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 6979584 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734128952s of 11.828881264s, submitted: 30
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.288986+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936515 data_alloc: 251658240 data_used: 52989952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.289163+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.289366+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.289643+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.289848+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.290078+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 6627328 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936515 data_alloc: 251658240 data_used: 52989952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.290329+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 6594560 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.290860+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 6594560 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.291187+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.291474+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.291722+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6216000/0x0/0x4ffc00000, data 0x537dcd7/0x5458000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1936163 data_alloc: 251658240 data_used: 52989952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.292088+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.292544+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148840448 unmapped: 6586368 heap: 155426816 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29825400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.136129379s of 13.184433937s, submitted: 6
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.292794+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e29825400 session 0x563e2ba694a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e29777860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e29776b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e29791860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2bec8b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 12099584 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5e0f000/0x0/0x4ffc00000, data 0x5783d00/0x585f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.293006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 12648448 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.293298+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 12648448 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2007891 data_alloc: 251658240 data_used: 52989952
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.293569+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 12640256 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.294062+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 12632064 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.294536+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 12632064 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.294737+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b4ce400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b4ce400 session 0x563e2a0bab40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.295025+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b4ce400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b4ce400 session 0x563e2c519a40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f5982000/0x0/0x4ffc00000, data 0x5c10d39/0x5cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2010003 data_alloc: 251658240 data_used: 52977664
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.295504+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148717568 unmapped: 12541952 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1400 session 0x563e2b9b5c20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c501400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c501400 session 0x563e299e1e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.295705+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c503000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c508c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 10715136 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.295959+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149929984 unmapped: 11329536 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503c00 session 0x563e2c303c20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508400 session 0x563e2d6052c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f1000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.684912682s of 11.138339996s, submitted: 92
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.296281+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c4f1000 session 0x563e29777860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 14548992 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.296597+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146833408 unmapped: 14426112 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f63e3000/0x0/0x4ffc00000, data 0x50fad29/0x51d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1897107 data_alloc: 251658240 data_used: 49094656
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.296937+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 12812288 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.297276+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 151085056 unmapped: 10174464 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.297676+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 151126016 unmapped: 10133504 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c503000 session 0x563e2ba681e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c508c00 session 0x563e2c302000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.297891+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 148381696 unmapped: 12877824 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e2be183c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.298202+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 13377536 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1814543 data_alloc: 251658240 data_used: 47054848
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.298446+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f6a36000/0x0/0x4ffc00000, data 0x485dcc7/0x4937000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 10936320 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.298699+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 11100160 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.299063+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150208512 unmapped: 11051008 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.299365+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.446751595s of 10.153461456s, submitted: 133
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 149987328 unmapped: 11272192 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.299878+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68af000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871715 data_alloc: 251658240 data_used: 47898624
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.300077+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.300576+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.300939+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68af000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.301298+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.301683+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1872355 data_alloc: 251658240 data_used: 47915008
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.302079+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2bfae400 session 0x563e2a23c5a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2c670f00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b8f3800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 150020096 unmapped: 11239424 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.302283+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68bd000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1,0,1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b8f3800 session 0x563e29827a40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f68bd000/0x0/0x4ffc00000, data 0x4cd7cc7/0x4db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144547840 unmapped: 16711680 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.302829+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.303299+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.303680+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1716636 data_alloc: 234881024 data_used: 40157184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.304110+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.304583+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f73ec000/0x0/0x4ffc00000, data 0x41a9cb7/0x4282000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.304928+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.305311+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.966886520s of 15.341312408s, submitted: 58
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c507800 session 0x563e2ba68960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2c50a400 session 0x563e2a23c1e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 16703488 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.305541+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136060928 unmapped: 25198592 heap: 161259520 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 ms_handle_reset con 0x563e2b3a8400 session 0x563e2c5bf0e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491097 data_alloc: 234881024 data_used: 28659712
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.306031+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 41902080 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.306318+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2b10b0e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.306668+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cb3000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.307099+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 41893888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.308188+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1552674 data_alloc: 234881024 data_used: 28667904
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.308583+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f04000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.308999+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.309272+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136175616 unmapped: 41869312 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.309590+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.849703789s of 10.186175346s, submitted: 47
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7f04000/0x0/0x4ffc00000, data 0x36917e5/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2a67f4a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e2a819e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2978a5a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 41738240 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb0c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb0c00 session 0x563e2b10ab40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.309823+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2be19680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2be19e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e2be18b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2be18000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f4000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f4000 session 0x563e2bf9a000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 41517056 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.310155+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638880 data_alloc: 234881024 data_used: 28676096
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c303e00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a8c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 42172416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.310518+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a23d860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5bc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5bc00 session 0x563e299e05a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb9c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb9c00 session 0x563e2a0ea1e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a8c00 session 0x563e2a23e000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 131317760 unmapped: 46727168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.310915+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c04cf00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 39280640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a56c1e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.311300+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f759a000/0x0/0x4ffc00000, data 0x3aa0793/0x3b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2a0e9c20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 39280640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.311550+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b447800 session 0x563e2bec8960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681c00 session 0x563e2a0e9860
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c509800 session 0x563e29776960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138919936 unmapped: 39124992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2a0e8d20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.311919+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2b0e6960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1642907 data_alloc: 234881024 data_used: 31825920
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2aed4b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b447800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b426400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b447800 session 0x563e28ccc5a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b426400 session 0x563e2a241a40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2a57a960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.312458+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b3a9800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2b428400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.312727+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 39108608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.313016+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f5800 session 0x563e2bec9680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f729a000/0x0/0x4ffc00000, data 0x42f87d6/0x43d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.249265671s of 10.107059479s, submitted: 83
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfafc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139370496 unmapped: 38674432 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.313587+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139370496 unmapped: 38674432 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.313909+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b3a9800 session 0x563e2a57ab40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663417 data_alloc: 234881024 data_used: 33452032
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2b428400 session 0x563e2a0bb680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 39460864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.314324+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29681400 session 0x563e2c0265a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138264576 unmapped: 39780352 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.314603+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 138264576 unmapped: 39780352 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.314906+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 38666240 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.315231+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.315816+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.316268+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.316694+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.317067+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.317609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.317947+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.318313+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.318711+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.319076+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.319490+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.319881+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.320461+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.320847+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.321260+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.321713+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.322169+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.322591+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.322856+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.323262+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.323606+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 38436864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.323890+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.324269+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.324611+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 38428672 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.324846+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.326340+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.326646+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.327164+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.327499+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7cc9000/0x0/0x4ffc00000, data 0x38cb793/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.327826+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.328118+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.328462+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 38748160 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633604 data_alloc: 234881024 data_used: 40321024
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.861461639s of 37.033191681s, submitted: 30
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.328806+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 142426112 unmapped: 35618816 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.329055+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 32358400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.329459+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 32301056 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c5000/0x0/0x4ffc00000, data 0x3fb8793/0x4091000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.329856+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 32292864 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.330151+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 32915456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692490 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.330469+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.330785+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.331059+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.331458+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.331858+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 32874496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698668 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.332275+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.486363411s of 10.983119965s, submitted: 51
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.332650+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.332991+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.333344+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.333666+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698684 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.334667+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.335005+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.335344+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 32866304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.335704+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.336107+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1698684 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.336557+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.336899+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.996733665s of 11.008992195s, submitted: 1
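The _kv_sync_thread line is the clearest idle signal in this stretch: the RocksDB sync thread slept for essentially its entire 11 s window, with a single submitted transaction. The busy fraction:

    idle, window = 10.996733665, 11.008992195
    print(f"busy {1 - idle / window:.4%} "
          f"({window - idle:.3f} s of {window:.3f} s)")
    # -> busy 0.1114% (0.012 s of 11.009 s)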
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.337291+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.337731+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.338511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.338869+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 32858112 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.339244+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.339666+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.340091+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.340649+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.341005+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.341479+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.341905+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.342320+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.342622+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.343023+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.343436+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.343805+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 32849920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.344165+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.344547+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.344804+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.345140+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 32841728 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.345505+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 32833536 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.345769+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 32833536 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.346260+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2779 syncs, 3.76 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1922 writes, 6979 keys, 1922 commit groups, 1.0 writes per commit group, ingest: 7.75 MB, 0.01 MB/s
                                            Interval WAL: 1922 writes, 792 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
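The DB Stats block confirms the light load: roughly 10K writes over the hour of uptime and zero write stalls in both the cumulative and the 600 s interval windows. One footgun when post-processing these dumps: the counters are printed rounded ("10K"), so re-deriving ratios from them misleads; recover the unrounded count from the printed ratio instead:

    syncs, writes_per_sync = 2779, 3.76
    print(f"~{syncs * writes_per_sync:,.0f} cumulative WAL writes")
    # -> ~10,449; dividing the rounded 10,000 by 2779 syncs would
    #    give 3.60, not the 3.76 the dump reports.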
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.346558+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.346938+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.347343+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.347766+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.348125+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145219584 unmapped: 32825344 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.348600+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.349143+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c50e000 session 0x563e2bfc72c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f2c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: mgrc ms_handle_reset ms_handle_reset con 0x563e2a142000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:32:15 compute-0 ceph-osd[207106]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: get_auth_request con 0x563e2b447800 auth_method 0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: mgrc handle_mgr_configure stats_period=5
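This burst is a clean mgr session bounce rather than a fault: the reset on the old connection is followed immediately by "Terminating session", "Starting new session" against the mgr's v2/v1 address pair, a fresh auth exchange, and a new configure message (stats_period=5, i.e. report stats to the mgr every 5 s). The addresses follow proto:host:port/nonce; a small parser, taking that layout as given:

    import re

    ADDR = re.compile(r"(v[12]):([\d.]+):(\d+)/(\d+)")
    proto, host, port, nonce = ADDR.fullmatch(
        "v2:192.168.122.100:6800/2078717049").groups()
    print(proto, host, port, nonce)
    # -> v2 192.168.122.100 6800 2078717049
    # The trailing nonce distinguishes successive daemon instances on
    # the same host:port, so a restarted mgr shows up with a new number.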
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.349652+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c504c00 session 0x563e298303c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfba400 session 0x563e297a8f00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c504c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.350181+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.350684+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.350967+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 32661504 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.351498+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145367040 unmapped: 32677888 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.351859+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.352313+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.352757+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.353119+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.353346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.353556+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.353762+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.353979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.354712+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.354994+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.355302+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.355574+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
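One of the few non-periodic monclient lines here: an outbound message to mon.compute-0 over msgr2. Port 3300 is the standard msgr2 monitor port (6789 is the legacy v1 port), and monitors conventionally advertise nonce 0. A trivial classifier for mon endpoints, names illustrative:

    MON_PORTS = {3300: "msgr2", 6789: "msgr1 (legacy)"}

    def mon_proto(addr: str) -> str:
        port = int(addr.rsplit("/", 1)[0].rsplit(":", 1)[1])
        return MON_PORTS.get(port, "non-default port")

    print(mon_proto("v2:192.168.122.100:3300/0"))  # -> msgr2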
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.355804+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.356150+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.356638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.356863+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.357194+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.357557+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.357889+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.358165+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.358542+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.359082+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.359317+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.359556+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 32817152 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.359917+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.360580+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.361084+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.361480+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.361925+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.362327+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.362713+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 32808960 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.362944+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.363357+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.363954+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.364224+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.364689+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 32800768 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.365076+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.365609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.365938+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.366478+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.366914+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.367483+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.367832+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.368492+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.368991+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.369548+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.369838+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 32792576 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.370037+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.370458+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.371083+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 32784384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.371540+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.371861+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.372044+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.372457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697388 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71c0000/0x0/0x4ffc00000, data 0x3fc5793/0x409e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.372841+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 32776192 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.373178+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 32768000 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.373457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 32768000 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.373692+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29720c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 92.732841492s of 92.740623474s, submitted: 1
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f71bf000/0x0/0x4ffc00000, data 0x3fc57bc/0x409f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,19])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146563072 unmapped: 31481856 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.373960+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1852929 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29720c00 session 0x563e29790960
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.374359+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c4f5800 session 0x563e2a240f00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.374922+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.375310+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.375763+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.376158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1816761 data_alloc: 234881024 data_used: 40574976
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b4000/0x0/0x4ffc00000, data 0x4ed07f5/0x4faa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.376530+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 32964608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29683400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.376873+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29683400 session 0x563e2aed4b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.377268+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.377544+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144457728 unmapped: 33587200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.378074+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb3000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 33579008 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1819214 data_alloc: 251658240 data_used: 40783872
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.378318+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146661376 unmapped: 31383552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.378523+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 153673728 unmapped: 24371200 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.378946+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb3400 session 0x563e2a4f23c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29681400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.379356+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.380007+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927214 data_alloc: 251658240 data_used: 55971840
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.380512+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.380760+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.381027+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.381245+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 22011904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.263887405s of 20.334077835s, submitted: 44
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.381518+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927326 data_alloc: 251658240 data_used: 55980032
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.381724+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156041216 unmapped: 22003712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.381933+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 21962752 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.382425+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 21929984 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.382662+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.383037+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.383253+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.383791+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.384236+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.384681+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 21856256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.384912+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.385254+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.385454+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.385624+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.385824+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.386164+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.386351+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.386567+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.386766+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.386952+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f62b3000/0x0/0x4ffc00000, data 0x4ed0818/0x4fab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.387175+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1927486 data_alloc: 251658240 data_used: 55984128
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.387369+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 21848064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.387578+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.686796188s of 22.472694397s, submitted: 110
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 20561920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.387809+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163241984 unmapped: 14802944 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.388017+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163692544 unmapped: 14352384 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f5579000/0x0/0x4ffc00000, data 0x5bfc818/0x5cd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.388518+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039870 data_alloc: 251658240 data_used: 56713216
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.388765+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.389068+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.389354+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54f6000/0x0/0x4ffc00000, data 0x5c87818/0x5d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.389784+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.390188+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 12869632 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039870 data_alloc: 251658240 data_used: 56713216
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.390509+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.390732+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.391033+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.391298+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.391821+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037570 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.392087+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.392331+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.392525+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.392796+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.393057+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037570 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.393258+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.393470+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 12705792 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.506355286s of 20.436758041s, submitted: 159
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.393710+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54eb000/0x0/0x4ffc00000, data 0x5c98818/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 12697600 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.394041+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 12697600 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.394493+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.394891+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.395061+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.395510+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.395870+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.396117+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165355520 unmapped: 12689408 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.396558+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.396790+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.396990+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.397236+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.397515+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.786537170s of 12.801416397s, submitted: 2
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037086 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.397787+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.398033+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.398244+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.398648+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.399072+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.399349+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.399748+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.400044+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.400465+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 12681216 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.400819+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.401093+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.401449+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.401745+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.402080+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.402527+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.402940+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.403310+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.403730+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.404010+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 12673024 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.404270+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.404556+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.404759+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 12664832 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.404938+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.405144+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.405348+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.405526+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.405833+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.406071+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.406427+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.406687+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165388288 unmapped: 12656640 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037262 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.406887+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.686998367s of 30.720819473s, submitted: 3
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.407202+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.407582+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.407882+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.408314+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.408731+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.409167+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.409469+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.409888+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.410353+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.410663+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.411073+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 12648448 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.411513+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.412080+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.412359+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.412888+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.413247+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.413658+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.414122+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:09.414640+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:10.414859+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:11.415125+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:12.415437+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 12632064 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:13.415792+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:14.416184+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:15.416639+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:16.417127+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:17.417638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 12623872 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:18.417950+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:19.418360+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:20.418628+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:21.418952+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:22.419189+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:23.419603+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:24.419865+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:25.420268+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:26.420498+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:27.422327+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:28.422659+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:29.423009+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:30.423453+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:31.423742+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:32.424117+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 12607488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:33.424702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 12599296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:34.425079+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 12599296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:35.425420+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:36.425709+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:37.428477+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:38.428753+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:39.429173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:40.429623+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:41.429878+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:42.430130+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 12640256 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:43.430446+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:44.430688+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 14057472 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:45.431040+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 14057472 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:46.431500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:47.431894+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:48.432257+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:49.432638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:50.432860+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:51.433054+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:52.433327+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:53.433749+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:54.434059+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:55.434270+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:56.434718+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:57.435047+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:58.435292+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 14049280 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:59.435514+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:00.435675+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:01.435869+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:02.436055+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:03.436265+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:04.436717+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:05.437144+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:06.437556+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:07.437790+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:08.438000+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:09.438296+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:10.438715+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038142 data_alloc: 251658240 data_used: 56717312
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:11.439087+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5d400 session 0x563e2c6dd2c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29683400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:12.439304+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.439571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 14041088 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 83.314064026s of 83.341011047s, submitted: 7
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.439954+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164012032 unmapped: 14032896 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:15.440501+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164012032 unmapped: 14032896 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:16.440784+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:17.441101+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:18.442050+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 14024704 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:19.442428+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.442693+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:21.443067+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:22.443512+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:23.443952+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:24.444255+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:25.444606+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:26.444872+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:27.445113+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:28.445518+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:29.445895+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:30.446118+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 14016512 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:31.446532+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:32.446802+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:33.447162+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:34.447596+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:35.447935+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.448317+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 14008320 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.448682+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:38.448926+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:39.449350+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:40.449814+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:41.450092+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:42.450367+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:43.450795+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:44.451215+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:45.451592+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:46.451941+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.452282+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.452521+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.452820+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.453124+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.453423+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.453737+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.454083+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.454446+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.454847+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.455173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.455638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.456017+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.456521+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.456804+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.457112+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.457490+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.457767+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.457989+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.458500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.458863+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.459182+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 14000128 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.459551+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.459856+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.460163+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.460364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.460663+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.461043+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.461824+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.462225+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.462460+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.462689+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 13991936 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.463346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 13983744 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.463788+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.464062+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.464318+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.464733+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.465090+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.465557+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.465852+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.466233+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.467193+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.467448+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.467680+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.467848+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.468206+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.468552+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 13975552 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.468927+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.469250+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.469625+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.469843+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.470237+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.470547+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.471010+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.471469+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.471839+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164077568 unmapped: 13967360 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.472262+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.472735+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.473179+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.473652+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.474055+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.474528+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.474862+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.475528+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.476067+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.476519+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.476998+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.477816+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.478744+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.479926+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.480609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.481027+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 13959168 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.481909+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.482247+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.482503+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.482785+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.483043+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.483529+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.484009+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.484285+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.484546+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.484809+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.485151+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.485625+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.485886+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.486141+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 20:32:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040335141' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.486567+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.486998+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.487508+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.487928+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.488270+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.488511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.488722+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.489003+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.489231+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.489453+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.489639+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.489866+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.490084+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 13950976 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.490297+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.490642+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164102144 unmapped: 13942784 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.490812+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.491052+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.491457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.491807+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.492075+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.492468+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.492843+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.493228+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.493640+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.494019+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.494472+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.494804+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 13934592 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.495213+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.495614+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.495905+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.496256+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.496508+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.496819+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.497247+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.497667+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2036894 data_alloc: 251658240 data_used: 56721408
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.498046+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.498430+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.498744+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54e8000/0x0/0x4ffc00000, data 0x5c9b818/0x5d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.499121+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.499585+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038334 data_alloc: 251658240 data_used: 57057280
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.499924+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 13926400 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.500270+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 13918208 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.500663+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 13918208 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 161.123291016s of 161.148391724s, submitted: 3
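
The _kv_sync_thread utilization line is BlueStore reporting how much of the last window its key-value sync thread spent idle, along with how many transaction batches it submitted. Here that is 161.12 s idle out of a 161.15 s window with 3 submits, i.e. effectively no write traffic; the same arithmetic applies to the later utilization reports in this section:

    idle, window, submitted = 161.123291016, 161.148391724, 3
    print(f"kv_sync idle {100 * idle / window:.2f}% of the window; "
          f"{submitted / window:.3f} submits/s")
    # -> kv_sync idle 99.98% of the window; 0.019 submits/s
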
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.501023+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.501474+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039474 data_alloc: 251658240 data_used: 57057280
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.502006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.502303+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.503742+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.504200+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.506691+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039474 data_alloc: 251658240 data_used: 57057280
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.507286+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164216832 unmapped: 13828096 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.508208+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.508613+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.508961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.509251+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2039794 data_alloc: 251658240 data_used: 57065472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.509943+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.510362+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 13819904 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.510853+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54cf000/0x0/0x4ffc00000, data 0x5cb4818/0x5d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164233216 unmapped: 13811712 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.511469+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.815058708s of 15.871011734s, submitted: 3
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.511920+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.512668+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.513005+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.513317+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.513623+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.514032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164405248 unmapped: 13639680 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.514330+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.514714+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.515184+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.515640+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.515995+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.516494+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.516982+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.517502+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.517775+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 13631488 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.518218+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2040282 data_alloc: 251658240 data_used: 57065472
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.518711+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.519133+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 13623296 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.519594+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.172403336s of 19.199481964s, submitted: 3
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.519898+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.520174+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043082 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.520599+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 13557760 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.521018+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.521203+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164503552 unmapped: 13541376 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.521614+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.522191+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043082 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.522506+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.522769+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.523653+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.212740898s of 10.239569664s, submitted: 16
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.523954+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.524336+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.524662+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.525032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.525496+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.525890+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.526250+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.526913+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.527322+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.527797+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.528244+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.528717+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164495360 unmapped: 13549568 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.529078+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.529654+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.530060+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.530538+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.530933+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.531356+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164519936 unmapped: 13524992 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.531855+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.532365+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.532897+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.533204+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.533627+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.534053+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.534638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.535014+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.535504+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.535926+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.536545+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.537060+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.537614+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.537803+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.538285+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.538528+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.538852+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.539589+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.539865+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.541041+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.541602+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.542040+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.542990+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.543480+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.543910+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164528128 unmapped: 13516800 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.544294+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.544630+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.545038+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.545548+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.545930+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.546323+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.546941+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.547455+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.547899+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.548312+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.548671+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.548983+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.549619+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.550022+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.550472+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.551080+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.551457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.551908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.552309+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.552681+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.554269+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.554705+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 13508608 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.554916+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.555078+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.555284+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.555572+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.555885+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.556259+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.561068+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.561467+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.561971+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.562187+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164544512 unmapped: 13500416 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.562623+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.563033+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.563484+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.564019+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.564642+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.565072+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.565452+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.565762+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.566159+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.566551+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.566929+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.567366+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.567674+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.568145+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.568636+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.568994+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 13492224 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.569520+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164560896 unmapped: 13484032 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.569866+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.570076+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.570474+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.570702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.571117+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.571578+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.572029+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.572615+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.572979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.573552+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.574061+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.574701+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.575129+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.575599+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.575983+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 13475840 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.576573+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.576898+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.577574+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.577961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.578489+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.578952+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.579223+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.579663+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.580079+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.580443+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.580762+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.581158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.581647+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.582173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.582638+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.582986+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.583487+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 13467648 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.583837+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.584199+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.584859+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.585339+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 13459456 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.585801+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.586661+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.587014+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.587614+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.588200+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.589166+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2993 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 565 writes, 1917 keys, 565 commit groups, 1.0 writes per commit group, ingest: 2.62 MB, 0.00 MB/s
                                            Interval WAL: 565 writes, 214 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.589701+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.589952+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.590180+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.590484+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 13451264 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.590894+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.591345+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.591585+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.591936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.592207+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.592622+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.592976+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.593244+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.593598+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.593931+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.594259+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.594726+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.595044+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.595538+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.595924+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164601856 unmapped: 13443072 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.596287+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.596746+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.597008+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.597256+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.597565+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.597892+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.598462+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.598810+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.599157+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.599614+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.600006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.600836+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.601202+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.601492+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.601780+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 251658240 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.602145+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.602647+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.602951+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.603364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.603839+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.604142+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.604543+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.605068+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.605700+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.606089+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.606637+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.607065+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.607343+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 13434880 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.607695+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.608101+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.608581+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.608873+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.609271+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:46.609669+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.610032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.610305+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.610710+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.611006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.611465+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.611755+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.612119+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.612565+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 13426688 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.612990+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164626432 unmapped: 13418496 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.613330+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.613700+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043258 data_alloc: 234881024 data_used: 57053184
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.613969+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.614351+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x5cca818/0x5da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.614941+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 13410304 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.544860840s of 200.551040649s, submitted: 1
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2c506400 session 0x563e2ba69680
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfafc00 session 0x563e2a818f00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5d400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.615255+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 164651008 unmapped: 13393920 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bd5d400 session 0x563e28cccf00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.615615+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859518 data_alloc: 234881024 data_used: 48300032
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.615976+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.616360+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.616791+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.617294+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.617981+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.618246+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859518 data_alloc: 234881024 data_used: 48300032
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.618748+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.619106+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f640c000/0x0/0x4ffc00000, data 0x4d78808/0x4e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.619514+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.619890+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.379122734s of 11.708549500s, submitted: 52
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e2bfb3000 session 0x563e297905a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 18915328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29720c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.620158+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 ms_handle_reset con 0x563e29720c00 session 0x563e2c5ecd20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533530 data_alloc: 218103808 data_used: 31834112
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.620647+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.621081+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.621533+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.621871+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.622536+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533530 data_alloc: 218103808 data_used: 31834112
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.622949+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.623571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.624540+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.624770+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146374656 unmapped: 31670272 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bd5d400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.619196892s of 10.809016228s, submitted: 43
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.625288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533426 data_alloc: 218103808 data_used: 31834112
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x3073783/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 31645696 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.625878+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 141 ms_handle_reset con 0x563e2bd5d400 session 0x563e2b10b2c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 37347328 heap: 178044928 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f810f000/0x0/0x4ffc00000, data 0x3075354/0x314e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfafc00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.626170+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 45785088 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 142 ms_handle_reset con 0x563e2bfafc00 session 0x563e2a56c1e0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.626662+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb3000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 45768704 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f7c9c000/0x0/0x4ffc00000, data 0x34e6eba/0x35bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _renew_subs
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.626917+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 143 ms_handle_reset con 0x563e2bfb3000 session 0x563e297a85a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140730368 unmapped: 45711360 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.627330+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470363 data_alloc: 218103808 data_used: 24506368
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 45670400 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f890b000/0x0/0x4ffc00000, data 0x2878a9b/0x2952000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.628256+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 45637632 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.628621+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 45637632 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c506400
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 143 ms_handle_reset con 0x563e2c506400 session 0x563e2a240d20
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.629072+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.629660+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.630308+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473497 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8908000/0x0/0x4ffc00000, data 0x287a51a/0x2955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.630950+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 45613056 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.631484+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.383268356s of 12.413671494s, submitted: 149
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.631966+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.632703+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.633455+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.633981+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:40.634304+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:41.634702+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:42.635099+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:43.635585+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:44.635938+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:45.636358+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:46.636808+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:47.637173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:48.637497+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:49.637932+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:50.638204+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:51.638668+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:52.639001+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:53.639288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:54.639655+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:55.639986+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:56.640496+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:57.641021+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:58.641500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:59.641970+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:00.642476+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:01.642824+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:02.643012+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:03.643508+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:04.643911+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:05.644202+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:06.644656+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:07.645216+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:08.645620+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:09.646059+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:10.646506+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:11.646856+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:12.647234+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:13.647590+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:14.647968+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:15.648183+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:16.648501+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:17.648747+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:18.649164+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:19.649630+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 45588480 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:20.649846+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:21.650359+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:22.651057+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:23.651297+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:24.651681+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:25.651944+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:26.652629+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:27.652857+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:28.653095+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:29.653464+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:30.653821+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:31.654674+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:32.655230+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:33.655715+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:34.656367+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:35.656888+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:36.657736+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:37.658036+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:38.658579+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:39.659110+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:40.659363+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:41.659834+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:42.660232+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:43.660550+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:44.661161+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:45.661642+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:46.662088+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:47.662364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:48.662764+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:49.663153+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:50.663553+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:51.663853+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:52.664055+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:53.664280+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:54.664508+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:55.664694+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:56.664906+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:57.672360+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:58.672773+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:59.673204+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:00.673446+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:01.673773+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140836864 unmapped: 45604864 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:02.673989+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 45342720 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:03.674309+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 45998080 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:04.674543+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 46211072 heap: 186441728 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:05.674849+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 151273472 unmapped: 46211072 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:06.675075+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 57155584 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:07.675279+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:08.675499+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:09.675728+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:10.676052+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:11.676313+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:12.676668+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:13.676851+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:14.677069+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:15.677536+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:16.677746+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:17.677963+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:18.678364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:19.680500+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:20.681006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:21.681317+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:22.681507+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:23.681745+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:24.681996+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:25.682335+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:26.682564+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:27.682805+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:28.682992+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:29.683211+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:30.683460+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:31.683676+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:32.684282+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:33.684616+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:34.684925+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:35.685127+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:36.685676+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:37.686104+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:38.686347+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:39.687418+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:40.687881+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:41.688118+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:42.688455+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:43.688675+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:44.688875+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:45.689172+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:46.689606+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:47.689871+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:48.690224+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:49.690633+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:50.690926+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:51.691322+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:52.691608+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:53.691977+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:54.692510+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:55.692908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:56.693289+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:57.693511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:58.693946+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:59.694178+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:00.694621+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:01.694970+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:02.695325+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:03.695605+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:04.695849+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 57188352 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:05.696166+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:06.696566+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:07.696792+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:08.697004+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:09.697344+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:10.697624+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:11.698031+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:12.698329+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:13.698716+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:14.699118+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:15.699457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:16.699889+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:17.700135+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:18.700535+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:19.700872+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:20.701133+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:21.701335+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:22.701585+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:23.701814+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:24.702098+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:25.702488+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:26.702863+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:27.703058+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:28.703466+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:29.703704+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:30.703979+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:31.704300+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:32.704561+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:33.704883+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:34.705438+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:35.706049+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:36.706631+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:37.707228+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:38.707798+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:39.708293+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:40.708781+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:41.709117+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:42.709672+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:43.710009+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:44.710293+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:45.710521+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:46.710818+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:47.711233+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:48.711670+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:49.712136+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:50.712571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:51.712923+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:52.713289+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:53.713476+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:54.713768+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:55.714078+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:56.714296+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:57.714738+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:58.715135+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:59.715455+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 ms_handle_reset con 0x563e2c4f2c00 session 0x563e2a56de00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c4f5800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:00.715693+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 ms_handle_reset con 0x563e2c509000 session 0x563e29790b40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29720800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 ms_handle_reset con 0x563e2c504c00 session 0x563e2978e5a0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2c509000
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:01.715989+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:02.716240+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:03.716529+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:04.716862+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:05.717249+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:06.717495+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:07.717838+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:08.718203+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:09.718551+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:10.718920+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:11.719254+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:12.719506+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:13.719909+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:14.720291+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:15.720700+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:16.721068+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:17.721362+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:18.721882+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:19.722323+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:20.722562+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:21.722961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:22.723299+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:23.723534+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:24.723906+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:25.724319+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:26.724699+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:27.725199+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:28.725639+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:29.726060+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:30.726522+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:31.726756+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:32.727050+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:33.727474+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:34.727812+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:35.728346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:36.728860+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:37.729282+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:38.729745+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:39.730322+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:40.730827+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:41.731237+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:42.731643+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:43.732066+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:44.732521+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:45.732946+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:46.733495+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:47.734002+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:48.734288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:49.734825+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:50.735247+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:51.735708+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:52.736006+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:53.736460+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:54.736823+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 57262080 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:55.737354+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:56.737715+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:57.738056+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:58.738345+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:59.738816+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:00.739056+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:01.739342+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:02.739569+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:03.739908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:04.740180+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:05.740578+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 57270272 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:06.740861+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:07.741251+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:08.741613+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:09.742499+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:10.742963+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:11.743195+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:12.743596+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:13.743973+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:14.744329+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:15.744660+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:16.744916+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:17.745293+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 ms_handle_reset con 0x563e29681400 session 0x563e298dfa40
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e2bfb7800
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:18.745686+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:19.746201+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:20.746700+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:21.747007+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:22.747442+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:23.747769+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:24.748170+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:25.748473+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:26.748897+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:27.749132+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:28.749533+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:29.749785+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:30.750016+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:31.750562+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:32.750908+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:33.751098+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:34.751723+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:35.751971+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:36.752190+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:37.752633+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:38.752978+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:39.753690+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:40.754037+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:41.755506+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:42.756601+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:43.757765+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:44.759618+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:45.760524+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:46.761480+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:47.761747+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:48.762968+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:49.763697+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:50.764428+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:51.764969+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:52.765647+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:53.766311+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:54.766673+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:55.767236+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:56.767628+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:57.768190+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:58.768666+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:59.769155+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:00.769597+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:01.770113+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:02.770844+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:03.771143+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:04.771357+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:05.771887+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:06.772259+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:07.772570+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:08.772938+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:09.773432+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:10.773873+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:11.774173+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:12.774449+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:13.774754+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:14.774961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:15.775160+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:16.775585+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:17.776133+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:18.776502+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:19.776899+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:20.777175+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:21.777457+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:22.778597+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:23.778925+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:24.779288+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:25.779708+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:26.779963+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:27.780364+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:28.781020+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:29.781492+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:30.781711+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:31.782098+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:32.782353+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:33.782644+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:34.782858+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:35.783071+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:36.783517+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:37.783948+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:38.784271+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:39.784769+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:40.785155+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:41.785624+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:42.785982+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:43.786347+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:44.786619+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:45.786961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:46.787511+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:47.787909+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:48.788257+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:49.788609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 57335808 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:50.788965+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:51.789266+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:52.789627+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:53.790053+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:54.790512+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:55.790837+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:56.791032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:57.791294+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:58.791599+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:59.791925+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:00.792285+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:01.792723+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:02.793168+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:03.793632+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:04.793996+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:05.794357+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:06.794641+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:07.795029+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:08.795482+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:09.795910+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:10.796242+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:11.796633+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:12.796932+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:13.797240+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:14.797642+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:15.798036+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:16.798460+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:17.798896+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:18.799258+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:19.799551+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:20.799872+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:21.800195+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:22.800487+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:23.800694+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:24.801119+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:25.801555+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:26.801921+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:27.802195+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:28.802470+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:29.802743+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:30.802936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:31.803206+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:32.803634+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:33.803930+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:34.804161+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:35.804571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:36.804983+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:37.805271+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:38.805619+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:39.805961+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:40.806254+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:41.806603+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:42.806964+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:43.807310+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:44.807595+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:45.807968+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:46.808312+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:47.809772+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:48.810894+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:49.811848+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:50.812218+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:51.812695+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:52.812950+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:53.813609+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:54.814044+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:55.814630+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:56.815027+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:57.815312+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:58.815680+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:59.815992+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:00.816209+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:01.816419+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:02.816790+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:03.817319+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:04.817735+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:05.818242+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:06.818548+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:07.818902+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:08.819159+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:09.819631+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:10.820020+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 ms_handle_reset con 0x563e29683400 session 0x563e2c3032c0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: handle_auth_request added challenge on 0x563e29721c00
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:11.820460+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:12.820946+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:13.821639+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:14.822036+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:15.822480+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:16.822692+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:17.823114+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:18.823426+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:19.823812+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:20.824024+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:21.824608+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:22.825571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:23.825988+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:24.826632+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:25.826970+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:26.827792+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:27.828492+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:28.828894+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:29.829299+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:30.830251+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:31.830572+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:32.831050+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:33.831538+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:34.831986+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:35.832275+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:36.832713+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:37.832936+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:38.833414+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:39.833779+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:40.834449+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:41.834695+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:42.834973+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:43.835487+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:44.835926+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:45.836290+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:46.836580+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:47.837043+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:48.837306+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:49.837821+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:50.838199+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:51.838532+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:52.839023+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3206 syncs, 3.58 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 470 writes, 1409 keys, 470 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                            Interval WAL: 470 writes, 213 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:53.839512+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:54.839882+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:55.840502+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:56.840872+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:57.841348+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:58.841858+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:59.842093+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:00.842854+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:01.843142+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:02.843490+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:03.844032+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:04.844674+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:05.845194+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:06.845571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:07.845861+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:08.846239+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:09.846778+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:10.847079+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:11.847571+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:12.847998+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:13.848352+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:14.848910+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:15.849172+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:16.849675+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:17.850035+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:18.850509+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:19.850993+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:20.851509+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:21.852007+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:22.852479+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:23.852735+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:24.853185+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:25.853463+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:26.853754+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:27.854132+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:28.854365+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:29.854693+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:30.855058+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:31.855590+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:32.855817+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:33.856071+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:34.856599+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:35.856893+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:36.857261+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:37.857683+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:38.857958+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:39.858219+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:40.858507+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:41.858847+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:42.859212+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:43.859580+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:44.859873+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:45.860904+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:46.861260+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:47.861516+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:48.861909+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:49.862328+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:50.862676+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:51.862993+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:52.863228+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:53.863620+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:54.863903+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:55.864278+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:56.864631+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:57.864882+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:58.865149+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:59.865596+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:00.866070+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:01.866586+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:02.867209+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:03.867544+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:04.868026+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:05.868639+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:06.868883+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:07.869155+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:08.869679+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:09.870553+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:10.871085+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:11.871529+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:12.871771+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:13.872214+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:14.872624+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:15.873090+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:16.873553+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:17.873976+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:18.874320+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:19.874774+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:20.875128+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:21.875600+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:22.876083+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 57368576 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:23.876626+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:24.877034+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:25.877346+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:26.877521+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8905000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:27.877879+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476471 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 592.433166504s of 592.457214355s, submitted: 13
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:28.878246+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:29.878514+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:30.878700+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:31.879027+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:32.879295+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475591 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 57360384 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:33.879489+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:34.879674+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 57352192 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:35.879927+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 57344000 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:36.880251+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 57319424 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:37.880679+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475591 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 57319424 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:38.881049+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.015854836s of 11.219168663s, submitted: 72
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 57303040 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:39.881503+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 57303040 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:40.881703+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 57303040 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:41.881925+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 57131008 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x287bf7d/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:42.882125+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:15 compute-0 ceph-osd[207106]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:15 compute-0 ceph-osd[207106]: bluestore.MempoolThread(0x563e27f41b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475591 data_alloc: 218103808 data_used: 24510464
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 57311232 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:43.882307+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: prioritycache tune_memory target: 4294967296 mapped: 140009472 unmapped: 57475072 heap: 197484544 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: tick
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_tickets
Oct 02 20:32:15 compute-0 ceph-osd[207106]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:44.882670+0000)
Oct 02 20:32:15 compute-0 ceph-osd[207106]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:32:15 compute-0 nova_compute[355794]: 2025-10-02 20:32:15.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 20:32:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904153552' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 20:32:15 compute-0 crontab[499754]: (root) LIST (root)
Oct 02 20:32:15 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 20:32:15 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476882666' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: pgmap v2630: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 20:32:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1206292727' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/673095376' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4040335141' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/904153552' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 20:32:15 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1476882666' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 20:32:15 compute-0 rsyslogd[187702]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Oct 02 20:32:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 20:32:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224143127' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 20:32:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 20:32:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984172838' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 20:32:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 20:32:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052114473' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 20:32:16 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct 02 20:32:16 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 20:32:16 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615254307' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 20:32:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372184609' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1224143127' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3984172838' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3052114473' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2615254307' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 20:32:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525308338' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 20:32:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897472242' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 20:32:17 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225135018' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 20:32:17 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15963 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 20:32:18 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178631631' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mon[191910]: pgmap v2631: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct 02 20:32:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2372184609' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2525308338' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2897472242' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4225135018' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 20:32:18 compute-0 nova_compute[355794]: 2025-10-02 20:32:18.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:18 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15967 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 02 20:32:18 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15969 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:18 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15971 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mon[191910]: from='client.15963 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4178631631' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mon[191910]: from='client.15967 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15973 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15975 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:19 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15979 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 20:32:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760109210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 20:32:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760109210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15983 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 20:32:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039633698' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:05.425017+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 131 heartbeat osd_stat(store_statfs(0x4f8e8e000/0x0/0x4ffc00000, data 0x2718503/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:06.425492+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 131 heartbeat osd_stat(store_statfs(0x4f8e8e000/0x0/0x4ffc00000, data 0x2718503/0x27e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:07.425670+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 131 ms_handle_reset con 0x55b417715800 session 0x55b41a20fe00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347960 data_alloc: 218103808 data_used: 16359424
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:08.426035+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 44261376 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:09.426469+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 132 heartbeat osd_stat(store_statfs(0x4f8e8a000/0x0/0x4ffc00000, data 0x271a080/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:10.426754+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:11.427194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:12.427672+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352134 data_alloc: 218103808 data_used: 16367616
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:13.428073+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 132 heartbeat osd_stat(store_statfs(0x4f8e8a000/0x0/0x4ffc00000, data 0x271a080/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:14.428564+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:15.428964+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.089331627s of 15.191314697s, submitted: 17
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 133 ms_handle_reset con 0x55b417715800 session 0x55b41cb0cd20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:16.429368+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:17.429835+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301532 data_alloc: 218103808 data_used: 16367616
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:18.430239+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:19.430653+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9687000/0x0/0x4ffc00000, data 0x1f1bc51/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:20.431050+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:21.431521+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:22.431904+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 44253184 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301852 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:23.432439+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 133 heartbeat osd_stat(store_statfs(0x4f9687000/0x0/0x4ffc00000, data 0x1f1bc51/0x1fe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:24.433102+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:25.433589+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.738893509s of 10.827541351s, submitted: 16
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:26.433901+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:27.434343+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:28.434762+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:29.435154+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:30.436016+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:31.437236+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:32.437615+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:33.438017+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 44810240 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:34.438491+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:35.438848+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:36.439674+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:37.440071+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:38.440527+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:39.440909+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:40.441489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:41.442010+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:42.442549+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:43.442966+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:44.443269+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:45.443671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:46.444037+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 8068 writes, 31K keys, 8068 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 8068 writes, 1863 syncs, 4.33 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1254 writes, 4156 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 3.24 MB, 0.01 MB/s
                                            Interval WAL: 1254 writes, 527 syncs, 2.38 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:47.444282+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:48.444698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:49.445311+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:50.445659+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:51.446559+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:52.447062+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:53.447603+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:54.447963+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:55.448319+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:56.448616+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:57.449199+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:58.449590+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T19:59:59.450451+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:00.450871+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:01.451168+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:02.451621+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:03.451994+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:04.452229+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:05.453967+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:06.454466+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:07.454984+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:08.455575+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
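
This is the only outbound monitor traffic in the window. The target address uses the msgr2 entity-address form v2:<ip>:<port>/<nonce>; 3300 is the standard msgr2 monitor port, and the trailing /0 is the connection nonce. A small splitter for that format (the helper name is illustrative):

    def parse_entity_addr(addr):
        """Split a Ceph entity address like 'v2:192.168.122.100:3300/0'."""
        proto, rest = addr.split(":", 1)
        hostport, nonce = rest.rsplit("/", 1)
        host, port = hostport.rsplit(":", 1)
        return proto, host, int(port), int(nonce)

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # -> ('v2', '192.168.122.100', 3300, 0)
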
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:09.455922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:10.456235+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:11.456809+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:12.457546+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:13.457896+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:14.458203+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:15.458652+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:16.458982+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:17.459546+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:18.459918+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:19.460707+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:20.460921+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:21.461307+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:22.461755+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:23.462114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:24.462447+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:25.462795+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:26.463066+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:27.463551+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:28.464100+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:29.464518+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:30.464896+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:31.465634+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:32.466111+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:33.466637+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:34.467011+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:35.467493+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:36.467784+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:37.468192+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:38.468624+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:39.469074+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:40.469346+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:41.469706+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:42.470125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:43.470520+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:44.470880+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:45.471289+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:46.471669+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:47.471942+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:48.472291+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:49.472676+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:50.473145+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:51.473620+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:52.474097+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:53.474850+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:54.475107+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:55.475552+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:56.475963+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:57.477015+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:58.477681+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:00:59.478902+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:00.479296+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:01.479773+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:02.480583+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:03.481085+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:04.481962+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:05.482562+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:06.483077+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:07.483672+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:08.484178+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:09.484614+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:10.485098+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:11.485648+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:12.486248+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:13.486708+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:14.487226+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:15.487633+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:16.488059+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:17.488490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:18.488830+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:19.489193+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:20.489668+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:21.490149+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:22.490807+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:23.491159+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304826 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:24.491645+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 44793856 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:25.492120+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.251411438s of 119.269851685s, submitted: 9
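
This utilization line is the one non-periodic entry in the window and quantifies just how idle the OSD is: the BlueStore kv sync thread slept for 119.251 s of its 119.270 s sampling interval and committed only 9 transactions. Working that out:

    idle, interval, submitted = 119.251411438, 119.269851685, 9
    print(f"busy {(1 - idle / interval):.4%}, "
          f"{submitted / interval:.3f} commits/s")
    # -> busy 0.0155%, 0.075 commits/s
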
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114614272 unmapped: 44769280 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9684000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:26.492916+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114647040 unmapped: 44736512 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:27.493290+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 44703744 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:28.493625+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:29.494084+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:30.494639+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:31.494947+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:32.495569+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:33.495882+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:34.496331+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:35.496771+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:36.497175+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:37.497688+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:38.498090+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:39.498665+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:40.499104+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:41.499434+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:42.499928+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:43.500332+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:44.500835+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:45.501187+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:46.501610+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:47.502040+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:48.502788+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:49.503037+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:50.503366+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:51.503726+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:52.504111+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:53.504533+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:54.504898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:55.505177+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:56.505515+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:57.505911+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:58.506683+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:01:59.508357+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:00.509525+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 44646400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:01.509815+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:02.510670+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:03.511891+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:04.512303+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:05.512667+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 44638208 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:06.512993+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:07.513620+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:08.513934+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:09.514465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:10.514898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:11.515353+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:12.515972+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:13.516297+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:14.516831+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:15.517205+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:16.517649+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:17.518114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:18.518640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-mon[191910]: pgmap v2632: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.15969 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.15971 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.15973 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.15975 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2760109210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.10:0/2760109210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1039633698' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:19.519137+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:20.519463+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:21.519882+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:22.520294+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:23.520694+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:24.521129+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:25.521548+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:26.521955+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:27.522348+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:28.522850+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:29.523213+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:30.523598+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:31.524053+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:32.524540+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:33.525312+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:34.525740+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:35.526266+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:36.526778+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:37.527161+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:38.527643+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:39.528183+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:40.528617+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:41.529005+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:42.529639+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:43.530020+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:44.530489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:45.530816+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:46.531240+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:47.531617+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:48.531949+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:49.532310+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:50.532675+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:51.533062+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:52.533609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:53.533943+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:54.534295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:55.534649+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:56.535064+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:57.535445+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:58.535823+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:02:59.536445+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:00.536848+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:01.537076+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:02.537641+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:03.538091+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:04.538501+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
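The _resize_shards line reports, for each of BlueStore's four in-memory pools (kv, kv_onode, meta, data), the bytes currently allotted versus actually used; here every pool is nearly idle. The allotments sum to 2672 MiB, somewhat below the 2845415832-byte budget, presumably because allotments are rounded to chunk boundaries. A decode of the numbers from the line above:

    pools = {
        #           (allotted,    used)   -- bytes, copied from the log line
        "kv":       (1207959552,  2144),
        "kv_onode": (234881024,   464),
        "meta":     (1140850688,  1303946),
        "data":     (218103808,   16375808),
    }
    cache_size = 2845415832
    for name, (alloc, used) in pools.items():
        print(f"{name:8s} {alloc / 2**20:7.1f} MiB allotted, "
              f"{used:>9d} B used ({100 * used / alloc:6.3f}%)")
    total_alloc = sum(a for a, _ in pools.values())
    print(f"allotted total {total_alloc / 2**20:.1f} MiB "
          f"of a {cache_size / 2**20:.1f} MiB budget")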
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:05.539535+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:06.539922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:07.540245+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:08.540744+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:09.541117+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:10.541661+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:11.542166+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:12.542698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:13.543079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:14.543585+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:15.543991+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:16.544450+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 44630016 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:17.544823+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:18.545363+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:19.545806+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:20.546216+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:21.546611+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:22.547072+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:23.547528+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:24.547875+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:25.548189+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:26.548576+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:27.549101+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:28.549507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:29.549960+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:30.550473+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:31.550887+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:32.551339+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:33.551685+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:34.552293+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:35.552680+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:36.553079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114761728 unmapped: 44621824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:37.553635+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:38.555087+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:39.555547+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:40.556118+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:41.556591+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:42.557008+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:43.558180+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:44.559269+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:45.559680+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:46.561003+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:47.561686+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:48.562810+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:49.563087+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:50.563528+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:51.563943+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:52.564505+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:53.564921+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:54.565289+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:55.565764+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:56.566219+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:57.566677+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 44613632 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:58.566955+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:03:59.567326+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:00.567627+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:01.568152+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:02.568652+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:03.569068+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:04.569527+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:05.569970+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:06.570279+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:07.570700+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:08.571136+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:09.571476+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303946 data_alloc: 218103808 data_used: 16375808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 heartbeat osd_stat(store_statfs(0x4f9685000/0x0/0x4ffc00000, data 0x1f1d6b4/0x1fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:10.571883+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 44597248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:11.572282+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
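handle_osd_map shows the OSD catching up on cluster maps: it holds epoch 134, the sender shipped epochs [134,135] and keeps the full history [1,135], so only epoch 135 remains to be applied. A sketch of that bookkeeping (not Ceph's code, just the arithmetic the line implies):

    def epochs_to_apply(have: int, first: int, last: int) -> range:
        """Given the newest epoch we already have and the [first, last]
        range in an incoming map message, return the epochs to apply."""
        start = max(have + 1, first)
        return range(start, last + 1)

    # From the line above: "epochs [134,135], i have 134" -> apply only 135.
    print(list(epochs_to_apply(have=134, first=134, last=135)))   # [135]
    # Later in this log: "epochs [136,136], i have 135" -> apply 136.
    print(list(epochs_to_apply(have=135, first=136, last=136)))   # [136]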
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 165.190155029s of 165.937301636s, submitted: 106
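The _kv_sync_thread line is easy to sanity-check: the sync thread was idle 165.190 s out of 165.937 s (~99.55%), i.e. busy only ~0.75 s while committing 106 transactions (~0.64 tx/s) — consistent with an OSD that is essentially quiescent apart from heartbeats and map updates. The arithmetic:

    idle, total, submitted = 165.190155029, 165.937301636, 106
    busy = total - idle
    print(f"idle  {100 * idle / total:6.2f}%")        # ~99.55%
    print(f"busy  {busy:.3f} s over {total:.1f} s")   # ~0.747 s
    print(f"rate  {submitted / total:.2f} tx/s")      # ~0.64 tx/s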
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 44589056 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:12.572740+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 135 heartbeat osd_stat(store_statfs(0x4f9681000/0x0/0x4ffc00000, data 0x1f1f231/0x1fec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 44556288 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:13.573136+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 ms_handle_reset con 0x55b4161eb400 session 0x55b4195c2d20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:14.573639+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:15.574034+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:16.574480+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:17.574995+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:18.575329+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:19.575816+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:20.576070+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:21.576870+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:22.577704+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:23.578123+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:24.578562+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:25.579018+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:26.579629+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:27.579876+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 44548096 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:28.580291+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:29.580519+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:30.580900+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:31.581289+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:32.581864+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 44539904 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:33.582283+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 44531712 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:34.582623+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:35.583074+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:36.583522+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 44515328 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:37.583891+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:38.584244+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:39.584545+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:40.584894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:41.585179+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:42.585631+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:43.585944+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:44.586272+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:45.586523+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:46.586858+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:47.587235+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 44507136 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:48.587580+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:49.587935+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:50.588158+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:51.588609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f967c000/0x0/0x4ffc00000, data 0x1f20dd1/0x1ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:52.589049+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:53.589468+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 44498944 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:54.589880+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 114892800 unmapped: 44490752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:55.590268+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315220 data_alloc: 218103808 data_used: 16384000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.528549194s of 44.705825806s, submitted: 22
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 43466752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:56.590623+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b4191a6c00 session 0x55b41958c1e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41965f400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 43466752 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b41965f400 session 0x55b419509680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:57.591030+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 ms_handle_reset con 0x55b4196a9c00 session 0x55b419678d20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9678000/0x0/0x4ffc00000, data 0x1f22d71/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 43622400 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:58.591258+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 138 ms_handle_reset con 0x55b4196a9c00 session 0x55b419679860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:04:59.591721+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:00.592073+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324238 data_alloc: 218103808 data_used: 16392192
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:01.592564+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:02.592869+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:03.593353+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 43573248 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:04.593763+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 43565056 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:05.594098+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f2451f/0x1ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325268 data_alloc: 218103808 data_used: 16392192
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41cb0c000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b4195921e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b41cb0c1e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41965f400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41965f400 session 0x55b417accd20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:06.594470+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b416f3eb40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b419679680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:07.594837+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:08.595232+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 43597824 heap: 159383552 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:09.595659+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b4195085a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9674000/0x0/0x4ffc00000, data 0x1f25f82/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4196a9c00 session 0x55b4176ea960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.985056877s of 13.554063797s, submitted: 80
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b41c9754a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b41cb0c960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b418afd680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4195083c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b41c974f00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 46579712 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4196a9c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:10.595916+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4196a9c00 session 0x55b41c974780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b41961b4a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516733 data_alloc: 218103808 data_used: 16392192
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f9675000/0x0/0x4ffc00000, data 0x1f25f82/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b416ef4d20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4195ef2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41958c780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:11.596230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:12.596730+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:13.597224+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e41000/0x0/0x4ffc00000, data 0x3757ff3/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:14.597570+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418aea000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418aea000 session 0x55b418b294a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:15.597941+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516617 data_alloc: 218103808 data_used: 16392192
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b4196083c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b41779c960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:16.598323+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 46678016 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41779de00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e41000/0x0/0x4ffc00000, data 0x3757ff3/0x382d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b418b4d680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416d42f00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:17.598629+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 47177728 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b4195c2960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b419678b40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:18.598839+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 47169536 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:19.599097+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 47161344 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:20.599324+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 47161344 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523859 data_alloc: 218103808 data_used: 16404480
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:21.599628+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 47153152 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:22.599894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 42033152 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:23.600320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 36945920 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:24.600530+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:25.600753+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:26.601001+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:27.601235+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128401408 unmapped: 34660352 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:28.605324+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:29.605858+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:30.606023+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:31.606239+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:32.606507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:33.606720+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:34.606957+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:35.607155+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:36.607331+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:37.607502+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:38.607695+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:39.608077+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:40.608651+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128434176 unmapped: 34627584 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:41.608853+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:42.609112+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:43.609313+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:44.609522+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:45.609715+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:46.609929+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7e3e000/0x0/0x4ffc00000, data 0x3758049/0x3830000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:47.610135+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:48.610428+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:49.610772+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:50.611084+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 34594816 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701139 data_alloc: 234881024 data_used: 36032512
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.761749268s of 41.352859497s, submitted: 93
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b418b4c5a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417accd20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:51.611466+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17400 session 0x55b41958d860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418b4cb40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 34406400 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416bb9a40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f76a9000/0x0/0x4ffc00000, data 0x3eed049/0x3fc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:52.611952+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 27148288 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:53.612162+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 27148288 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b416ef5e00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417c96f00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17800 session 0x55b4195090e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41958c000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:54.612496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b416a62780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b419593a40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b418b4cf00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 24731648 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17800 session 0x55b4195921e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41a20f0e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ef1000/0x0/0x4ffc00000, data 0x56a4059/0x577d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:55.612687+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 21872640 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1977734 data_alloc: 234881024 data_used: 37883904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:56.612970+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141303808 unmapped: 21757952 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:57.613407+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 21716992 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:58.613640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e54000/0x0/0x4ffc00000, data 0x5741059/0x581a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 21716992 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e54000/0x0/0x4ffc00000, data 0x5741059/0x581a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b418b4d680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:05:59.613844+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141762560 unmapped: 21299200 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:00.614143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1985883 data_alloc: 234881024 data_used: 38105088
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:01.614406+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41cb0c5a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b417cafc20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:02.614655+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 21291008 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418b17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418b17000 session 0x55b418af72c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.103260994s of 12.145826340s, submitted: 212
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418a6f0e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:03.614936+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 143540224 unmapped: 19521536 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:04.615102+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 16056320 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5de1000/0x0/0x4ffc00000, data 0x57b307c/0x588d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:05.615304+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147046400 unmapped: 16015360 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2051411 data_alloc: 251658240 data_used: 44470272
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:06.615543+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148275200 unmapped: 14786560 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5de1000/0x0/0x4ffc00000, data 0x57b307c/0x588d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:07.615788+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149921792 unmapped: 13139968 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:08.615964+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 156483584 unmapped: 6578176 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:09.616218+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b418afc5a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b41a3a81e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 5251072 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:10.616437+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b417acbe00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959003 data_alloc: 251658240 data_used: 44462080
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:11.616647+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:12.616977+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151175168 unmapped: 11886592 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:13.617341+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695f000/0x0/0x4ffc00000, data 0x4c36049/0x4d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:14.617727+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.932250023s of 12.194139481s, submitted: 54
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:15.617975+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:16.618346+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:17.618757+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:18.619123+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:19.619536+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:20.619925+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:21.620261+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 11853824 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:22.620560+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:23.620840+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:24.621127+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 11845632 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978276253s of 10.003231049s, submitted: 3
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:25.621415+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 12328960 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959223 data_alloc: 251658240 data_used: 44462080
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:26.621792+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 12328960 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:27.622127+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 12288000 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:28.622450+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150781952 unmapped: 12279808 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:29.622767+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150781952 unmapped: 12279808 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:30.623094+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 12271616 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1959719 data_alloc: 251658240 data_used: 44470272
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:31.623466+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 12271616 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f695d000/0x0/0x4ffc00000, data 0x4c39049/0x4d11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:32.623846+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:33.624139+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:34.624464+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6957000/0x0/0x4ffc00000, data 0x4c3f049/0x4d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:35.624753+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 12263424 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1960287 data_alloc: 251658240 data_used: 44482560
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.874515533s of 10.945261955s, submitted: 10
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b419609680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b418b58b40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:36.625080+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151461888 unmapped: 11599872 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b418b4d4a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:37.625290+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 11640832 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:38.627638+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 11640832 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:39.627870+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150274048 unmapped: 12787712 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:40.628058+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 12779520 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2016787 data_alloc: 251658240 data_used: 45211648
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:41.628238+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 12779520 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:42.628515+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:43.628756+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:44.629044+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b416bb83c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b4195ef4a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 12771328 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:45.629239+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b417caef00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6310000/0x0/0x4ffc00000, data 0x5286049/0x535e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:46.629473+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:47.629773+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:48.630114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:49.630522+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:50.630879+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f71c3000/0x0/0x4ffc00000, data 0x43d4016/0x44aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:51.631286+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:52.631580+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:53.631782+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:54.632185+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f71c3000/0x0/0x4ffc00000, data 0x43d4016/0x44aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:55.632594+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1830524 data_alloc: 234881024 data_used: 38776832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:56.632863+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.567926407s of 21.020404816s, submitted: 99
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:57.633267+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:58.633665+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 17326080 heap: 163061760 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:06:59.633951+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41950a3c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f665a000/0x0/0x4ffc00000, data 0x4f3e016/0x5014000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:00.634201+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1919852 data_alloc: 234881024 data_used: 38776832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:01.634458+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:02.634834+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4161eb400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4161eb400 session 0x55b41a3a92c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417715800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417715800 session 0x55b418afdc20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16000 session 0x55b41961b680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145358848 unmapped: 35069952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b419592960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:03.635277+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b418b290e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41958cb40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b4191a6c00 session 0x55b418b583c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b41cb0d860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b41cb0c960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:04.635654+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b41961b680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e5c000/0x0/0x4ffc00000, data 0x573a088/0x5812000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16800 session 0x55b41961a960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:05.635881+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41961a5a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 35225600 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1996427 data_alloc: 234881024 data_used: 38735872
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b417acd680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:06.636061+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b4191a6c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:07.636445+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:08.636699+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b417caeb40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 35266560 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b416f3e3c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:09.636926+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e31000/0x0/0x4ffc00000, data 0x5764098/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.122673988s of 12.545958519s, submitted: 72
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 144531456 unmapped: 35897344 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:10.637253+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17c00 session 0x55b41a20fa40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b41a20e3c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 144547840 unmapped: 35880960 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2020628 data_alloc: 234881024 data_used: 41746432
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:11.637450+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 146472960 unmapped: 33955840 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:12.637676+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 32260096 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2e000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:13.638346+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 32227328 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:14.638547+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 32227328 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:15.638745+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 32071680 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2082172 data_alloc: 251658240 data_used: 50143232
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:16.638935+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 31399936 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:17.639146+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 31006720 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:18.639367+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 27656192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:19.639604+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154820608 unmapped: 25608192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:20.639859+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154820608 unmapped: 25608192 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:21.640163+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.759449005s of 11.826215744s, submitted: 11
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154836992 unmapped: 25591808 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:22.640568+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154836992 unmapped: 25591808 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:23.640761+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:24.640999+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:25.641235+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:26.641469+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:27.641698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25550848 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:28.641922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:29.642275+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:30.642563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:31.642784+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2136412 data_alloc: 251658240 data_used: 57307136
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x57650bb/0x583f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:32.643086+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:33.643508+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154910720 unmapped: 25518080 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:34.643794+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.154161453s of 13.161978722s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5800 session 0x55b416ef5860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:35.644131+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:36.644321+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2202704 data_alloc: 251658240 data_used: 57307136
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:37.644607+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:38.644996+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157532160 unmapped: 22896640 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:39.645306+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:40.645632+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:41.645923+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2202704 data_alloc: 251658240 data_used: 57307136
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:42.646465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22888448 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:43.646716+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 22880256 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5c00 session 0x55b419508b40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:44.646923+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5698000/0x0/0x4ffc00000, data 0x5efc0bb/0x5fd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,7])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 158302208 unmapped: 22126592 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:45.647114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 164069376 unmapped: 16359424 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.382223129s of 10.757692337s, submitted: 94
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b4176eb2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b418a48960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:46.647443+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 27901952 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2026518 data_alloc: 234881024 data_used: 41459712
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b418873c00 session 0x55b418afc960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:47.647635+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 27885568 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:48.647822+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 27885568 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:49.648050+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 26714112 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ed7000/0x0/0x4ffc00000, data 0x56bd027/0x5794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:50.648515+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 26714112 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5400 session 0x55b4196794a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:51.650501+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 28983296 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1966314 data_alloc: 234881024 data_used: 41656320
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b41b8f5800 session 0x55b418a6f680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:52.652249+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 28983296 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:53.652507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 27951104 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f666e000/0x0/0x4ffc00000, data 0x4f29027/0x5000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5b5e000/0x0/0x4ffc00000, data 0x5a31027/0x5b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:54.652714+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154107904 unmapped: 26320896 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:55.652966+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 154173440 unmapped: 26255360 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.284941673s of 10.033411026s, submitted: 154
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:56.653359+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152911872 unmapped: 27516928 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2061334 data_alloc: 234881024 data_used: 41906176
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:57.653870+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:58.654124+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ae3000/0x0/0x4ffc00000, data 0x5ab3027/0x5b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:07:59.654537+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:00.654841+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:01.655094+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2061526 data_alloc: 234881024 data_used: 41910272
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f5ae3000/0x0/0x4ffc00000, data 0x5ab3027/0x5b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:02.655528+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 27508736 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b419224400 session 0x55b418b29c20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:03.656175+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 153067520 unmapped: 27361280 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6ad3000/0x0/0x4ffc00000, data 0x472cfb5/0x4802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b416bb4400 session 0x55b41950ab40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:04.656445+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:05.656692+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:06.657044+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859028 data_alloc: 234881024 data_used: 34693120
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:07.657325+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:08.657636+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 32997376 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f6ad3000/0x0/0x4ffc00000, data 0x472cf92/0x4801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:09.657977+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147439616 unmapped: 32989184 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:10.658603+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147439616 unmapped: 32989184 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:11.659023+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.170475960s of 15.876229286s, submitted: 80
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16c00 session 0x55b418b4d2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c17000 session 0x55b417b532c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 147447808 unmapped: 32980992 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858320 data_alloc: 234881024 data_used: 34693120
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:12.659875+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 ms_handle_reset con 0x55b417c16400 session 0x55b41c9743c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 heartbeat osd_stat(store_statfs(0x4f7c6e000/0x0/0x4ffc00000, data 0x392bf92/0x3a00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:13.660207+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:14.660457+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:15.660711+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 40435712 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:16.661074+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c6a000/0x0/0x4ffc00000, data 0x392db0f/0x3a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691332 data_alloc: 234881024 data_used: 26165248
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:17.661731+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:18.662134+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:19.662495+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:20.662789+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:21.663103+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.873231888s of 10.174868584s, submitted: 44
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 40460288 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416e47c20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b419679680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1690696 data_alloc: 234881024 data_used: 26165248
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c6a000/0x0/0x4ffc00000, data 0x392db0f/0x3a03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b4196790e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419224400 session 0x55b41958d860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b41a20e000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:22.663433+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41cb0cf00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41961b860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138649600 unmapped: 41779200 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b418b285a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b416bb8960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b4191a6c00 session 0x55b417caef00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419225c00 session 0x55b41958c780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:23.663730+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138649600 unmapped: 41779200 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:24.663960+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41961b680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8f10000/0x0/0x4ffc00000, data 0x220caff/0x22e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:25.664169+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:26.664610+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 51625984 heap: 180428800 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437130 data_alloc: 218103808 data_used: 14290944
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8f10000/0x0/0x4ffc00000, data 0x220caff/0x22e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:27.664821+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 54796288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41a20f2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b416ef4f00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:28.665203+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b4176eba40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 54476800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c17000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b418a6f0e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:29.665473+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b4196794a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:30.665898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8534000/0x0/0x4ffc00000, data 0x2c54b61/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419225c00 session 0x55b419679680
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8534000/0x0/0x4ffc00000, data 0x2c54b61/0x2d2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419224400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419224400 session 0x55b4196790e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:31.666113+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528772 data_alloc: 218103808 data_used: 14290944
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:32.666355+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 54501376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.741597176s of 11.463397026s, submitted: 94
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c17000 session 0x55b418b294a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b419678960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8532000/0x0/0x4ffc00000, data 0x2c54b94/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:33.666630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 55623680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416bc0960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:34.667035+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 55631872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:35.667544+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 55779328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:36.667777+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 128868352 unmapped: 55762944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549376 data_alloc: 218103808 data_used: 19161088
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:37.668143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:38.668602+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:39.668954+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:40.669308+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:41.669641+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:42.670059+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:43.670548+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:44.674822+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:45.675079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:46.675449+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:47.675761+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:48.676140+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:49.677063+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:50.677441+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:51.678024+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:52.678611+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:53.679006+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:54.679274+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:55.679586+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:56.680061+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:57.680480+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:58.680952+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:08:59.681263+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:00.681633+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:01.681988+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 54583296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:02.682289+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:03.682492+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:04.682890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:05.683199+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:06.683554+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578816 data_alloc: 234881024 data_used: 22302720
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:07.683836+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:08.684241+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.585948944s of 35.731628418s, submitted: 20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x294bb94/0x2a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 54575104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:09.684704+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139722752 unmapped: 44908544 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:10.684957+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7cc0000/0x0/0x4ffc00000, data 0x34c6b94/0x359e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 47013888 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:11.685503+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 47013888 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1669322 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:12.685957+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 46915584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:13.686715+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c21000/0x0/0x4ffc00000, data 0x3565b94/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:14.687108+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:15.687550+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:16.687842+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673848 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:17.688157+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:18.688656+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:19.689183+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.388254166s of 10.945773125s, submitted: 128
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c21000/0x0/0x4ffc00000, data 0x3565b94/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:20.689566+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:21.689953+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:22.690590+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:23.691033+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:24.691469+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:25.691704+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 46628864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:26.692102+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:27.692607+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:28.693049+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:29.693544+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:30.693919+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:31.694295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:32.694613+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:33.694963+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:34.695295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:35.695753+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:36.696226+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:37.696479+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:38.696898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:39.697301+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:40.697705+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:41.697939+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 46612480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674064 data_alloc: 234881024 data_used: 23543808
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:42.698298+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:43.698742+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.028068542s of 24.047372818s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:44.699143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:45.699557+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:46.699894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2346 writes, 8456 keys, 2346 commit groups, 1.0 writes per commit group, ingest: 8.13 MB, 0.01 MB/s
                                            Interval WAL: 2346 writes, 966 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:47.700203+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138027008 unmapped: 46604288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:48.700778+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:49.701188+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:50.701571+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:51.701934+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:52.702474+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:53.702926+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 46596096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: mgrc ms_handle_reset ms_handle_reset con 0x55b417714800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2078717049
Oct 02 20:32:20 compute-0 ceph-osd[206053]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2078717049,v1:192.168.122.100:6801/2078717049]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: get_auth_request con 0x55b417c17000 auth_method 0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: mgrc handle_mgr_configure stats_period=5
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:54.703320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:55.703685+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:56.704019+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:57.704330+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:58.704666+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:09:59.705036+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:00.705545+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41ad9a400 session 0x55b41cb0de00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419225c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:01.706004+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:02.706670+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:03.706908+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:04.707088+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:05.707579+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:06.707831+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:07.708339+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:08.708797+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:09.709215+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:10.709507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:11.709961+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:12.710475+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:13.710804+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:14.711007+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 46776320 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:15.711241+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:16.711679+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:17.712034+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:18.712283+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:19.712584+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:20.712825+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:21.713821+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:22.714595+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137863168 unmapped: 46768128 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:23.714817+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:24.715168+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:25.715606+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 46751744 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:26.715911+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:27.716230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:28.716585+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:29.716833+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:30.717078+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 46743552 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:31.717548+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:32.717922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:33.718178+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:34.718440+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:35.718644+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:36.718879+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:37.719449+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:38.720187+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 46735360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:39.720939+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:40.721718+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:41.723122+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:42.724896+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:43.726640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:44.727873+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:45.728652+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137904128 unmapped: 46727168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:46.729328+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:47.730091+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:48.730859+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:49.731633+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:50.732352+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:51.733195+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:52.735213+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:53.736596+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 46718976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:54.738118+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:55.739131+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:56.739851+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:57.740632+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:58.742197+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:10:59.743967+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:00.745194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:01.746162+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:02.747057+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 46710784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673136 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5c00 session 0x55b418afd0e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a000 session 0x55b41958d860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a400 session 0x55b41958c780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41958cb40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b417c16c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.477409363s of 79.490234375s, submitted: 2
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:03.748441+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x3567b94/0x363f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 43442176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:04.749886+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c16c00 session 0x55b41958c5a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5c00 session 0x55b416ef4000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41c60a000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41c60a000 session 0x55b41cb0cf00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660800
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660800 session 0x55b41a20e000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b41a20f2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:05.751689+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:06.752977+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:07.754744+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706622 data_alloc: 234881024 data_used: 23547904
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:08.755698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7877000/0x0/0x4ffc00000, data 0x390eba4/0x39e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:09.756265+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 46637056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:10.756495+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7877000/0x0/0x4ffc00000, data 0x390eba4/0x39e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 46620672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660c00 session 0x55b41961b860
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:11.756801+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419661000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:12.757123+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1710869 data_alloc: 234881024 data_used: 23556096
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:13.757345+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 46448640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:14.757515+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:15.757721+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:16.758293+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b417c8ec00 session 0x55b418af63c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419661c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:17.758702+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1729749 data_alloc: 234881024 data_used: 26247168
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:18.758903+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:19.759107+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:20.759365+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:21.759620+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:22.759873+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138584064 unmapped: 46047232 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1729749 data_alloc: 234881024 data_used: 26247168
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:23.760100+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.415521622s of 20.502935410s, submitted: 19
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138575872 unmapped: 46055424 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:24.760354+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138608640 unmapped: 46022656 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:25.760852+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 46014464 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:26.761245+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138665984 unmapped: 45965312 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:27.761942+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138698752 unmapped: 45932544 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:28.762874+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:29.763318+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:30.763516+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:31.763752+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:32.764162+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 45907968 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:33.764572+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:34.768963+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:35.770095+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:36.770856+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:37.771859+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:38.773038+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 45899776 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:39.773494+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:40.773693+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:41.774125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 45891584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:42.774447+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1731829 data_alloc: 234881024 data_used: 26300416
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:43.774643+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:44.774849+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f784d000/0x0/0x4ffc00000, data 0x3938ba4/0x3a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:45.775046+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 45883392 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.649646759s of 22.576417923s, submitted: 132
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:46.775340+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f76a3000/0x0/0x4ffc00000, data 0x3ae2ba4/0x3bbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 44638208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:47.775607+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1751883 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:48.776060+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f769e000/0x0/0x4ffc00000, data 0x3ae6ba4/0x3bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:49.776310+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:50.776624+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:51.776894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:52.777177+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:53.777671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:54.778025+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:55.778465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:56.778834+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:57.779224+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:58.779630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:11:59.779862+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:00.780200+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:01.780425+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:02.780710+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:03.780938+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140025856 unmapped: 44605440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:04.781194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:05.781499+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:06.781661+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:07.782014+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:08.782287+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:09.782680+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:10.782912+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:11.783174+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:12.783496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:13.783768+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 44597248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:14.784193+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:15.784496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:16.784725+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:17.785079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:18.785438+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:19.785734+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:20.785990+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 44589056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:21.786230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:22.786513+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:23.786731+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:24.787958+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:25.788308+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:26.788906+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:27.789143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:28.789518+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 44580864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:29.789760+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:30.790321+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:31.793916+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:32.794740+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:33.794987+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:34.795230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:35.795599+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:36.795986+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140058624 unmapped: 44572672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:37.796333+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:38.796688+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:39.797032+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:40.797252+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 44564480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:41.797469+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:42.797714+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:43.798062+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:44.798264+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:45.798545+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:46.798829+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:47.799125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:48.799624+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:49.799933+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:50.800290+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 44556288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:51.800628+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140083200 unmapped: 44548096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:52.800938+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140083200 unmapped: 44548096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:53.801283+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140091392 unmapped: 44539904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:54.801773+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140091392 unmapped: 44539904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:55.802153+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:56.802605+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:57.803022+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:58.803512+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:12:59.803854+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:00.804244+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:01.804483+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:02.804870+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:03.805296+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:04.805577+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:05.806004+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:06.806474+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:07.806868+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:08.807167+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:09.807563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:10.807913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:11.808248+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:12.808615+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:13.808894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:14.809152+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:15.809607+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:16.810184+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 44523520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:17.810609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:18.811088+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:19.811323+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:20.811662+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:21.811925+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:22.812293+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:23.812584+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:24.812881+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:25.813277+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:26.813627+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:27.813859+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:28.814141+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:29.814433+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:30.814726+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:31.815004+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:32.815359+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 44531712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:33.815696+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:34.816048+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:35.816553+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:36.816972+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:37.817262+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:38.817755+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:39.818125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:40.818364+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:41.818727+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140115968 unmapped: 44515328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:42.819034+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:43.819326+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:44.819614+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:45.820142+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:46.820507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:47.820771+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:48.821171+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:49.821581+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:50.821775+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140124160 unmapped: 44507136 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:51.821955+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:52.822295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:53.822726+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:54.822923+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:55.823257+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:56.823646+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140132352 unmapped: 44498944 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:57.824033+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:58.824279+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:13:59.824533+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:00.824758+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:01.824981+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:02.825239+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:03.825535+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:04.825900+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:05.826240+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:06.826605+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 44490752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:07.826828+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:08.827205+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760183 data_alloc: 234881024 data_used: 26476544
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:09.827583+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:10.828046+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:11.828466+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:12.828864+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b4196a9000 session 0x55b416bc0b40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:13.829126+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b41b8f5c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140148736 unmapped: 44482560 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760663 data_alloc: 234881024 data_used: 26554368
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:14.829598+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7692000/0x0/0x4ffc00000, data 0x3af2ba4/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:15.829830+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:16.830214+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:17.830504+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:18.830913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760663 data_alloc: 234881024 data_used: 26554368
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:19.831199+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 153.187469482s of 153.408279419s, submitted: 21
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:20.831465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af4ba4/0x3bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:21.831721+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:22.832171+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:23.832551+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af4ba4/0x3bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 nova_compute[355794]: 2025-10-02 20:32:20.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760847 data_alloc: 234881024 data_used: 26554368
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:24.832851+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:25.833182+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:26.833476+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:27.833802+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:28.834101+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760847 data_alloc: 234881024 data_used: 26554368
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:29.834349+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af4ba4/0x3bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:30.834588+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:31.834913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:32.835246+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:33.835580+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1761007 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:34.835982+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af4ba4/0x3bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:35.836446+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.862440109s of 15.872762680s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:36.836808+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:37.837186+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:38.837503+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:39.837795+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:40.838083+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:41.838508+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:42.838961+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:43.839348+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:44.839563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:45.839913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:46.840273+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:47.840561+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:48.841348+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:49.841659+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:50.841914+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:51.842303+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:52.842780+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x3af5ba4/0x3bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:53.843188+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 44466176 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759827 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:54.843542+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.933099747s of 18.943176270s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 44474368 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:55.843873+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:56.844156+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:57.844565+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:58.844879+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759959 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:14:59.845150+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:00.845607+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:01.845916+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:02.846313+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 44457984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:03.846595+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759959 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:04.846967+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.221395493s of 10.239365578s, submitted: 2
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:05.847320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:06.847667+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:07.848063+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:08.848488+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:09.848775+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:10.849025+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:11.849491+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:12.851223+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:13.851757+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:14.852188+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:15.852480+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140181504 unmapped: 44449792 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:16.852685+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:17.852934+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:18.853128+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140189696 unmapped: 44441600 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:19.853628+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:20.853895+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:21.854318+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:22.858244+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:23.858698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:24.859108+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:25.859590+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:26.859906+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:27.860189+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:28.860436+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 44433408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:29.860654+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140206080 unmapped: 44425216 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:30.860844+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140206080 unmapped: 44425216 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:31.861293+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:32.861747+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:33.862161+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:34.863294+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:35.863695+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:36.864091+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:37.864650+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:38.864946+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:39.865448+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:40.868339+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:41.868823+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 44417024 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:42.869325+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:43.869767+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:44.870358+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:45.870864+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:46.871186+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:47.871440+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:48.871890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:49.872461+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:50.872812+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:51.873221+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:52.873712+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:53.874193+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:54.874972+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:55.875647+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:56.876115+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:57.876775+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140222464 unmapped: 44408832 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:58.877086+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:15:59.877323+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:00.877557+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:01.877853+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:02.878333+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:03.878638+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:04.878976+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 44400640 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:05.879275+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:06.879581+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:07.880083+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:08.880443+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:09.880706+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:10.881126+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:11.881528+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:12.881905+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:13.882494+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140238848 unmapped: 44392448 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:14.882990+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:15.884072+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:16.884460+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:17.884714+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:18.884944+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:19.885140+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:20.885346+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:21.885528+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:22.885767+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:23.886015+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:24.886267+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140247040 unmapped: 44384256 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:25.886640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:26.886878+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:27.887076+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140255232 unmapped: 44376064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:28.887356+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:29.887782+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:30.888133+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:31.888630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:32.889020+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:33.889473+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:34.889862+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:35.890262+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:36.890579+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:37.891018+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:38.891538+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:39.891923+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:40.892352+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:41.892666+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 44367872 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:42.893023+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:43.893463+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:44.893676+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1760135 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:45.893967+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:46.894217+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:47.894524+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768e000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:48.894796+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.154281616s of 104.162818909s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:49.895165+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:50.895655+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:51.895929+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:52.896503+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140271616 unmapped: 44359680 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:53.896883+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:54.897255+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:55.897552+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:56.897988+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:57.898609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:58.899647+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:16:59.899972+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:00.900556+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:01.900866+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:02.901630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 44351488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:03.901997+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:04.902583+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:05.902933+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:06.903285+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 44343296 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:07.903764+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:08.904806+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:09.905119+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:10.905667+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:11.906020+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:12.906563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 rsyslogd[187702]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:13.907039+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:14.907612+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:15.908084+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:16.908640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 44335104 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:17.909113+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:18.909450+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:19.909949+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:20.910325+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:21.910786+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:22.911151+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:23.911628+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:24.912114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759783 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:25.912508+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 44326912 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:26.912963+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:27.913332+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:28.913717+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140312576 unmapped: 44318720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768f000/0x0/0x4ffc00000, data 0x3af6ba4/0x3bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:29.914140+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.837165833s of 40.845638275s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:30.914542+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:31.914869+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:32.915298+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:33.915706+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:34.915938+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:35.916271+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:36.916579+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:37.916929+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:38.917296+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:39.917646+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:40.918005+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:41.918472+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:42.918917+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:43.919253+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:44.919701+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:45.920140+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:46.920694+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:47.920927+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:48.921254+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:49.921666+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:50.921990+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:51.922746+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:52.923121+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:53.923592+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:54.923915+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:55.924304+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:56.924671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:57.925069+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:58.925481+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:17:59.925872+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 44310528 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:00.926121+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:01.926658+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:02.928022+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:03.929022+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:04.929363+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:05.929983+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:06.930336+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:07.930891+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:08.931320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:09.931686+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:10.932139+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:11.932563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:12.933030+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:13.933345+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 44302336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:14.933710+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:15.934090+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:16.934484+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:17.935006+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:18.935436+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:19.936019+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:20.936264+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:21.936709+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140337152 unmapped: 44294144 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:22.937278+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:23.937642+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:24.938036+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:25.938562+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:26.938839+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:27.939222+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:28.939595+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:29.939830+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:30.940225+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:31.940592+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:32.941079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:33.941540+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:34.941856+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:35.942097+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:36.942489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:37.942985+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140353536 unmapped: 44277760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:38.943349+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:39.943760+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:40.944224+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:41.944514+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:42.944913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:43.945493+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:44.946708+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:45.947126+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:46.947429+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:47.947873+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 44269568 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:48.948336+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:49.948846+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:50.949554+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:51.950025+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:52.950497+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:53.950698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:54.950993+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140369920 unmapped: 44261376 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:55.951227+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:56.951587+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:57.952023+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:58.952476+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:18:59.952768+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:00.953671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:01.954114+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:02.954646+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:03.954907+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:04.955193+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:05.955683+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:06.956108+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:07.956680+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:08.957124+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 44253184 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:09.957630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 44244992 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:10.958002+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:11.958582+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:12.959061+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:13.959560+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:14.959962+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:15.960785+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:16.961098+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:17.961557+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:18.961811+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:19.962149+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:20.962455+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:21.962830+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:22.963425+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 44236800 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:23.963883+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:24.964290+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:25.964830+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:26.965131+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:27.965562+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:28.965982+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 44228608 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:29.966301+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:30.966748+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:31.967146+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:32.967470+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:33.967698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 44220416 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:34.968159+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:35.968461+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:36.968790+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:37.969200+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:38.969613+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:39.970511+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:40.971099+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:41.971563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:42.971988+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:43.972517+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:44.973149+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:45.973619+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:46.974192+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2997 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 347 writes, 721 keys, 347 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                            Interval WAL: 347 writes, 168 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
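
The periodic DUMPING STATS block is the one place in this window that reports rates rather than repeated instantaneous values, and its figures are internally consistent: 347 interval WAL writes over 168 syncs is the printed 2.07 writes per sync, 4200.1 s of uptime over 600 s intervals makes this the seventh dump since the OSD started, and 0.42 MB ingested over 600 s rounds down to the printed 0.00 MB/s. Recomputing those (numbers copied from the dump above):

    uptime_s, interval_s = 4200.1, 600.0
    wal_writes, wal_syncs = 347, 168
    ingest_mb = 0.42

    print(f"writes per sync:   {wal_writes / wal_syncs:.2f}")       # 2.07
    print(f"dumps since start: {uptime_s // interval_s:.0f}")       # 7
    print(f"ingest rate:       {ingest_mb / interval_s:.4f} MB/s")  # 0.0007 -> prints as 0.00
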
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:47.974646+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:48.975086+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:49.975670+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140419072 unmapped: 44212224 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:50.976030+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:51.976648+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:52.977166+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:53.977676+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:54.977907+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:55.978243+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:56.978644+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:57.978919+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 44204032 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:58.979252+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:19:59.979498+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:00.979823+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:01.980163+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:02.980642+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:03.980998+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:04.981465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:05.981903+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 44195840 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:06.982259+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:07.982671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:08.983087+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:09.983554+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:10.983955+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:11.984351+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:12.985073+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:13.985684+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:14.986490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:15.987053+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:16.987720+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:17.988316+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:18.988852+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:19.989203+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 44187648 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:20.989646+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 44179456 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:21.989987+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 44179456 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:22.990578+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:23.990917+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:24.991147+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:25.991640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:26.992093+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:27.992605+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:28.993000+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:29.993463+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:30.993901+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:31.994218+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 44171264 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:32.994728+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:33.995051+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:34.995542+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:35.995814+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:36.996094+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 234881024 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:37.996769+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:38.997128+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:39.997522+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:40.997734+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:41.998079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:42.998638+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:43.999126+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:44.999603+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:45.999913+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 44163072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:47.000215+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:48.000507+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:49.000845+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:50.001237+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:51.001603+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:52.001984+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:53.002490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:54.002801+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 44154880 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:55.003194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:56.003619+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:57.004018+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1759915 data_alloc: 218103808 data_used: 26558464
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:58.004513+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f768d000/0x0/0x4ffc00000, data 0x3af7ba4/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:20:59.004859+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:00.005231+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 210.742996216s of 210.749771118s, submitted: 1
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 44146688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418873c00 session 0x55b4195ee000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b41b8f5400 session 0x55b419678d20
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:01.005794+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418afa000
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 50667520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:02.006465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b418afa000 session 0x55b419508960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:03.006865+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:04.007179+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:05.007590+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:06.007942+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:07.008332+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:08.008720+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:09.009095+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:10.009474+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:11.009808+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:12.010155+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f8cce000/0x0/0x4ffc00000, data 0x24b3b0f/0x2589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514044 data_alloc: 218103808 data_used: 14692352
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419660400 session 0x55b418b285a0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.586451530s of 11.778108597s, submitted: 32
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b419661000 session 0x55b417cb0b40
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:13.010527+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 50601984 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 ms_handle_reset con 0x55b416bb4400 session 0x55b416c7f0e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:14.010817+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:15.011043+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:16.011289+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:17.011667+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439084 data_alloc: 218103808 data_used: 11816960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:18.012459+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f9262000/0x0/0x4ffc00000, data 0x1f27aff/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:19.012789+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:20.013617+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:21.014604+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:22.015732+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439084 data_alloc: 218103808 data_used: 11816960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:23.016480+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:24.017241+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 heartbeat osd_stat(store_statfs(0x4f9262000/0x0/0x4ffc00000, data 0x1f27aff/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.601349831s of 11.746925354s, submitted: 26
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 141 ms_handle_reset con 0x55b418873c00 session 0x55b417aee960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:25.017848+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 53764096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:26.018256+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130899968 unmapped: 53731328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 142 ms_handle_reset con 0x55b419660c00 session 0x55b4195ee960
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b416bb4400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:27.018511+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 130981888 unmapped: 53649408 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452749 data_alloc: 218103808 data_used: 11833344
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 143 ms_handle_reset con 0x55b416bb4400 session 0x55b418af61e0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:28.019362+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131063808 unmapped: 53567488 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9256000/0x0/0x4ffc00000, data 0x1f2ce4a/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:29.019845+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:30.020721+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:31.021662+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:32.022110+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453725 data_alloc: 218103808 data_used: 11841536
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9255000/0x0/0x4ffc00000, data 0x1f2e8c9/0x2008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:33.022660+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:34.023357+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:35.023849+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f9255000/0x0/0x4ffc00000, data 0x1f2e8c9/0x2008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:36.024156+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:37.024592+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:38.024971+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _renew_subs
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.299741745s of 14.475193024s, submitted: 177
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:39.025624+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:40.026285+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:41.026712+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:42.026950+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:43.027471+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:44.027906+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:45.028303+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:46.028739+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:47.029117+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:48.029676+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:49.030008+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:50.030548+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:51.030890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:52.031270+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:53.031703+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:54.032055+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:55.032655+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:56.032952+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:57.033537+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:58.033923+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:21:59.034301+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:00.034644+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:01.035023+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:02.035349+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:03.035789+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:04.036113+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:05.036545+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:06.036745+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:07.037131+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:08.037626+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:09.038094+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:10.039089+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:11.039561+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:12.040001+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:13.040563+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:14.040780+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:15.041191+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:16.041749+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:17.042103+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:18.042920+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:19.043345+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:20.043657+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:21.044101+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:22.044531+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:23.044984+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:24.045475+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:25.045850+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:26.046125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:27.046582+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:28.046973+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:29.047209+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:30.047513+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:31.047862+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:32.048095+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:33.048591+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:34.049109+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:35.049842+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:36.050245+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:37.050519+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:38.050953+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:39.051535+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:40.051997+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:41.052318+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:42.052702+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:43.053177+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:44.053550+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:45.053905+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:46.054327+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:47.054574+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:48.054790+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:49.055295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:50.055657+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:51.056062+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:52.056533+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:53.056801+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:54.056987+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:55.057173+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:56.057536+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:57.057817+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:58.058034+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:22:59.058263+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:00.058540+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:01.058754+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:02.059027+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:03.059271+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:04.059533+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:05.059805+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:06.060017+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 52568064 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:07.060264+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 52510720 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:08.060493+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 52494336 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:09.060692+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 52469760 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:10.061011+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'log dump' '{prefix=log dump}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 143212544 unmapped: 41418752 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:11.061194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 52338688 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:12.061489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:13.061864+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:14.062116+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:15.062314+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:16.062488+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:17.062695+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:18.062886+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:19.063106+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:20.063315+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:21.063509+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:22.063707+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:23.064013+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:24.064257+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:25.064537+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:26.064764+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:27.065366+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:28.066489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:29.068064+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:30.069083+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:31.070101+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:32.071071+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:33.071976+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:34.072726+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:35.073517+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:36.074537+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:37.076091+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:38.077784+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:39.078358+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:40.079151+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:41.080661+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:42.081586+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:43.082043+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:44.082406+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:45.082643+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:46.082856+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:47.083178+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:48.083594+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:49.084060+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:50.084535+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:51.084894+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:52.085180+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:53.085577+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:54.085848+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:55.086322+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:56.086590+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:57.087122+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:58.087804+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:23:59.088585+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:00.088922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:01.089255+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:02.089512+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:03.089909+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:04.090188+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:05.090600+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:06.090888+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:07.091127+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:08.091519+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:09.091780+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:10.092169+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:11.092585+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:12.092852+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:13.093561+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:14.093953+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:15.094318+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:16.094652+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:17.095007+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:18.095459+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:19.095758+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:20.096146+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:21.096349+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:22.096698+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:23.097105+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:24.097550+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:25.097769+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:26.098021+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:27.098291+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:28.098689+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:29.099020+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:30.099434+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:31.099713+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:32.100003+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:33.102238+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:34.103220+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:35.103671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:36.104194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:37.105041+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:38.105498+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:39.105696+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:40.106130+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:41.106862+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:42.107611+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:43.107948+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:44.108573+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:45.108992+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:46.109241+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:47.109470+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:48.109816+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:49.110164+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:50.110613+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:51.111030+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:52.111554+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:53.111962+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:54.112184+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:55.112588+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:56.112781+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:57.113070+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:58.113489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:24:59.113910+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:00.114201+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:01.114472+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 ms_handle_reset con 0x55b419225c00 session 0x55b419678780
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b418873c00
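
The one deviation from the steady state in this burst: a messenger connection reset on one session (ms_handle_reset), followed immediately by a peer re-authenticating against this daemon (handle_auth_request added challenge). With every surrounding _check_auth_rotating reporting up-to-date secrets, this reads as a routine reconnect rather than an authentication problem. A small scanner that surfaces such pairs, with a deliberately naive pairing heuristic (adjacent in the stream; sessions are not matched):

    def reset_reauth_pairs(lines):
        # Yields (reset_line, auth_line) where a handle_auth_request follows
        # an ms_handle_reset. Usage: reset_reauth_pairs(open("osd.log")).
        pending = None
        for ln in lines:
            if "ms_handle_reset" in ln:
                pending = ln
            elif pending and "handle_auth_request" in ln:
                yield pending, ln
                pending = None
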
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:02.114874+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:03.115259+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:04.115663+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:05.115939+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:06.116319+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:07.116671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:08.117843+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:09.118184+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:10.118594+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:11.118999+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:12.119311+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:13.119781+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:14.120125+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:15.120437+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:16.120820+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
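
A single outbound monitor message also lands in the burst; its target, v2:192.168.122.100:3300/0, is a msgr2 entity address of the form protocol:ip:port/nonce, port 3300 being the standard msgr2 monitor port (the legacy msgr1 port is 6789). A trivial splitter, assuming that three-part shape and an IPv4 host:

    def split_entity_addr(addr):
        # "v2:192.168.122.100:3300/0" -> ("v2", "192.168.122.100", 3300, 0)
        proto, host, rest = addr.split(":", 2)
        port, nonce = rest.split("/")
        return proto, host, int(port), int(nonce)

    assert split_entity_addr("v2:192.168.122.100:3300/0") == (
        "v2", "192.168.122.100", 3300, 0)
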
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:17.121242+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:18.121616+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:19.121977+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:20.122271+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:21.122498+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:22.125456+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:23.125921+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:24.126198+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:25.126573+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:26.126813+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:27.127248+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:28.127660+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:29.127921+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:30.128297+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:31.128638+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:32.129002+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:33.129555+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:34.129837+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:35.130174+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:36.130584+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:37.130967+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:38.131338+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:39.131662+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:40.131941+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:41.132320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:42.132644+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:43.133077+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:44.133760+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:45.134181+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:46.134733+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:47.135151+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:48.135665+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:49.136031+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:50.136300+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:51.136691+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:52.137102+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:53.137621+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:54.137868+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:55.138179+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:56.138603+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:57.139012+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:58.139345+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:25:59.139659+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:00.140053+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:01.140489+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:02.140703+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:03.141446+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:04.141828+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:05.142222+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:06.142447+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:07.142754+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:08.143046+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:09.143260+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:10.143890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:11.144201+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:12.144589+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:13.145092+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:14.145508+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:15.145960+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:16.146485+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:17.146785+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 ms_handle_reset con 0x55b419661c00 session 0x55b4195ee3c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660400
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:18.147136+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:19.147365+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:20.147807+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:21.148183+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:22.148682+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:23.149178+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:24.149466+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:25.149937+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:26.150297+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:27.150583+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:28.150901+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:29.151324+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:30.151732+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:31.151987+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:32.152364+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:33.152768+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:34.153040+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:35.153490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:36.153889+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:37.154572+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:38.154797+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:39.156109+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:40.157070+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:41.157490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:42.158123+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:43.158753+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:44.159004+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:45.159630+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:46.159925+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:47.160463+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:48.160867+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:49.161291+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:50.161550+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:51.162295+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:52.162566+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:53.162871+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:54.163144+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:55.163440+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:56.163917+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:57.164288+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:58.164651+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:26:59.164855+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:00.165253+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:01.165708+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:02.166025+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:03.166757+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:04.167131+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:05.167657+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:06.168069+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:07.168503+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:08.168726+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:09.169035+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:10.169240+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:11.169479+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:12.169754+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:13.170145+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:14.170465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:15.170905+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:16.171201+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:17.171538+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:18.171757+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:19.171965+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:20.172300+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:21.172504+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:22.172736+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:23.172959+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:24.173334+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:25.173731+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:26.173962+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:27.174320+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:28.175577+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:29.175932+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:30.176213+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:31.176529+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:32.176907+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:33.177464+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:34.177839+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:35.178190+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:36.178756+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:37.179216+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:38.179738+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:39.180116+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:40.180658+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:41.180919+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:42.181283+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:43.182063+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:44.182594+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:45.183001+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:46.184300+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:47.186110+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:48.187717+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:49.188802+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:50.190260+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:51.192204+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:52.194214+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:53.195597+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:54.196926+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:55.198905+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:56.200748+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:57.202243+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:58.203236+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:27:59.204209+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:00.204876+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:01.205219+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:02.205585+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:03.205895+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:04.206291+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:05.206533+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:06.206908+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:07.207594+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:08.207834+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:09.208217+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 52879360 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:10.208673+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:11.208968+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:12.209174+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:13.209510+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:14.209841+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:15.210238+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:16.210619+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:17.211004+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:18.211299+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:19.211706+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:20.212079+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:21.212465+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131760128 unmapped: 52871168 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:22.212743+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:23.213208+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:24.213438+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:25.213763+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:26.214024+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:27.214474+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:28.214684+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:29.214995+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:30.215215+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:31.215511+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:32.215898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:33.216499+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:34.217130+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:35.217764+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:36.218137+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:37.218898+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:38.219321+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:39.219725+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:40.220115+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:41.220555+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:42.221057+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:43.221614+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:44.221976+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:45.222462+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:46.222696+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 52854784 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:47.223088+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 52846592 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:48.223570+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 52846592 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:49.225551+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 52846592 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:50.226717+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 52846592 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:51.227143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:52.228935+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:53.229496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:54.229822+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:55.230039+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:56.230675+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:57.231230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:58.231552+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:28:59.231921+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:00.232440+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 52838400 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:01.232675+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:02.232986+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:03.233799+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:04.234088+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:05.234612+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:06.235210+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:07.235634+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:08.236147+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:09.236487+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:10.236855+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:11.237262+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 52830208 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:12.237699+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:13.238011+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 ms_handle_reset con 0x55b41b8f5c00 session 0x55b417aef2c0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: handle_auth_request added challenge on 0x55b419660c00
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:14.238542+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:15.238935+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:16.239357+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:17.239867+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:18.240265+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 52822016 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:19.240620+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:20.241084+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:21.241536+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:22.241902+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:23.242611+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:24.243202+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:25.243550+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:26.243890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 52813824 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:27.244364+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:28.244855+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:29.245229+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:30.245611+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:31.245975+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:32.246588+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:33.247032+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:34.247638+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:35.247942+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:36.248306+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:37.248622+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 52805632 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:38.248850+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:39.249139+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:40.249557+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:41.249970+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:42.250285+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:43.250560+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:44.250954+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:45.251344+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 52797440 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:46.251706+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 52789248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3214 syncs, 3.49 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 461 writes, 1111 keys, 461 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                            Interval WAL: 461 writes, 217 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:47.252064+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 52789248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:48.252438+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 52789248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:49.252852+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 52789248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:50.253885+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 52789248 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:51.254311+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:52.254795+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:53.255176+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:54.255655+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:55.255972+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:56.256542+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:57.256922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:58.257224+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:29:59.257748+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:00.258193+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:01.258667+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:02.259152+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:03.259491+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:04.259802+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:05.260074+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 52781056 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:06.260727+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 52772864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:07.261267+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 52772864 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:08.261776+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:09.262254+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:10.262738+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:11.263178+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:12.263756+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:13.264261+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:14.264685+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:15.265228+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:16.265609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:17.265995+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:18.266748+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:19.267158+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:20.267625+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:21.268098+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:22.268359+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 52764672 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:23.268760+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:24.269144+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:25.269683+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:26.269948+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:27.270237+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:28.270461+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:29.270649+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:30.270865+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:31.271204+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:32.271609+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:33.272020+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:34.272496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:35.272904+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:36.273143+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:37.273641+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 52756480 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:38.274061+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:39.274845+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:40.275094+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:41.275497+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:42.275896+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:43.276335+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:44.276846+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:45.277308+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:46.277671+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:47.278971+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 52748288 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:48.279447+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:49.279687+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:50.279886+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:51.280224+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:52.280446+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:53.280791+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 52740096 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:54.281254+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:55.281615+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:56.282000+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:57.282445+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:58.282742+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:30:59.283053+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:00.283922+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:01.284561+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:02.285009+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 52731904 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:03.285651+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:04.286074+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:05.286640+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:06.287116+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:07.287623+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:08.288183+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:09.288551+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:10.288933+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:11.289296+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:12.289815+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:13.290341+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:14.290767+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:15.291230+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:16.291745+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 52723712 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:17.292311+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 52715520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:18.292684+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 52715520 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:19.293194+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:20.293742+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:21.294059+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:22.294490+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:23.294946+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:24.295303+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:25.295654+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:26.295990+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456859 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:27.296357+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:28.296887+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9252000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 590.225341797s of 590.251586914s, submitted: 9
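The _kv_sync_thread line reports idle time over a measurement window; the complement is the fraction the thread spent committing RocksDB transaction batches. For the sample above the thread was essentially asleep:

```python
# Values copied from the _kv_sync_thread utilization line above.
idle, window, submitted = 590.225341797, 590.251586914, 9
busy = window - idle
print(f"busy {busy:.3f}s of {window:.1f}s "
      f"({busy / window:.4%}), {submitted} batches submitted")
# -> busy 0.026s of 590.3s (0.0044%), 9 batches submitted
```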
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:29.297248+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:30.297661+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:31.298141+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 52707328 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456066 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:32.298639+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 53379072 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:33.299452+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9253000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 53354496 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:34.299672+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 53338112 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:35.299910+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 53338112 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9253000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:36.300110+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 53338112 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456051 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:37.300794+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 53288960 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:38.301015+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 53288960 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.059270859s of 10.131833076s, submitted: 87
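By contrast, this second sample covers only a 10.13 s window with 5.06 s idle (roughly 50% busy) and 87 batches submitted; the jump from the near-zero utilization above likely reflects metadata commits driven by the burst of mon/mgr and admin-socket activity surrounding it, though the log does not attribute the writes directly.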
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:39.301324+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 53288960 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:40.301496+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131366912 unmapped: 53264384 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9253000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:41.301692+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131366912 unmapped: 53264384 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455979 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:42.301891+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 53256192 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:43.302177+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 53256192 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9253000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:44.302692+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 53256192 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:45.302890+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 53256192 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:46.303151+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131375104 unmapped: 53256192 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9253000/0x0/0x4ffc00000, data 0x1f3032c/0x200b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 20:32:20 compute-0 ceph-osd[206053]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 20:32:20 compute-0 ceph-osd[206053]: bluestore.MempoolThread(0x55b415503b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455979 data_alloc: 218103808 data_used: 11845632
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:47.303430+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131571712 unmapped: 53059584 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:48.303679+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 52862976 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:49.303940+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 53329920 heap: 184631296 old mem: 2845415832 new mem: 2845415832
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: tick
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_tickets
Oct 02 20:32:20 compute-0 ceph-osd[206053]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T20:31:50.304176+0000)
Oct 02 20:32:20 compute-0 ceph-osd[206053]: do_command 'log dump' '{prefix=log dump}'
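The do_command entries above are the OSD servicing admin-socket requests; the same requests can be issued by hand with `ceph daemon` on the host where the OSD's socket lives. The daemon name (osd.0) is the one this log implies; adjust for another cluster. A sketch driving the same five commands:

```python
import subprocess

# Re-issue the admin-socket commands the OSD just logged. Run on the OSD
# host; `ceph daemon osd.0 <cmd>` talks to the local .asok socket.
for cmd in ("config diff", "config show", "counter dump",
            "counter schema", "log dump"):
    out = subprocess.run(["ceph", "daemon", "osd.0", *cmd.split()],
                         capture_output=True, text=True, check=True)
    print(cmd, "->", len(out.stdout), "bytes")
```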
Oct 02 20:32:20 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.15991 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:20 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 20:32:20 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953215819' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 20:32:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399877990' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: from='client.15979 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: from='client.15983 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/953215819' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/399877990' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 20:32:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187029564' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 20:32:21 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 20:32:22 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 20:32:22 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047965307' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 20:32:22 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Oct 02 20:32:22 compute-0 ceph-mon[191910]: pgmap v2633: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
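The pgmap figures are consistent with the per-OSD store_statfs seen earlier: three OSDs at ~20 GiB each give the 60 GiB raw capacity, and 313 MiB used against 118 MiB of data is ~2.7x raw amplification, in the neighbourhood of 3-way replication plus per-OSD overhead — an inference from the numbers, not from the cluster's actual CRUSH rules:

```python
# Consistency check between pgmap and the per-OSD store_statfs above.
print(3 * 20, "GiB raw")                      # 60 GiB
print(f"{313 / 118:.2f}x raw amplification")  # ~2.65x
```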
Oct 02 20:32:22 compute-0 ceph-mon[191910]: from='client.15991 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2187029564' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 20:32:22 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 20:32:22 compute-0 ceph-mon[191910]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 20:32:22 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/2047965307' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 20:32:23 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16003 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:23 compute-0 nova_compute[355794]: 2025-10-02 20:32:23.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:23 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 20:32:23 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401958271' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 20:32:24 compute-0 ceph-mon[191910]: pgmap v2634: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Oct 02 20:32:24 compute-0 ceph-mon[191910]: from='client.16003 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:24 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1401958271' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 20:32:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 20:32:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665296286' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 20:32:24 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 20:32:24 compute-0 podman[500645]: 2025-10-02 20:32:24.553643755 +0000 UTC m=+0.147682278 container health_status 21d950feb891033fc64822d95987890ddb784641d5bc4c759a36fdcfd8902bfd (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Oct 02 20:32:24 compute-0 systemd[1]: Started Hostname Service.
Oct 02 20:32:24 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 20:32:24 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 20:32:24 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015654924' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 20:32:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3665296286' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 20:32:25 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1015654924' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 20:32:25 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 20:32:25 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321969550' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 20:32:25 compute-0 nova_compute[355794]: 2025-10-02 20:32:25.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:25 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16013 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 20:32:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082460860' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 20:32:26 compute-0 ceph-mon[191910]: pgmap v2635: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 20:32:26 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/321969550' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 20:32:26 compute-0 ceph-mon[191910]: from='client.16013 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:26 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 20:32:26 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 20:32:26 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328380630' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 20:32:27 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16019 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4082460860' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 20:32:27 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1328380630' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 20:32:27 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 20:32:27 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619246215' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 20:32:27 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16023 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:28 compute-0 ceph-mon[191910]: pgmap v2636: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 20:32:28 compute-0 ceph-mon[191910]: from='client.16019 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:28 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1619246215' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 20:32:28 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16025 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:28 compute-0 nova_compute[355794]: 2025-10-02 20:32:28.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:28 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 20:32:28 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 20:32:28 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100369419' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 20:32:29 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 20:32:29 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595720121' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 20:32:29 compute-0 ceph-mon[191910]: from='client.16023 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:29 compute-0 ceph-mon[191910]: from='client.16025 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4100369419' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 20:32:29 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3595720121' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 20:32:29 compute-0 nova_compute[355794]: 2025-10-02 20:32:29.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:29 compute-0 nova_compute[355794]: 2025-10-02 20:32:29.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:32:29 compute-0 nova_compute[355794]: 2025-10-02 20:32:29.575 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:32:29 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16031 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:29 compute-0 podman[157186]: time="2025-10-02T20:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:32:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46266 "" "Go-http-client/1.1"
Oct 02 20:32:29 compute-0 podman[157186]: @ - - [02/Oct/2025:20:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16033 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513950275118838 of space, bias 1.0, pg target 0.16541850825356513 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 02 20:32:30 compute-0 nova_compute[355794]: 2025-10-02 20:32:30.464 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:32:30 compute-0 nova_compute[355794]: 2025-10-02 20:32:30.465 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquired lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:32:30 compute-0 nova_compute[355794]: 2025-10-02 20:32:30.465 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:32:30 compute-0 nova_compute[355794]: 2025-10-02 20:32:30.465 2 DEBUG nova.objects.instance [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:32:30 compute-0 ceph-mon[191910]: pgmap v2637: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 20:32:30 compute-0 nova_compute[355794]: 2025-10-02 20:32:30.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:30 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 20:32:30 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935316994' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 20:32:30 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:32:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 20:32:31 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046901472' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 20:32:31 compute-0 ceph-mon[191910]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 20:32:31 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16039 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:31 compute-0 openstack_network_exporter[372736]: ERROR   20:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:32:31 compute-0 openstack_network_exporter[372736]: ERROR   20:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:32:31 compute-0 openstack_network_exporter[372736]: ERROR   20:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:32:31 compute-0 openstack_network_exporter[372736]: ERROR   20:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:32:31 compute-0 openstack_network_exporter[372736]: ERROR   20:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:32:31 compute-0 ceph-mon[191910]: from='client.16031 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:31 compute-0 ceph-mon[191910]: from='client.16033 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1935316994' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 20:32:31 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3046901472' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 20:32:31 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16041 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.114 2 DEBUG nova.network.neutron [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updating instance_info_cache with network_info: [{"id": "24e0cf3f-162d-4105-9362-fc5616a6815a", "address": "fa:16:3e:6b:e8:fe", "network": {"id": "6e3c6c60-2fbc-4181-942a-00056fc667f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c35486f37b94d43a7bf2f2fa09c70b9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e0cf3f-16", "ovs_interfaceid": "24e0cf3f-162d-4105-9362-fc5616a6815a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.142 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Releasing lock "refresh_cache-d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.143 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] [instance: d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.143 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.144 2 DEBUG nova.compute.manager [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.144 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.172 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.173 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.173 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.173 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.174 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:32:32.356 285790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:32:32.357 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:32:32 compute-0 ovn_metadata_agent[285768]: 2025-10-02 20:32:32.357 285790 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:32:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 20:32:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1681679376' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:32:32 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:32:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:32:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626271228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:32:32 compute-0 ceph-mon[191910]: pgmap v2638: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 20:32:32 compute-0 ceph-mon[191910]: from='client.16039 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:32 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1681679376' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.789 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.874 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.875 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:32:32 compute-0 nova_compute[355794]: 2025-10-02 20:32:32.875 2 DEBUG nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 20:32:32 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 20:32:32 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992192731' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.233 2 WARNING nova.virt.libvirt.driver [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.236 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3435MB free_disk=59.955204010009766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.236 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.237 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:32:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 20:32:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490124417' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.337 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Instance d4e04444-ce39-4fb0-af9c-8fd9b0e6fb77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.338 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.338 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.385 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:32:33 compute-0 ovs-appctl[502035]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:32:33 compute-0 ovs-appctl[502052]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:32:33 compute-0 ovs-appctl[502066]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:32:33 compute-0 podman[502065]: 2025-10-02 20:32:33.680630543 +0000 UTC m=+0.110725198 container health_status 308ebdf2f39411b0fbfb38d75fe92bca37fa10d598f73feb83a321a30fc5d431 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] scanning for idle connections..
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: [volumes INFO mgr_util] cleaning up connections: []
Oct 02 20:32:33 compute-0 ceph-mgr[192222]: log_channel(audit) log [DBG] : from='client.16051 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:33 compute-0 podman[502108]: 2025-10-02 20:32:33.75745754 +0000 UTC m=+0.080629646 container health_status b21eaff6b095734d8f1f8c2cf5dbf0fad218ec6e43010822475155e8feb3f8ca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 02 20:32:33 compute-0 ceph-mon[191910]: from='client.16041 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 20:32:33 compute-0 ceph-mon[191910]: pgmap v2639: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:32:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/626271228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:32:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/992192731' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 20:32:33 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/3490124417' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 20:32:33 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 20:32:33 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057471999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.836 2 DEBUG oslo_concurrency.processutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.844 2 DEBUG nova.compute.provider_tree [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.873 2 DEBUG nova.scheduler.client.report [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Inventory has not changed for provider 9d5f6e5d-658d-4616-b5da-8b0a4093afb0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.875 2 DEBUG nova.compute.resource_tracker [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:32:33 compute-0 nova_compute[355794]: 2025-10-02 20:32:33.876 2 DEBUG oslo_concurrency.lockutils [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:32:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 20:32:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578136418' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:32:34 compute-0 nova_compute[355794]: 2025-10-02 20:32:34.307 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:34 compute-0 nova_compute[355794]: 2025-10-02 20:32:34.308 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:34 compute-0 nova_compute[355794]: 2025-10-02 20:32:34.574 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:34 compute-0 nova_compute[355794]: 2025-10-02 20:32:34.575 2 DEBUG oslo_service.periodic_task [None req-cbabd259-fce4-4373-ba99-b2754ffeb925 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:32:34 compute-0 ceph-mgr[192222]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 118 MiB data, 313 MiB used, 60 GiB / 60 GiB avail
Oct 02 20:32:34 compute-0 ceph-mon[191910]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 20:32:34 compute-0 ceph-mon[191910]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675179688' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 20:32:34 compute-0 sudo[502332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 20:32:34 compute-0 ceph-mon[191910]: from='client.16051 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 20:32:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/4057471999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 20:32:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/1578136418' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 20:32:34 compute-0 ceph-mon[191910]: from='client.? 192.168.122.100:0/675179688' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 20:32:34 compute-0 sudo[502332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
